Totally optimal decision rules
Amin, Talha M.; Moshkov, Mikhail
2017-11-22
Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is minimized for optimality. Another, coverage, represents the breadth of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee, for each tuple of values of the function, that totally optimal rules exist for each row of the table (as in the case of total Boolean functions, where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.
Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions
Alsolami, Fawaz
2016-04-25
‘If-then’ rule sets are one of the most expressive and human-readable knowledge representations. This thesis deals with optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important areas of application are knowledge extraction and representation. The benefit of considering inhibitory rules is connected with the fact that in some situations they can describe more knowledge than decision rules can. Decision tables with many-valued decisions arise in combinatorial optimization, computational geometry, fault diagnosis, and especially in the processing of data sets. In this thesis, various examples of real-life problems are considered which help to understand the motivation of the investigation. We extend relatively simple results obtained earlier for decision rules over decision tables with many-valued decisions to the case of inhibitory rules. The behavior of Shannon functions (which characterize the complexity of rule systems) is studied for finite and infinite information systems, for global and local approaches, and for decision and inhibitory rules. The extensions of dynamic programming for the study of decision rules over decision tables with single-valued decisions are generalized to the case of decision tables with many-valued decisions. These results are also extended to the case of inhibitory rules. As a result, we have algorithms (i) for multi-stage optimization of rules relative to such criteria as length or coverage, (ii) for counting the number of optimal rules, (iii) for construction of Pareto optimal points for bi-criteria optimization problems, (iv) for construction of graphs describing relationships between two cost functions, and (v) for construction of graphs describing relationships between cost and accuracy of rules. The applications of the created tools include comparison (based on information about Pareto optimal points) of greedy heuristics for bi-criteria optimization of rules.
Classifiers based on optimal decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-11-25
Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary single-criterion optimization (length or coverage).
Dynamic programming approach for partial decision rule optimization
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2012-10-04
This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to the length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
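The uncertainty measure J(T) described in this abstract is easy to compute directly. The following is a minimal sketch under assumed conventions (the table encoding and the function name `uncertainty_j` are illustrative assumptions, not from the paper):

```python
from collections import Counter

def uncertainty_j(rows):
    """J(T): number of rows in the table minus the number of rows
    carrying the most common decision. J(T) == 0 means the subtable
    is pure and needs no further partitioning."""
    if not rows:
        return 0
    decision_counts = Counter(decision for _, decision in rows)
    return len(rows) - max(decision_counts.values())

# Toy decision table: each row is (attribute tuple, decision).
table = [((0, 1), "a"), ((1, 1), "a"), ((1, 0), "a"), ((0, 0), "b")]
print(uncertainty_j(table))  # 4 rows - 3 with decision "a" = 1
```

A γ-decision rule may then stop partitioning any subtable whose J value is at most γ.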
Optimization of decision rule complexity for decision tables with many-valued decisions
Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail
2013-10-01
We describe new heuristics that construct decision rules for decision tables with many-valued decisions and that perform well from the point of view of length and coverage. We use a statistical test to find leaders among the heuristics. After that, we compare our results with the optimal results obtained by dynamic programming algorithms. The average percentage of relative difference between the length (coverage) of constructed and optimal rules is at most 6.89% (15.89%, respectively) for the leaders, which seems to be a promising result. © 2013 IEEE.
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-02-01
This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
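The measure R(T) used in this abstract can likewise be computed naively by enumerating row pairs. A small sketch (the encoding and the function name are assumptions for illustration):

```python
from itertools import combinations

def uncertainty_r(decisions):
    """R(T): number of unordered pairs of rows whose decisions differ.
    `decisions` holds one decision label per row of the table."""
    return sum(1 for d1, d2 in combinations(decisions, 2) if d1 != d2)

# Four rows labeled a, a, b, c: the differing pairs are
# (a,b) twice, (a,c) twice, and (b,c) once.
print(uncertainty_r(["a", "a", "b", "c"]))  # 5
```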
Optimization of β-decision rules relative to number of misclassifications
Zielosko, Beata
2012-01-01
In the paper, we present an algorithm for optimization of approximate decision rules relative to the number of misclassifications. The considered algorithm is based on extensions of dynamic programming and constructs a directed acyclic graph Δβ(T). Based on this graph we can describe the whole set of so-called irredundant β-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 Springer-Verlag.
Optimization of approximate decision rules relative to number of misclassifications
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2012-01-01
In the paper, we study an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.
Dynamic Programming Approach for Exact Decision Rule Optimization
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-01-01
This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.
Optimization of decision rules based on dynamic programming approach
Zielosko, Beata
2014-01-14
This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
Design and Analysis of Decision Rules via Dynamic Programming
Amin, Talha M.
2017-04-24
The areas of machine learning, data mining, and knowledge representation have many different formats used to represent information. Decision rules, amongst these formats, are the most expressive and most easily understood by humans. In this thesis, we use dynamic programming to design decision rules and analyze them. The use of dynamic programming allows us to work with decision rules in ways that were previously only possible for brute force methods. Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set to only contain rules that are optimal with respect to selected criteria. One way that we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) with regard to Boolean functions. Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with regard to multiple criteria). We also utilize Pareto optimal points to compare and rate greedy heuristics with regard to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty, which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.
Optimization and analysis of decision trees and rules: Dynamic programming approach
Alkhalid, Abdulaziz; Amin, Talha M.; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail; Zielosko, Beata
2013-08-01
This paper is devoted to the consideration of the software system Dagger, created at KAUST. This system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between the number of misclassifications and the depth of decision trees), and between the cost and uncertainty of decision trees. We describe features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.
Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J
2011-05-01
Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
Rules of Thumb in Life-Cycle Saving Decisions
Winter, Joachim; Schlafmann, Kathrin; Rodepeter, Ralf
2011-01-01
We analyse life-cycle saving decisions when households use simple heuristics, or rules of thumb, rather than solve the underlying intertemporal optimization problem. We simulate life-cycle saving decisions using three simple rules and compute utility losses relative to the solution of the optimization problem. Our simulations suggest that utility losses induced by following simple decision rules are relatively low. Moreover, the two main saving motives reflected by the canonical life-cyc...
Amin, Talha
2013-01-01
In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
Directory of Open Access Journals (Sweden)
Mitsuhiro Nakamura
2016-07-01
In strategic situations, humans infer the state of mind of others, e.g., emotions or intentions, adapting their behavior appropriately. Nonetheless, evolutionary studies of cooperation typically focus only on reaction norms, e.g., tit for tat, whereby individuals make their next decisions by only considering the observed outcome rather than focusing on their opponent’s state of mind. In this paper, we analyze repeated two-player games in which players explicitly infer their opponent’s unobservable state of mind. Using Markov decision processes, we investigate optimal decision rules and their performance in cooperation. The state-of-mind inference requires Bayesian belief calculations, which are computationally intensive. We therefore study two models in which players simplify these belief calculations. In Model 1, players adopt a heuristic to approximately infer their opponent’s state of mind, whereas in Model 2, players use information regarding their opponent’s previous state of mind, obtained from external evidence, e.g., emotional signals. We show that players in both models reach almost optimal behavior through commitment-like decision rules by which players are committed to selecting the same action regardless of their opponent’s behavior. These commitment-like decision rules can enhance or reduce cooperation depending on the opponent’s strategy.
Decision rules for decision tables with many-valued decisions
Chikalov, Igor; Zielosko, Beata
2011-01-01
In the paper, the authors present a greedy algorithm for construction of exact and partial decision rules for decision tables with many-valued decisions. Exact decision rules can be 'over-fitted', so instead of exact decision rules with many attributes, it is more appropriate to work with partial decision rules with a smaller number of attributes. Based on results for the set cover problem, the authors study bounds on the accuracy of the greedy algorithm for exact and partial decision rule construction, and the complexity of the problem of minimization of decision rule length. © 2011 Springer-Verlag.
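The bounds in this abstract rest on the classical greedy heuristic for set cover, which at each step picks the set covering the most still-uncovered elements. A generic sketch of that heuristic (not the paper's rule-construction algorithm itself; the data is illustrative):

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy set cover: repeatedly choose the subset that
    covers the largest number of still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            break  # remaining elements are not coverable
        chosen.append(best)
        uncovered -= set(best)
    return chosen

print(greedy_set_cover(range(5), [[0, 1, 2], [2, 3], [3, 4], [1, 4]]))
# [[0, 1, 2], [3, 4]]
```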
Optimal offering and operating strategies for wind-storage systems with linear decision rules
DEFF Research Database (Denmark)
Ding, Huajie; Pinson, Pierre; Hu, Zechun
2016-01-01
The participation of wind farm-energy storage systems (WF-ESS) in electricity markets calls for an integrated view of day-ahead offering strategies and real-time operation policies. Such an integrated strategy is proposed here by co-optimizing offering at the day-ahead stage and the operation policy to be used at the balancing stage. Linear decision rules are seen as a natural approach to model and optimize the real-time operation policy. These allow enhancing profits from balancing markets based on updated information on prices and wind power generation. Our integrated strategies for WF...
Optimization of inhibitory decision rules relative to length and coverage
Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2012-01-01
The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to the length and coverage. In contrast with usual rules, which have a relation "attribute = value" on the right-hand side, inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. © 2012 Springer-Verlag.
Do Group Decision Rules Affect Trust? A Laboratory Experiment on Group Decision Rules and Trust
DEFF Research Database (Denmark)
Nielsen, Julie Hassing
2016-01-01
Enhanced participation has been prescribed as the way forward for improving democratic decision making while generating positive attributes like trust. Yet we do not know the extent to which rules affect the outcome of decision making. This article investigates how different group decision rules affect group trust by testing three ideal types of decision rules (i.e., a Unilateral rule, a Representative rule and a 'Non-rule') in a laboratory experiment. The article shows significant differences between the three decision rules on trust after deliberation. Interestingly, however, it finds that non-hierarchical decision-making procedures enhance trust vis-à-vis other more hierarchical decision-making procedures.
Optimization of Approximate Inhibitory Rules Relative to Number of Misclassifications
Alsolami, Fawaz
2013-10-04
In this work, we consider so-called nonredundant inhibitory rules, containing an expression "attribute ≠ value" on the right-hand side, for which the number of misclassifications is at most a threshold γ. We study a dynamic programming approach for describing the considered set of rules. This approach also allows the optimization of nonredundant inhibitory rules relative to the length and coverage. The aim of this paper is to investigate an additional possibility of optimization relative to the number of misclassifications. The results of experiments with decision tables from the UCI Machine Learning Repository show that this additional optimization achieves fewer misclassifications. Thus, the proposed optimization procedure is promising.
Optimal Sequential Rules for Computer-Based Instruction.
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Sendi, Pedram; Al, Maiwenn J; Gafni, Amiram; Birch, Stephen
2004-05-01
Bridges and Terris (Soc. Sci. Med. (2004)) critique our paper on the alternative decision rule of economic evaluation in the presence of uncertainty and constrained resources within the context of a portfolio of health care programs (Sendi et al. Soc. Sci. Med. 57 (2003) 2207). They argue that by not adopting a formal portfolio theory approach we overlook the optimal solution. We show that these arguments stem from a fundamental misunderstanding of the alternative decision rule of economic evaluation. In particular, the portfolio theory approach advocated by Bridges and Terris is based on the same theoretical assumptions that the alternative decision rule set out to relax. Moreover, Bridges and Terris acknowledge that the proposed portfolio theory approach may not identify the optimal solution to resource allocation problems. Hence, it provides neither theoretical nor practical improvements to the proposed alternative decision rule.
Simultaneous Optimization of Decisions Using a Linear Utility Function.
Vos, Hans J.
1990-01-01
An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)
Decision mining revisited - Discovering overlapping rules
Mannhardt, Felix; De Leoni, Massimiliano; Reijers, Hajo A.; Van Der Aalst, Wil M P
2016-01-01
Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,
Rule-based decision making model
International Nuclear Information System (INIS)
Sirola, Miki
1998-01-01
A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on e.g. value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
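The Bayes update mentioned in this abstract can be sketched as a discrete posterior recalculation (the hypotheses and numbers below are illustrative, not from the cited model):

```python
def bayes_update(prior, likelihood):
    """Recompute probabilities when new evidence arrives:
    P(H | E) is proportional to P(E | H) * P(H), then normalized."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Component availability before an alarm, and P(alarm | state).
prior = {"available": 0.9, "failed": 0.1}
likelihood = {"available": 0.2, "failed": 0.8}
posterior = bayes_update(prior, likelihood)  # belief in failure rises
```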
Conformance Testing: Measurement Decision Rules
Mimbs, Scott M.
2010-01-01
The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement
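Decision rules of the kind discussed above are often implemented in practice as guard-banded acceptance tests, where the specification limits are tightened by a guard band to limit false-accept risk. The sketch below is a generic illustration of that idea; the limits and guard value are hypothetical and not taken from the paper:

```python
def conforms(measured, lower, upper, guard=0.0):
    """Guarded-acceptance decision rule: accept only if the measured value
    lies inside the specification limits shrunk by a guard band.
    guard=0.0 reproduces the simple shared-risk (no guard band) rule."""
    return (lower + guard) <= measured <= (upper - guard)

# Hypothetical spec [0.0, 10.0] with a guard band of 1.0:
# a reading of 9.5 passes the simple rule but fails the guarded rule.
```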
Decision Mining Revisited – Discovering Overlapping Rules
Mannhardt, F.; de Leoni, M.; Reijers, H.A.; van der Aalst, W.M.P.
2016-01-01
Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,
Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining
Hussain, Shahid
2016-01-01
This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining
Hussain, Shahid
2016-07-10
This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis
Chikalov, Igor
2017-10-19
This paper is devoted to the study of bi-criteria optimization problems for decision trees. We consider different cost functions such as depth, average depth, and number of nodes. We design algorithms that allow us to construct the set of Pareto optimal points (POPs) for a given decision table and the corresponding bi-criteria optimization problem. These algorithms are suitable for investigation of medium-sized decision tables. We discuss three examples of applications of the created tools: the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees, and comparison of different greedy algorithms for decision tree construction as single- and bi-criteria optimization algorithms.
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor
2013-01-01
In this paper, we present empirical results on the relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree, i.e., one which is optimal with respect to both depth and number of nodes.
Relationships among various parameters for decision tree optimization
Hussain, Shahid
2014-01-14
In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this chapter have already been implemented and are now part of Dagger, a software system for construction and optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of experiments, presented in the chapter, provide further insight. © 2014 Springer International Publishing Switzerland.
Relationships among various parameters for decision tree optimization
Hussain, Shahid
2014-01-01
In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented in this chapter have already been implemented and are now part of Dagger, a software system for construction and optimization of decision trees and decision rules. The main results presented in this chapter deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Secondly, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of experiments, presented in the chapter, provide further insight. © 2014 Springer International Publishing Switzerland.
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to
Transformative decision rules, permutability, and non-sequential framing of decision problems
Peterson, M.B.
2004-01-01
The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the 'framing', or 'problem specification', or 'editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability. A
International Nuclear Information System (INIS)
Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.
2014-01-01
Components in engineered systems are subjected to stochastic deterioration due to the operating environmental conditions and the uncertainty in material properties. The components need to be inspected and possibly replaced based on preventive or failure replacement criteria to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on optimal maintenance decisions. We use a copula to model the dependent stochastic degradation of components, and formulate the optimal decision problem based on the minimum expected cost rule and stochastic dominance rules. The latter are used to cope with the decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples.
The optimum decision rules for the oddity task.
Versfeld, N J; Dai, H; Green, D M
1996-01-01
This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur with "roved" or "fixed" experiments (Macmillan & Creelman, 1991, p. 147). It is shown that the commonly used decision rule for an m-interval oddity task corresponds to the special case of highly correlated observations. However, as is also true for the same-different paradigm, there exists a different optimum decision rule when the observations are independent. The relation between the probability of a correct response and d' is derived for the three-interval oddity task. Tables are presented of this relation for the three-, four-, and five-interval oddity task. Finally, an experimental method is proposed that allows one to determine the decision rule used by the observer in an oddity experiment.
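The "commonly used decision rule" mentioned in the abstract (the one appropriate for highly correlated observations) can be sketched as picking the interval whose observation lies farthest from the mean of all observations. The observation values below are hypothetical:

```python
def oddity_choice(observations):
    """Commonly used m-interval oddity rule: choose the interval whose
    observation is farthest from the mean of all m observations.
    Versfeld, Dai & Green show this rule is optimal only for highly
    correlated observations; independent observations call for a
    different optimum rule."""
    mean = sum(observations) / len(observations)
    return max(range(len(observations)),
               key=lambda i: abs(observations[i] - mean))

# Three-interval example: the third interval holds the odd (larger) signal.
chosen = oddity_choice([0.1, -0.2, 1.3])  # -> index 2
```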
Decision rules and group rationality: cognitive gain or standstill?
Directory of Open Access Journals (Sweden)
Petru Lucian Curşeu
Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects as compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential avenues for further research on the emergence of collective rationality in human decision-making groups.
Decision rules and group rationality: cognitive gain or standstill?
Curşeu, Petru Lucian; Jansen, Rob J G; Chappin, Maryse M H
2013-01-01
Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects as compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential avenues for further research on the emergence of collective rationality in human decision-making groups.
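The two operationalizations of cognitive synergy described above, gain over the average member and gain over the best member, amount to simple differences. A minimal sketch with hypothetical rationality scores:

```python
def weak_synergy(group_score, member_scores):
    """Gain of the group over the average rationality of its members."""
    return group_score - sum(member_scores) / len(member_scores)

def strong_synergy(group_score, member_scores):
    """Gain of the group over the rationality of its best member."""
    return group_score - max(member_scores)

# Hypothetical scores: the group beats its members' average (weak synergy
# positive) but not its best member (strong synergy negative), mirroring
# the pattern reported in the abstract.
members = [4.0, 6.0, 7.0]
group = 6.5
```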
Robust Management of Combined Heat and Power Systems via Linear Decision Rules
DEFF Research Database (Denmark)
Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik
2014-01-01
The heat and power outputs of Combined Heat and Power (CHP) units are jointly constrained. Hence, the optimal management of systems including CHP units is a multicommodity optimization problem. Problems of this type are stochastic, owing to the uncertainty inherent both in the demand for heat and...... linear decision rules to guarantee both tractability and a correct representation of the dynamic aspects of the problem. Numerical results from an illustrative example confirm the value of the proposed approach....
Directory of Open Access Journals (Sweden)
Dustin G. Mark
2015-10-01
Introduction: Application of a clinical decision rule for subarachnoid hemorrhage, in combination with cranial computed tomography (CT) performed within six hours of ictus (early cranial CT), may be able to reasonably exclude a diagnosis of aneurysmal subarachnoid hemorrhage (aSAH). This study’s objective was to examine the sensitivity of both early cranial CT and a previously validated clinical decision rule among emergency department (ED) patients with aSAH and a normal mental status. Methods: Patients were evaluated in the 21 EDs of an integrated health delivery system between January 2007 and June 2013. We identified by chart review a retrospective cohort of patients diagnosed with aSAH in the setting of a normal mental status and performance of early cranial CT. Variables comprising the SAH clinical decision rule (age >40, presence of neck pain or stiffness, headache onset with exertion, loss of consciousness at headache onset) were abstracted from the chart and assessed for inter-rater reliability. Results: One hundred fifty-five patients with aSAH met study inclusion criteria. The sensitivity of early cranial CT was 95.5% (95% CI [90.9-98.2]). The sensitivity of the SAH clinical decision rule was also 95.5% (95% CI [90.9-98.2]). Since all false negative cases for each diagnostic modality were mutually independent, the combined use of both early cranial CT and the clinical decision rule improved sensitivity to 100% (95% CI [97.6-100.0]). Conclusion: Neither early cranial CT nor the SAH clinical decision rule demonstrated ideal sensitivity for aSAH in this retrospective cohort. However, the combination of both strategies might optimize sensitivity for this life-threatening disease.
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Assessing predation risk: optimal behaviour and rules of thumb.
Welton, Nicky J; McNamara, John M; Houston, Alasdair I
2003-12-01
We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
Azad, Mohammad
2016-10-20
The paper is devoted to the study of a greedy algorithm for the construction of approximate decision rules. This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider bounds on the precision of this algorithm relative to the length of rules. To illustrate the proposed approach, we study a problem of recognition of labels of points in the plane. This paper also contains results of experiments with modified decision tables from the UCI Machine Learning Repository.
Azad, Mohammad; Moshkov, Mikhail; Zielosko, Beata
2016-01-01
The paper is devoted to the study of a greedy algorithm for the construction of approximate decision rules. This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider bounds on the precision of this algorithm relative to the length of rules. To illustrate the proposed approach, we study a problem of recognition of labels of points in the plane. This paper also contains results of experiments with modified decision tables from the UCI Machine Learning Repository.
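A greedy construction of a decision rule for one row of a table with many-valued decisions can be sketched as below. The toy table and the selection criterion (remove as many conflicting rows as possible per added condition) are illustrative assumptions, not the authors' exact algorithm; it also assumes no conflicting row has identical attribute values to the target row, so the loop terminates:

```python
def greedy_rule(rows, decisions, target):
    """Greedily build a decision rule for row `target`.
    rows: list of attribute-value tuples; decisions: list of sets of
    decisions (many-valued decisions: one set per row).
    Returns (conditions, decision), where conditions is a list of
    (attribute_index, value) pairs taken from the target row."""
    r = rows[target]
    d = min(decisions[target])  # pick any decision from the row's set
    # Rows that the rule must exclude: those whose set lacks decision d.
    negatives = [i for i, ds in enumerate(decisions) if d not in ds]
    conditions = []
    while negatives:
        # Add the condition (on a target-row value) excluding most negatives.
        best = max(range(len(r)),
                   key=lambda a: sum(1 for i in negatives if rows[i][a] != r[a]))
        conditions.append((best, r[best]))
        negatives = [i for i in negatives if rows[i][best] == r[best]]
    return conditions, d

# Toy table: two attributes, three rows, decision sets per row.
conds, d = greedy_rule([(0, 0), (0, 1), (1, 0)], [{1}, {2}, {2}], 0)
```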
Decision rule classifiers for multi-label decision tables
Alsolami, Fawaz
2014-01-01
Recently, the multi-label classification problem has received significant attention in the research community. This paper is devoted to studying the effect of the considered rule heuristic parameters on the generalization error. The results of experiments for decision tables from the UCI Machine Learning Repository and the KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than strategies taking into account a single criterion. © 2014 Springer International Publishing.
The optimum decision rules for the oddity task
Versfeld, N.J.; Dai, H.; Green, D.M.
1996-01-01
This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur
Business Rules Definition for Decision Support System Using Matrix Grammar
Directory of Open Access Journals (Sweden)
Eva Zámečníková
2016-06-01
This paper deals with the formalization of business rules by formal grammars. In our work we focus on methods for high-frequency data processing. We process data using complex event processing (CEP) platforms, which allow high volumes of data to be processed in nearly real time. The decision-making process constitutes one level of CEP processing, and business rules are used to describe it. For the formalization of business rules we chose a matrix grammar. The use of formal grammars is quite natural, as the structure of rules and their rewriting is very similar for business rules and for formal grammars. In addition, a matrix grammar allows dependencies and correlations between the rules to be simulated. The result of this work is a model for the data processing of a knowledge-based decision support system described by the rules of a formal grammar. This system will support decision making in CEP. This solution may contribute to speeding up the decision-making process in complex event processing, and also to the formal verification of these systems.
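The central idea, that a matrix grammar rewrites with whole sequences (matrices) of productions applied atomically and can thereby express dependencies between rules, can be sketched as follows. The toy grammar is illustrative, not from the paper:

```python
def apply_matrix(s, matrix):
    """Apply each production (lhs, rhs) of the matrix, in order, to string s.
    The matrix is atomic: if any production is not applicable (its left-hand
    side is absent), the whole matrix fails and None is returned.
    Each production rewrites the leftmost occurrence of its lhs."""
    for lhs, rhs in matrix:
        if lhs not in s:
            return None
        s = s.replace(lhs, rhs, 1)
    return s

# Toy matrix grammar for the non-context-free language a^n b^n c^n:
# growing and terminating productions for A, B, C are tied together,
# so the three counts always change in lockstep.
m_grow = [("A", "aA"), ("B", "bB"), ("C", "cC")]
m_stop = [("A", "a"), ("B", "b"), ("C", "c")]
step1 = apply_matrix("ABC", m_grow)   # "aAbBcC"
step2 = apply_matrix(step1, m_stop)   # "aabbcc"
```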
Directory of Open Access Journals (Sweden)
Atif Shahzad
2016-02-01
A promising approach for effective shop scheduling that synergizes the benefits of combinatorial optimization, supervised learning and discrete-event simulation is presented. Though dispatching rules are widely used by shop scheduling practitioners, only rules of ordinary performance are known; hence, dynamic generation of dispatching rules is desired to make them more effective under changing shop conditions. Meta-heuristics are able to perform quite well and carry more knowledge of the problem domain, however at the cost of prohibitive computational effort in real time. The primary purpose of this research lies in an offline extraction of this domain knowledge using decision trees to generate simple if-then rules that subsequently act as dispatching rules for scheduling in an online manner. We used a similarity index to identify parametric and structural similarity in problem instances, in order to implicitly support the learning algorithm for effective rule generation, and a quality index for relative ranking of the dispatching decisions. Maximum lateness is used as the scheduling objective in a job shop scheduling environment.
Decision rule classifiers for multi-label decision tables
Alsolami, Fawaz; Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail
2014-01-01
for decision tables from UCI Machine Learning Repository and KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than the strategies taking into account a single criterion. © 2014 Springer International
Testing Decision Rules for Multiattribute Decision Making
Seidl, C.; Traub, S.
1996-01-01
This paper investigates the existence of an editing phase and studies the compliance of subjects' behaviour with the most popular multiattribute decision rules. We observed that our data comply well with the existence of an editing phase, at least if we allow for a natural error rate of some 25%.
An overview of bipolar qualitative decision rules
Bonnefon, Jean-Francois; Dubois, Didier; Fargier, Hélène
Making a good decision is often a matter of listing and comparing positive and negative arguments, as studies in cognitive psychology have shown. In such cases, the evaluation scale should be considered bipolar, that is, negative and positive values are explicitly distinguished. Generally, positive and negative features are evaluated separately, as done in Cumulative Prospect Theory. However, contrary to the latter framework that presupposes genuine numerical assessments, decisions are often made on the basis of an ordinal ranking of the pros and the cons, and focusing on the most salient features, i.e., the decision process is qualitative. In this paper, we report on a project aiming at characterizing several decision rules, based on possibilistic order of magnitude reasoning, and tailored for the joint handling of positive and negative affects, and at testing their empirical validity. The simplest rules can be viewed as extensions of the maximin and maximax criteria to the bipolar case and, like them, suffer from a lack of discrimination power. More decisive rules that refine them are also proposed. They account for both the principle of Pareto-efficiency and the notion of order of magnitude reasoning. The most decisive one uses a lexicographic ranking of the pros and cons. It comes down to a special case of Cumulative Prospect Theory, and subsumes the “Take the best” heuristic.
Reservoir Operating Rule Optimization for California's Sacramento Valley
Directory of Open Access Journals (Sweden)
Timothy Nelson
2016-03-01
doi: http://dx.doi.org/10.15447/sfews.2016v14iss1art6. Reservoir operating rules for water resource systems are typically developed by combining intuition, professional discussion, and simulation modeling. This paper describes a joint optimization–simulation approach to develop preliminary economically-based operating rules for major reservoirs in California’s Sacramento Valley, based on optimized results from CALVIN, a hydro-economic optimization model. We infer strategic operating rules from the optimization model results, including storage allocation rules to balance storage among multiple reservoirs, and reservoir release rules to determine monthly releases for individual reservoirs. Results show the potential utility of considering the previous year's water-availability type and various system and sub-system storage conditions, in addition to the normal consideration of local reservoir storage, season, and current inflows. We create a simple simulation to further refine and test the derived operating rules. Optimization model results show particular insights for balancing the allocation of water storage among Shasta, Trinity, and Oroville reservoirs over drawdown and refill seasons, as well as some insights for release rules at major reservoirs in the Sacramento Valley. We also discuss the applicability and limitations of developing reservoir operation rules from optimization model results.
Concurrent approach for evolving compact decision rule sets
Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.
1999-02-01
The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data that were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.
Rough set and rule-based multicriteria decision aiding
Directory of Open Access Journals (Sweden)
Roman Slowinski
2012-08-01
The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.
WINE ADVISOR EXPERT SYSTEM USING DECISION RULES
Directory of Open Access Journals (Sweden)
Dinuca Elena Claudia
2013-07-01
In this article I focus on developing an expert system for advising the choice of wine that best matches a specific occasion. An expert system is a computer application that performs a task that would otherwise be performed by a human expert. The implementation is done using the Delphi programming language. The knowledge base is represented as a set of rules. The rules are IF-THEN-ELSE decision rules based on important wine features.
Decision Analysis of Dynamic Spectrum Access Rules
Energy Technology Data Exchange (ETDEWEB)
Juan D. Deaton; Luiz A. DaSilva; Christian Wernz
2011-12-01
A current trend in spectrum regulation is to incorporate spectrum sharing through the design of spectrum access rules that support Dynamic Spectrum Access (DSA). This paper develops a decision-theoretic framework for regulators to assess the impacts of different decision rules on both primary and secondary operators. We analyze access rules based on sensing and exclusion areas, which in practice can be enforced through geolocation databases. Our results show that receiver-only sensing provides insufficient protection for primary and co-existing secondary users and overall low social welfare. On the other hand, using sensing information from both the transmitter and receiver of a communication link provides dramatic increases in system performance. The performance of using these link end points is relatively close to that of using many cooperative sensing nodes associated with the same access point and large link exclusion areas. These results are useful to regulators and network developers in understanding and developing rules for future DSA regulation.
Directory of Open Access Journals (Sweden)
Thair M. Al-Taiee
2013-05-01
To obtain optimal operating rules for storage reservoirs, a large number of simulation and optimization models have been developed over the past several decades, which vary significantly in their mechanisms and applications. Rule curves are guidelines for long-term reservoir operation. An efficient technique is required to find the optimal rule curves that can mitigate water shortage in long-term operation. A Genetic Algorithm (GA) technique, an optimization approach based on the mechanics of natural selection derived from the theory of natural evolution, was developed and applied to predict the daily rule curve of the Mosul regulating reservoir in Iraq. Recorded daily inflows, outflows, and reservoir water levels for 19 years (1986-1990 and 1994-2007) were used in the developed model for assessing the optimal reservoir operation. The objective function is set to minimize the annual sum of squared deviations from the desired downstream release and the desired storage volume in the reservoir. The decision variables are the releases, storage volume, water level, and outlet (demand) from the reservoir. The results of the GA model agreed well with the actual rule curve and the designed rating curve of the reservoir. The simulated results show that GA-derived policies are promising and competitive and can be effectively used for daily reservoir operation, in addition to rational monthly operation, and for predicting the rating curve of reservoirs.
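A minimal sketch of the kind of GA described above, on invented inflow and demand data (not the paper's Mosul reservoir records): chromosomes are vectors of monthly releases, and fitness penalizes squared deviation from demand plus violations of assumed storage bounds.

```python
import random

random.seed(0)

# All data are illustrative, not from the paper's Mosul reservoir records.
INFLOW = [60, 80, 120, 150, 90, 40, 20, 10, 15, 25, 35, 50]   # monthly inflows
DEMAND = [50, 50, 60, 70, 80, 90, 90, 80, 60, 50, 50, 50]     # desired releases
S0, S_MIN, S_MAX = 200.0, 50.0, 400.0                          # storage limits

def cost(releases):
    """Sum of squared deviations from demand, plus a penalty for storage violations."""
    s, c = S0, 0.0
    for q, d, r in zip(INFLOW, DEMAND, releases):
        s += q - r
        c += (r - d) ** 2
        if s < S_MIN or s > S_MAX:
            c += 1e4 + 100.0 * max(S_MIN - s, s - S_MAX)
    return c

def evolve(pop_size=40, gens=300, mut=0.2):
    pop = [[random.uniform(0.0, 150.0) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            for i in range(12):
                if random.random() < mut:
                    child[i] = max(0.0, child[i] + random.gauss(0.0, 10.0))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
```

In this toy instance the demand schedule itself is feasible (cost(DEMAND) is zero), so the GA should drive the penalty term to zero and track demand closely; a real application would replace the toy penalty with the reservoir simulation model.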
Online learning algorithm for ensemble of decision rules
Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach.
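A rough illustration of greedy rule construction of the sort mentioned above, on a made-up decision table (the paper's MDL bookkeeping is omitted): each step adds the condition from the target row that eliminates the most rows carrying a different decision, until the rule is exact.

```python
# Toy decision table: each row is (tuple of attribute values, decision).
TABLE = [
    ((1, 0, 0), "yes"),
    ((1, 1, 0), "yes"),
    ((0, 1, 1), "no"),
    ((1, 0, 1), "no"),
    ((0, 0, 0), "yes"),
]

def greedy_rule(row_idx):
    """Greedily build an exact decision rule for one row of TABLE."""
    values, decision = TABLE[row_idx]
    conds = []                       # list of (attribute index, value) conditions
    covered = list(range(len(TABLE)))
    while any(TABLE[i][1] != decision for i in covered):
        # pick the condition of this row that eliminates most conflicting rows
        best = max(
            (a for a in range(len(values)) if (a, values[a]) not in conds),
            key=lambda a: sum(1 for i in covered
                              if TABLE[i][1] != decision
                              and TABLE[i][0][a] != values[a]),
        )
        conds.append((best, values[best]))
        covered = [i for i in covered if TABLE[i][0][best] == values[best]]
    return conds, decision
```

For the first row this yields the single-condition rule "attribute 2 = 0 → yes", which already separates all conflicting rows.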
Length and coverage of inhibitory decision rules
Alsolami, Fawaz
2012-01-01
The authors present algorithms for optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.
Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study
Azad, Mohammad; Zielosko, Beata; Moshkov, Mikhail; Chikalov, Igor
2013-01-01
In this paper, we present three approaches for the construction of decision rules for decision tables with many-valued decisions. We construct decision rules directly for rows of the decision table, based on paths in a decision tree, and based on attributes contained in a test (super-reduct). Experimental results for data sets taken from the UCI Machine Learning Repository contain a comparison of the maximum and the average length of rules for the three approaches.
Directory of Open Access Journals (Sweden)
Helena Gaspars-Wieloch
2014-12-01
The paper concerns multicriteria decision making under uncertainty with scenario planning. This topic is explored by many researchers because almost all real-world decision problems have multiple conflicting criteria and a deterministic criteria evaluation is often impossible (e.g. mergers and acquisitions, new product development). We propose two procedures for uncertain multi-objective optimization (for dependent and independent criteria matrices) which are based on the SAPO method, a modification of Hurwicz's rule for one-criterion problems recently presented in another paper. The new approaches take into account the decision maker's preference structure and attitude towards risk. They consider the frequency and the level of extreme evaluations and generate logical rankings for symmetric and asymmetric distributions. The application of the suggested tool is illustrated with an example of marketing strategy selection.
Relationships between length and coverage of decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2014-01-01
The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on the dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.
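The kind of relationship such a tool tabulates can be imitated by exhaustive search on a toy table (invented data): for each possible rule length, the best coverage of an exact rule for a fixed row.

```python
from itertools import combinations

# Toy decision table: (tuple of attribute values, decision).
TABLE = [
    ((0, 0, 1), "y"),
    ((0, 1, 1), "y"),
    ((1, 0, 0), "n"),
    ((0, 0, 0), "y"),
    ((1, 1, 1), "n"),
]

def coverage_by_length(row_idx):
    """For each rule length l, the maximum coverage of an exact rule
    for row row_idx using exactly l conditions (None if no exact rule)."""
    values, decision = TABLE[row_idx]
    n = len(values)
    out = {}
    for l in range(n + 1):
        best = None
        for attrs in combinations(range(n), l):
            matched = [d for v, d in TABLE
                       if all(v[a] == values[a] for a in attrs)]
            if all(d == decision for d in matched):      # exact rule
                best = len(matched) if best is None else max(best, len(matched))
        out[l] = best
    return out
```

For the first row this gives {0: None, 1: 3, 2: 2, 3: 1}: the minimum length (1) and the maximum coverage (3) are attained by the same rule, i.e. a totally optimal rule exists for this row.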
Alsolami, Fawaz
2013-01-01
This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to length, coverage, and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach. The results of experiments for decision tables from the UCI Machine Learning Repository are discussed. © 2013 Springer-Verlag.
Comparison of Heuristics for Inhibitory Rule Optimization
Alsolami, Fawaz
2014-09-13
Knowledge representation and extraction are very important tasks in data mining. In this work, we propose a variety of rule-based greedy algorithms that are able to express the knowledge contained in a given dataset as a series of inhibitory rules, which contain an expression "attribute ≠ value" on the right-hand side. The main goal of this paper is to determine, based on the rule characteristics of length and coverage, whether the proposed rule heuristics are statistically significantly different; if so, we aim to identify the best-performing heuristics for minimization of rule length and maximization of rule coverage. The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained with the dynamic programming approach. The results seem to be promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three best-performing heuristics from the point of view of rule length minimization.
Mathematical optimization of incore nuclear fuel management decisions: Status and trends
International Nuclear Information System (INIS)
Turinsky, P.J.
1999-01-01
Nuclear fuel management involves making decisions about the number of fresh assemblies to purchase and their attributes (e.g. enrichment and burnable poison loading), the burnt fuel to reinsert, the location of the assemblies in the core (i.e. the loading pattern (LP)), and the insertion of control rods as a function of cycle exposure (i.e. the control rod pattern (CRP)). The out-of-core and incore nuclear fuel management problems denote an artificial separation of decisions to simplify the decision-making. The out-of-core problem involves multicycle analysis so that levelized fuel cycle cost can be evaluated, whereas the incore problem normally involves single-cycle analysis. Decision variables for the incore problem normally include all of the above-noted decisions with the exception of the number of fresh assemblies, which is restricted by discharge burnup limits and therefore involves multicycle considerations. This paper reports on the progress that is being made in addressing the incore nuclear fuel management problem utilizing formal mathematical optimization methods. Advances in utilizing the Simulated Annealing, Genetic Algorithm, and Tabu Search methods, with applications to pressurized and boiling water reactor incore optimization problems, are reviewed. Recent work on the addition of multiobjective optimization capability to aid the decision maker, the utilization of heuristic rules, and the incorporation of parallel algorithms to increase computational efficiency is discussed. (orig.)
Consultation system with knowledge representation by decision rules
Energy Technology Data Exchange (ETDEWEB)
Senne, E L.F.; Simoni, P O
1982-04-01
The use of decision rules to represent empirical knowledge supplied by application-domain experts is discussed. Based on this representation, a system is described which employs artificial intelligence techniques to draw inferences within a specific domain. Three modules composing the system are described: the acquisition module, which allows the insertion of new rules; the diagnostic module, which uses the rules in the inference process; and the explanation module, which exhibits the reasons for each system action.
Azad, Mohammad
2017-06-16
We study problems of optimization of decision and inhibitory trees for decision tables with many-valued decisions. As cost functions, we consider depth, average depth, number of nodes, and number of terminal/nonterminal nodes in trees. Decision tables with many-valued decisions (multi-label decision tables) are often more accurate models for real-life data sets than usual decision tables with single-valued decisions. Inhibitory trees can sometimes capture more information from decision tables than decision trees. In this paper, we create dynamic programming algorithms for multi-stage optimization of trees relative to a sequence of cost functions. We apply these algorithms to prove the existence of totally optimal (simultaneously optimal relative to a number of cost functions) decision and inhibitory trees for some modified decision tables from the UCI Machine Learning Repository.
Rules of thumb in life-cycle savings models
Rodepeter, Ralf; Winter, Joachim
1999-01-01
We analyze life-cycle savings decisions when households use simple heuristics, or rules of thumb, rather than solve the underlying intertemporal optimization problem. The decision rules we explore are a simple Keynesian rule where consumption follows income; a simple consumption rule where only a fraction of positive income shocks is saved; a rule that corresponds to the permanent income hypothesis; and two rules that have been found in experimental studies. Using these rules, we simulate life-cycle savings decisions.
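A minimal simulation in the spirit of the comparison above, with invented income parameters (the paper's calibrated rules are not reproduced): wealth paths under a Keynesian consume-everything rule versus a fixed-savings-fraction rule.

```python
import random

random.seed(1)
T, r = 40, 0.02                                        # working years, interest rate (assumed)
income = [30_000 + 500 * t + random.gauss(0, 2_000) for t in range(T)]

def simulate(save_rule):
    """Accumulate wealth when period savings follow a simple rule of thumb."""
    wealth = 0.0
    for t, y in enumerate(income):
        s = save_rule(t, y, wealth)                    # amount saved this period
        wealth = wealth * (1 + r) + s
    return wealth

keynesian = lambda t, y, w: 0.0                        # consumption follows income: save nothing
fixed_fraction = lambda t, y, w: 0.10 * y              # save a constant 10% of income

w_keynes = simulate(keynesian)
w_frac = simulate(fixed_fraction)
```

The Keynesian household retires with zero wealth, while the fixed-fraction saver ends with at least 10% of lifetime income (more, thanks to compounding); richer rules such as saving only out of positive income shocks slot in as further `save_rule` lambdas.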
Optimal Detection under the Restricted Bayesian Criterion
Directory of Open Access Journals (Sweden)
Shujun Liu
2017-07-01
This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is considered that the maximum conditional risk cannot be greater than a predefined value. Therefore, the objective of this paper becomes to find the optimal decision rule to minimize the Bayes risk under the constraint. By applying Lagrange duality, the constrained optimization problem is transformed to an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, the numerical results including a detection example are presented and agree with the theoretical results.
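A numerically worked toy version of the transformation described above (all distributions invented): the restricted rule is found as a Bayes rule for some modified prior, selected here by sweeping the prior and keeping the feasible rule with the smallest Bayes risk under the nominal prior. Randomized rules and the duality details are omitted.

```python
# Hypothetical discrete observation model over {0, 1, 2}.
P0 = {0: 0.7, 1: 0.2, 2: 0.1}    # likelihoods under H0
P1 = {0: 0.1, 1: 0.3, 2: 0.6}    # likelihoods under H1
PRIOR1 = 0.9                     # nominal prior probability of H1
ALPHA = 0.5                      # bound on the maximum conditional risk

def risks(rule):
    """rule maps observation -> decided hypothesis (0 or 1); 0-1 loss."""
    r0 = sum(p for x, p in P0.items() if rule[x] == 1)   # false-alarm risk
    r1 = sum(p for x, p in P1.items() if rule[x] == 0)   # miss risk
    return r0, r1

def restricted_bayes():
    best_rule, best_risk = None, float("inf")
    for i in range(101):                                 # sweep the modified prior
        q1 = i / 100
        rule = {x: 1 if q1 * P1[x] > (1 - q1) * P0[x] else 0 for x in P0}
        r0, r1 = risks(rule)
        if max(r0, r1) <= ALPHA:                         # restricted-risk constraint
            bayes = (1 - PRIOR1) * r0 + PRIOR1 * r1      # risk under nominal prior
            if bayes < best_risk:
                best_rule, best_risk = rule, bayes
    return best_rule, best_risk

best_rule, best_risk = restricted_bayes()
```

With these numbers the unconstrained Bayes rule for the nominal prior 0.9 always decides H1 (conditional risk 1 under H0, violating the constraint), whereas the restricted rule decides H1 only for observations 1 and 2, at Bayes risk 0.12.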
Portable Rule Extraction Method for Neural Network Decisions Reasoning
Directory of Open Access Journals (Sweden)
Darius PLIKYNAS
2005-08-01
Neural network (NN) methods are sometimes useless in practical applications because they are not properly tailored to a particular market's needs. We focus hereinafter specifically on financial market applications, where NNs have not yet gained full acceptance. One of the main reasons is the "black box" problem (the lack of explanatory power of NN decisions). There are some NN decision rule extraction methods, such as decompositional, pedagogical, or eclectic ones, but they suffer from low portability of the rule extraction technique across various neural net architectures, a high level of granularity, algorithmic sophistication of the rule extraction technique, etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is exposed through the use of a widespread MLP neural net (a common tool in the financial problem domain) and a SOM (for input data space clusterization). The performance of both nets is related and targeted through the iteration cycle by achieving the best match between the decision space fragments and the input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation on common financial benchmark problems is conducted with an appropriately prepared software solution.
Application of stochastic optimization to nuclear power plant asset management decisions
International Nuclear Information System (INIS)
Morton, D.; Koc, A.; Hess, S. M.
2013-01-01
We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self-evident: to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or for the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision-making process. (authors)
Application of stochastic optimization to nuclear power plant asset management decisions
Energy Technology Data Exchange (ETDEWEB)
Morton, D. [Graduate Program in Operations Research and Industrial Engineering, University of Texas at Austin, Austin, TX, 78712 (United States); Koc, A. [IBM T.J. Watson Research Center, Business Analytics and Mathematical Sciences Dept., 1101 Kitchawan Rd., Yorktown Heights, NY, 10598 (United States); Hess, S. M. [Electric Power Research Institute, 300 Baywood Road, West Chester, PA, 19382 (United States)
2013-07-01
We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self-evident: to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or for the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision-making process. (authors)
Unanimity rule and organizational decision-making : a simulation model
Romme, A.G.L.
2004-01-01
Unanimity rule is an important benchmark for evaluating outcomes of decisions in the social sciences. However, organizational researchers tend to ignore unanimous decision making, for example, because unanimity may be difficult to realize in large groups and may suffer from individual participants
Optimization of conventional rule curves coupled with hedging rules for reservoir operation
DEFF Research Database (Denmark)
Taghian, Mehrdad; Rosbjerg, Dan; Haghighi, Ali
2014-01-01
As a common approach to reservoir operating policies, water levels at the end of each time interval should be kept at or above the rule curve. In this study, the policy is captured using rationing of the target yield to reduce the intensity of severe water shortages. For this purpose, a hybrid model is developed to optimize simultaneously both the conventional rule curve and the hedging rule. In the compound model, a simple genetic algorithm is coupled with a simulation program, including an inner linear programming algorithm. In this way, operational policies are imposed by priority concepts to achieve the optimal water allocation and the target storage levels for reservoirs. As a case study, a multipurpose, multireservoir system in southern Iran is selected. The results show that the model has good performance in extracting the optimum policy for reservoir operation under both normal...
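The interplay of a target yield and a hedging rule can be sketched as follows (single reservoir, invented numbers; the paper's GA/LP machinery is not reproduced): when storage drops below a trigger, releases are rationed so that shortages are spread out rather than concentrated in one deep failure.

```python
# Illustrative single-reservoir parameters (not from the paper's case study).
CAPACITY, DEAD = 300.0, 30.0      # storage bounds
TARGET = 60.0                     # target yield per period
TRIGGER, RATION = 120.0, 0.5      # hedging: ration releases below TRIGGER storage

def operate(inflows, s0=150.0, hedge=True):
    s, releases = s0, []
    for q in inflows:
        goal = TARGET
        if hedge and s < TRIGGER:
            goal = RATION * TARGET             # release less now to avoid failure later
        r = min(goal, max(0.0, s + q - DEAD))  # cannot draw below dead storage
        s = min(CAPACITY, s + q - r)           # anything above capacity spills
        releases.append(r)
    return releases

DRY = [20, 20, 20, 20, 50, 80]                 # a dry spell followed by recovery
no_hedge = operate(DRY, hedge=False)
with_hedge = operate(DRY, hedge=True)
worst = lambda rel: max(TARGET - r for r in rel)
```

Without hedging the reservoir runs down to dead storage and the worst single-period shortage is 40; with hedging the worst shortage is capped at 30, which is why severity-sensitive objectives favour hedging even though it creates more frequent small shortages.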
A Swarm Optimization approach for clinical knowledge mining.
Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A
2015-10-01
Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal ruleset that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA, and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.
Rule Optimization monthly reservoir operation Salvajina
International Nuclear Information System (INIS)
Sandoval Garcia, Maria Clemencia; Santacruz Salazar, Santiago; Ramirez Callejas, Carlos A
2007-01-01
In the present study, a model was designed for the optimization of the monthly operation rule of the Salvajina dam (Colombia) based on the technique of dynamic programming. The model maximizes the benefits of electric power generation while ensuring flood regulation in winter and pollution relief during the summer. For the optimization of the rule of operation, it was necessary to define the levels and volumes of reserve and holding required for the control of flood zones along the Cauca river, to provide a minimal effluent flow, and to assure a daily flow at the Juanchito station of the Cauca river (located 141 km downstream from the dam) 90% of the time during the most critical summer periods.
46 CFR 201.3 - Authentication of rules, orders, determinations and decisions of the Administration.
2010-10-01
46 Shipping 8, 2010-10-01. Section 201.3, Shipping, MARITIME ADMINISTRATION, DEPARTMENT OF... Authentication of rules, orders, determinations and decisions of the Administration. All rules...
A simple threshold rule is sufficient to explain sophisticated collective decision-making.
Directory of Open Access Journals (Sweden)
Elva J H Robinson
Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
A simple threshold rule is sufficient to explain sophisticated collective decision-making.
Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R
2011-01-01
Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
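The threshold rule at the heart of the argument can be sketched in a few lines (thresholds, qualities, and noise level are all invented): each scout samples nests at random and commits to the first whose noisily perceived quality clears a fixed threshold. No individual ever compares two nests, yet the colony overwhelmingly settles on the better one.

```python
import random

random.seed(2)
THRESHOLD = 5.0        # innate acceptance threshold (arbitrary quality units)
NOISE = 1.0            # std. dev. of perception noise

def choose(qualities, n_scouts=1000):
    """Each scout wanders among nests and commits to the first one
    whose perceived quality clears the threshold (no comparisons)."""
    votes = [0] * len(qualities)
    for _ in range(n_scouts):
        while True:
            i = random.randrange(len(qualities))
            if qualities[i] + random.gauss(0.0, NOISE) >= THRESHOLD:
                votes[i] += 1
                break
        # a real scout would eventually give up; omitted for brevity
    return votes

votes = choose([6.0, 3.0])    # good nest vs poor nest
```

With these numbers a scout accepts the good nest on roughly 84% of visits and the poor one on about 2%, so well over nine in ten scouts end up recruiting to the good nest, mimicking an effective collective comparison.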
Totally optimal decision trees for Boolean functions
Chikalov, Igor
2016-07-28
We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.
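The notion of a totally optimal tree can be illustrated by brute force on a tiny Boolean function; this exhaustive sketch stands in for the paper's dynamic-programming tools, which scale much further. For f(x0, x1) = x0 AND x1, the set of achievable (depth, number-of-nodes) pairs collapses to a single Pareto point, so one tree is optimal for both parameters at once:

```python
from itertools import product

def pareto(pairs):
    """Keep only non-dominated (depth, nodes) pairs."""
    return {p for p in pairs
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pairs)}

def tree_costs(rows, free):
    """All Pareto-optimal (depth, node count) pairs over decision trees
    computing the sub-function given by `rows` (pairs of (input, output))."""
    if len({o for _, o in rows}) == 1:
        return {(0, 1)}                      # one leaf suffices
    costs = set()
    for i in free:                           # choose a root attribute
        branches = {}
        for x, o in rows:
            branches.setdefault(x[i], []).append((x, o))
        child = [tree_costs(tuple(b), free - {i}) for b in branches.values()]
        for combo in product(*child):
            costs.add((1 + max(d for d, _ in combo),
                       1 + sum(n for _, n in combo)))
    return pareto(costs)

# f(x0, x1) = x0 AND x1
rows = tuple(((a, b), a & b) for a in (0, 1) for b in (0, 1))
costs = tree_costs(rows, frozenset({0, 1}))
d_min = min(d for d, _ in costs)
n_min = min(n for _, n in costs)
totally_optimal = (d_min, n_min) in costs    # True for AND
```

For AND the only Pareto-optimal cost pair is (2, 5): a depth-2 tree with five nodes achieves both minima simultaneously, so it is totally optimal relative to depth and number of nodes.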
International Nuclear Information System (INIS)
Shen, Peihong; Zhao, Zhiguo; Zhan, Xiaowen; Li, Jingwei
2017-01-01
In this paper, an energy management strategy based on logic thresholds is proposed for a plug-in hybrid electric vehicle (PHEV). The PHEV powertrain model is established in MATLAB/Simulink from experimental tests of the power components, and is validated by comparison with a verified simulation model built in AVL Cruise. The influence of the driving torque demand decision on PHEV fuel economy is studied in simulation. An optimization method for the driving torque demand decision, i.e., the relationship between accelerator pedal opening and driving torque demand, is formulated from the perspective of fuel economy. Particle swarm optimization with a dynamically changing inertia weight is used to optimize the decision parameters. The simulation results show that the optimized driving torque demand decision can improve PHEV fuel economy by 15.8% and 14.5% in the New European Driving Cycle and the Worldwide Harmonized Light Vehicles Test cycle, respectively, using the same rule-based energy management strategy. The proposed optimization method provides a theoretical guide for calibrating the parameters of the driving torque demand decision to improve the fuel economy of a real plug-in hybrid electric vehicle. - Highlights: • The influence of the driving torque demand decision on fuel economy is studied. • An optimization method for the driving torque demand decision is formulated. • An improved particle swarm optimization is utilized to optimize the parameters. • Fuel economy is improved by using the optimized driving torque demand decision.
Investigating decision rules with a new experimental design: the EXACT paradigm
Biscione, Valerio; Harris, Christopher M.
2015-01-01
In the decision-making field, it is important to distinguish between the perceptual process (how information is collected) and the decision rule (the strategy governing decision-making). We propose a new paradigm, called EXogenous ACcumulation Task (EXACT) to disentangle these two components. The paradigm consists of showing a horizontal gauge that represents the probability of receiving a reward at time t and increases with time. The participant is asked to press a button when they want to request a reward. Thus, the perceptual mechanism is hard-coded and does not need to be inferred from the data. Based on this paradigm, we compared four decision rules (Bayes Risk, Reward Rate, Reward/Accuracy, and Modified Reward Rate) and found that participants appeared to behave according to the Modified Reward Rate. We propose a new way of analysing the data by using the accuracy of responses, which can only be inferred in classic RT tasks. Our analysis suggests that several experimental findings such as RT distribution and its relationship with experimental conditions, usually deemed to be the result of a rise-to-threshold process, may be simply explained by the effect of the decision rule employed. PMID:26578916
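A plain Reward Rate rule from the paradigm above can be sketched numerically; the Modified Reward Rate favored by the data adds refinements not shown here, and the linear gauge shape and the ramp/inter-trial-interval values are assumptions.

```python
import numpy as np

def reward_rate(t, ramp=2.0, iti=3.0):
    """Expected rewards per unit time when pressing at time t: the gauge
    value p(t) (reward probability, ramping linearly to 1) divided by the
    total time spent, including the inter-trial interval."""
    p = np.clip(t / ramp, 0.0, 1.0)
    return p / (t + iti)

ts = np.linspace(0.01, 10.0, 2000)
t_best = ts[np.argmax(reward_rate(ts))]
```

Under these assumptions the rate is maximized by pressing just as the gauge saturates, i.e., near t = 2: waiting longer only adds time, while pressing earlier sacrifices reward probability.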
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
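The collapsing-boundary idea can be sketched with a minimal Euler-discretized drift diffusion simulation. The exponential bound shape and the drift and noise values are illustrative assumptions; the paper derives the optimal bound trajectory rather than assuming one.

```python
import numpy as np

def ddm_trial(drift, rng, dt=0.005, sigma=1.0, b0=1.5, tau=2.0, t_max=4.0):
    """One drift-diffusion trial with a collapsing bound
    b(t) = b0 * exp(-t / tau)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        bound = b0 * np.exp(-t / tau)
        if x >= bound:
            return 1, t          # committed to the higher-value option
        if x <= -bound:
            return -1, t
    return 0, t_max              # no commitment before the deadline

rng = np.random.default_rng(0)
outcomes = [ddm_trial(0.8, rng) for _ in range(200)]
p_correct = np.mean([c == 1 for c, _ in outcomes])
```

With a positive drift the upper bound is reached on most trials, and the shrinking bound forces progressively earlier, less conservative commitments as time passes.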
Application of decision rules for empowering of Indonesian telematics services SMEs
Tosida, E. T.; Hairlangga, O.; Amirudin, F.; Ridwanah, M.
2018-03-01
The independence of the field of telematics became one of Indonesia's vision in 2024. One effort to achieve it can be done by empowering SMEs in the field of telematics. Empowerment carried out need a practical mechanism by utilizing data centered, including through the National Economic Census database (Susenas). Based on the Susenas can be formulated the decision rules of determining the provision of assistance for SMEs in the field of telematics. The way it did by generating the rule base through the classification technique. The CART algorithm-based decision rule model performs better than C45 and ID3 models. The high level of performance model is also in line with the regulations applied by the government. This becomes one of the strengths of research, because the resulting model is consistent with the existing conditions in Indonesia. The rules base generated from the three classification techniques show different rules. The CART technique has pattern matching with the realization of activities in The Ministry of Cooperatives and SMEs. So far, the government has difficulty in referring data related to the empowerment of SMEs telematics services. Therefore, the findings resulting from this research can be used as an alternative decision support system related to the program of empowerment of SMEs in telematics.
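The CART-style rule generation mentioned above rests on greedy impurity-minimizing splits. Below is a minimal sketch of that single split step on a hypothetical two-attribute table; the attribute names and labels are invented placeholders, not Susenas fields.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Pick the (feature, value) equality split minimizing weighted Gini,
    as in CART's greedy node construction."""
    n, best = len(labels), (None, None, gini(labels))
    for f in range(len(rows[0])):
        for v in {r[f] for r in rows}:
            left = [y for r, y in zip(rows, labels) if r[f] == v]
            right = [y for r, y in zip(rows, labels) if r[f] != v]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / n
            if w < best[2]:
                best = (f, v, w)
    return best

# Hypothetical records: (sector, size) -> assistance decision
rows = [("telecom", "small"), ("telecom", "large"),
        ("retail", "small"), ("retail", "large")]
labels = ["assist", "assist", "no", "no"]
f, v, impurity = best_split(rows, labels)
```

On this toy table the sector attribute (feature 0) separates the classes perfectly, so the split it yields has zero weighted impurity, which is why a greedy tree builder would select it first.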
Bressan, Silvia; Romanato, Sabrina; Mion, Teresa; Zanconato, Stefania; Da Dalt, Liviana
2012-07-01
Of the currently published clinical decision rules for the management of minor head injury (MHI) in children, the Pediatric Emergency Care Applied Research Network (PECARN) rule, derived and validated in a large multicenter prospective study cohort with high methodologic standards, appears to be the best clinical decision rule to accurately identify children at very low risk of clinically important traumatic brain injuries (ciTBI) in the pediatric emergency department (PED). This study describes the implementation of an adapted version of the PECARN rule in a tertiary care academic PED in Italy and evaluates implementation success, in terms of medical staff adherence and satisfaction, as well as its effects on clinical practice. The adapted PECARN decision rule algorithms for children (one for those younger than 2 years and one for those older than 2 years) were actively implemented in the PED of Padova, Italy, for a 6-month testing period. Adherence and satisfaction of medical staff to the new rule were calculated. Data from 356 visits for MHI during PECARN rule implementation and those of 288 patients attending the PED for MHI in the previous 6 months were compared for changes in computed tomography (CT) scan rate, ciTBI rate (defined as death, neurosurgery, intubation for longer than 24 hours, or hospital admission for at least two nights associated with TBI) and return visits for symptoms or signs potentially related to MHI. The safety and efficacy of the adapted PECARN rule in clinical practice were also calculated. Adherence to the adapted PECARN rule was 93.5%. The percentage of medical staff satisfied with the new rule, in terms of usefulness and ease of use for rapid decision-making, was significantly higher (96% vs. 51%). The safety of the use of the adapted PECARN rule in clinical practice was 100% (95% CI = 36.8 to 100; three of three patients with ciTBI who received CT scan at first evaluation), while efficacy was 92.3% (95% CI = 89 to 95; 326 of 353 patients without ciTBI).
Zhang, Jiaxiang; Kriegeskorte, Nikolaus; Carlin, Johan D; Rowe, James B
2013-07-17
Behavior is governed by rules that associate stimuli with responses and outcomes. Human and monkey studies have shown that rule-specific information is widely represented in the frontoparietal cortex. However, it is not known how establishing a rule under different contexts affects its neural representation. Here, we use event-related functional MRI (fMRI) and multivoxel pattern classification methods to investigate the human brain's mechanisms of establishing and maintaining rules for multiple perceptual decision tasks. Rules were either chosen by participants or specifically instructed to them, and the fMRI activation patterns representing rule-specific information were compared between these contexts. We show that frontoparietal regions differ in the properties of their rule representations during active maintenance before execution. First, rule-specific information maintained in the dorsolateral and medial frontal cortex depends on the context in which it was established (chosen vs specified). Second, rule representations maintained in the ventrolateral frontal and parietal cortex are independent of the context in which they were established. Furthermore, we found that the rule-specific coding maintained in anticipation of stimuli may change with execution of the rule: representations in context-independent regions remain invariant from maintenance to execution stages, whereas rule representations in context-dependent regions do not generalize to execution stage. The identification of distinct frontoparietal systems with context-independent and context-dependent task rule representations, and the distinction between anticipatory and executive rule representations, provide new insights into the functional architecture of goal-directed behavior.
Ant-based extraction of rules in simple decision systems over ontological graphs
Directory of Open Access Journals (Sweden)
Pancerz Krzysztof
2015-06-01
Full Text Available In the paper, the problem of extraction of complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with the dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, searching the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.
International Nuclear Information System (INIS)
Ma Chao; Lian Jijian; Wang Junna
2013-01-01
Highlights: ► Short-term optimal operation of Three-gorge and Gezhouba hydropower stations was studied. ► Key state variable and exact constraints were proposed to improve numerical model. ► Operation rules proposed were applied in population initiation step for faster optimization. ► Culture algorithm with difference evolution was selected as optimization method. ► Model and method proposed were verified by case study with feasible operation solutions. - Abstract: Information hidden in the characteristics and relationship data of a cascade hydropower stations can be extracted by data-mining approaches to be operation rules and optimization support information. In this paper, with Three-gorge and Gezhouba cascade hydropower stations as an example, two operation rules are proposed due to different operation efficiency of water turbines and tight water volume and hydraulic relationship between two hydropower stations. The rules are applied to improve optimization model with more exact decision and state variables and constraints. They are also used in the population initiation step to develop better individuals with culture algorithm with differential evolution as an optimization method. In the case study, total feasible population and the best solution based on an initial population with an operation rule can be obtained with a shorter computation time than that of a pure random initiated population. Amount of electricity generation in a dispatch period with an operation rule also increases with an average increase rate of 0.025%. For a fixed water discharge process of Three-gorge hydropower station, there is a better rule to decide an operation plan of Gezhouba hydropower station in which total hydraulic head for electricity generation is optimized and distributed with inner-plant economic operation considered.
Understanding Optimal Decision-making in Wargaming
Nesbitt, P; Kennedy, Q; Alt, JK; Fricker, RD; Whitaker, L; Yang, J; Appleget, JA; Huston, J; Patton, S
2013-01-01
Approved for public release; distribution is unlimited. This research aims to gain insight into optimal wargaming decision-making mechanisms using neurophysiological measures by investigating whether brain activation and visual scan patterns predict attention, perception, and/or decision-making errors through human-in-the-loop wargaming simulation experiments. We investigate whether brain activity and visual scan patterns can explain optimal wargaming decision making and its devel...
Optimization of tactical decisions: subjective and objective conditionality
Directory of Open Access Journals (Sweden)
Олег Юрійович Булулуков
2016-06-01
Full Text Available The article investigates the «human» and «objective» factors that influence the optimization of tactical decisions. Attention is focused on the dependence of the information obtained about the circumstances of a crime on the investigator's adoption of correct decisions. The connection between the efficiency of an investigation and the adoption of optimal tactical decisions is emphasized. The stated problem has not been sufficiently investigated in the literature. Some of its aspects are reflected in the works of D. А. Solodov, S. Yu. Yakushin and others. Certain questions related to the optimization of investigation and an investigator's decision-making are found in the works of R. S. Belkin, V. А. Juravel, V. Е. Konovalova, V. L. Sinchuk, B. V. Shur, and V. Yu. Shepitko. The aim of the article is to define the term «optimization» as it applies to tactical decisions in criminalistics, and to consider the influence of human and objective factors on the adoption of optimal decisions in crime investigation. The article considers the etymology of the term «optimization» and interprets it as applied to the adoption of tactical decisions. It identifies the types of human and objective factors that condition the optimization of tactical decisions, which in turn supports the effectiveness of crime investigation tactics. In considering the «human factors» influencing the optimization of decisions, attention is drawn to the «psychological traps» that can occur in decision-making, among them: anchoring; status quo; irreversible expenses; the desired versus the actual; incorrect formulation; overconfidence; over-insurance; and constancy of memory. The absence of an unambiguous list of the «objective factors» influencing the choice of a tactical decision is emphasized, and different understandings of «tactical risk» as a factor influencing the adoption of tactical decisions are discussed. The article concludes with an analysis of the «human» and «objective» factors influencing the choice of tactical decisions.
Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization
Directory of Open Access Journals (Sweden)
Yi Ji
2016-06-01
Full Text Available This study proposes a hedging rule model which is composed of a two-period reservoir operation model considering the damage depth and a hedging rule parameter optimization model. The former solves hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupling model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions achieved from this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization of the simulation and optimization model; and (3) in order to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study and a particle swarm optimization algorithm with a simulation model is adopted for optimizing the parameters. The results show that the proposed hedging rule can improve the operation performances of the water supply reservoir.
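The two-period trade-off behind a hedging rule can be sketched with a toy grid search: the current release balances the current-period supply deficit against missing the carryover storage target, with the water supply weighting factor w setting the balance. The quadratic loss and every number below are illustrative assumptions, not the paper's model.

```python
def hedge(storage, inflow, demand, w=0.6, carryover_target=40.0, steps=400):
    """Grid-search the current release that minimizes a weighted sum of
    squared current-period deficit and squared miss of the carryover target."""
    best_r, best_loss = 0.0, float("inf")
    for i in range(steps + 1):
        r = min(storage + inflow, demand) * i / steps   # feasible release
        carry = storage + inflow - r                    # water left for later
        deficit = demand - r
        loss = w * deficit ** 2 + (1 - w) * (carry - carryover_target) ** 2
        if loss < best_loss:
            best_r, best_loss = r, loss
    return best_r

full = hedge(storage=80.0, inflow=30.0, demand=50.0)    # ample water
short = hedge(storage=30.0, inflow=15.0, demand=50.0)   # drought conditions
```

When water is ample the rule releases the full demand, but under the drought inputs it holds back water (release near 32 against a demand of 50), which is exactly the rationing behavior a hedging rule formalizes.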
Optimized reaction mechanism rate rules for ignition of normal alkanes
Cai, Liming
2016-08-11
The increasing demand for cleaner combustion and reduced greenhouse gas emissions motivates research on the combustion of hydrocarbon fuels and their surrogates. Accurate detailed chemical kinetic models are an important prerequisite for high fidelity reacting flow simulations capable of improving combustor design and operation. The development of such models for many new fuel components and/or surrogate molecules is greatly facilitated by the application of reaction classes and rate rules. Accurate and versatile rate rules are desirable to improve the predictive accuracy of kinetic models. A major contribution in the literature is the recent work by Bugler et al. (2015), which has significantly improved rate rules and thermochemical parameters used in kinetic modeling of alkanes. In the present study, it is demonstrated that rate rules can be used and consistently optimized for a set of normal alkanes including n-heptane, n-octane, n-nonane, n-decane, and n-undecane, thereby improving the predictive accuracy for all the considered fuels. A Bayesian framework is applied in the calibration of the rate rules. The optimized rate rules are subsequently applied to generate a mechanism for n-dodecane, which was not part of the training set for the optimized rate rules. The developed mechanism shows accurate predictions compared with published well-validated mechanisms for a wide range of conditions.
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2013-04-01
Water resources systems are operated, mostly, using a set of pre-defined rules that respond, usually, not to an optimal allocation in terms of water use or economic benefits, but to historical and institutional reasons. These operating policies are commonly expressed as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems hardly ever follows the pre-defined rules all the time, with the consequent uncertainty on the system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated in a water resources management model, making it possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total
Combining Fuzzy AHP with GIS and Decision Rules for Industrial Site Selection
Directory of Open Access Journals (Sweden)
Aissa Taibi
2017-12-01
Full Text Available This study combines the Fuzzy Analytic Hierarchy Process (FAHP), Geographic Information Systems (GIS) and decision rules to provide decision makers with a ranking model for industrial sites in Algeria. Ranking the suitable industrial areas is a crucial multi-criteria decision problem based on socio-economic and technical criteria as well as on environmental considerations. Fuzzy AHP is used for assessment of the candidate industrial sites by combining fuzzy set theory and the analytic hierarchy process (AHP). The decision rule base serves as a filter that performs criteria pre-treatment involving a reduction of their number. GIS is used to overlay, to generate criteria maps and to visualize ranked zones on the map. The rank of a zone so obtained is an index that guides decision-makers to the best utilization of the zone in the future.
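The AHP weighting step at the core of FAHP can be sketched in its crisp form; the fuzzy extension replaces the pairwise judgments with triangular fuzzy numbers and adds defuzzification. The judgment matrix below is invented for illustration.

```python
import numpy as np

# Pairwise comparison matrix for three criteria (illustrative judgments):
# entry A[i, j] says how much more important criterion i is than j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Geometric-mean-of-rows approximation of the AHP priority vector
w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w /= w.sum()
```

The geometric-mean method is a standard approximation of the principal eigenvector of A; the resulting weights preserve the stated importance ordering of the criteria and sum to one.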
Orthogonal search-based rule extraction for modelling the decision to transfuse.
Etchells, T A; Harrison, M J
2006-04-01
Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb 13 mm and Hb 38 mm, Hb < 102 g x l(-1) and OGH; 4. Hb < 78 g x l(-1).
An Elite Decision Making Harmony Search Algorithm for Optimization Problem
Directory of Open Access Journals (Sweden)
Lipu Zhang
2012-01-01
Full Text Available This paper describes a new variant of the harmony search algorithm which is inspired by the well-known notion of “elite decision making.” In the new algorithm, the good information captured in the current global best and second best solutions is utilized to generate new solutions, following some probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until the near-optimal solution vector is obtained. Extensive computational comparisons are carried out by employing various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive in finding solutions with the state-of-the-art harmony search variants.
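The variant described above can be sketched directly: new vectors are assembled from components of the current best and second-best solutions and replace the worst member only when fitter. The 0.5 mixing probability, the 0.3 pitch-adjustment rate, and the sphere test function are illustrative assumptions, not the paper's settings.

```python
import random

def elite_hs(f, dim=5, pop=20, iters=20000, lo=-5.0, hi=5.0, seed=7):
    """'Elite decision making' harmony search sketch: each new vector mixes
    components of the current best and second-best solutions, with an
    occasional small random pitch adjustment; it replaces the worst member
    of the solution set only if it is fitter."""
    rng = random.Random(seed)
    sols = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        sols.sort(key=f)
        best, second = sols[0], sols[1]
        new = [b if rng.random() < 0.5 else s for b, s in zip(best, second)]
        if rng.random() < 0.3:               # pitch adjustment (rate assumed)
            j = rng.randrange(dim)
            new[j] += rng.uniform(-0.1, 0.1)
        if f(new) < f(sols[-1]):             # replace worst only if better
            sols[-1] = new
    return min(sols, key=f)

sphere = lambda x: sum(v * v for v in x)
x = elite_hs(sphere)
```

On this toy problem the elite mixing quickly collapses the population onto the incumbent best, after which the small random adjustments perform the remaining local refinement; the best-so-far fitness can never worsen because only the worst member is ever replaced.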
48 CFR 6101.27 - Relief from decision or order [Rule 27].
2010-10-01
... order [Rule 27]. (a) Grounds. The Board may relieve a party from the operation of a final decision or... discovered, even through due diligence; (2) Justifiable or excusable mistake, inadvertence, surprise, or neglect; (3) Fraud, misrepresentation, or other misconduct of an adverse party; (4) The decision has been...
Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification
Zhang, W.; Li, X.; Xiao, W.
2018-05-01
The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades. Increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Due to inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Meanwhile, object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to get the appropriate parameters; using the top-down region merge algorithm from the single pixel level, the optimal texture segmentation scale for different types of features was confirmed. Then, the segmented object is used as the classification unit to calculate spectral information such as the Mean value, Maximum value, Minimum value, Brightness value and the Normalized value. The Area, Length, Tightness and Shape rule of the image object's spatial features, and texture features such as the Mean, Variance and Entropy of image objects, are used as classification features of training samples. Based on the reference images and the sampling points of an on-the-spot investigation, typical training samples are selected uniformly and randomly for each type of ground object. The ranges of values of the spectral, texture and spatial characteristics of each type of feature in each feature layer are used to create the decision tree repository. Finally, with the help of high resolution reference images, the
Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew
2010-01-01
The considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory
Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model
Shulgin, A.
2015-01-01
Optimization of coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule make it possible to decrease volatility, estimated on Russian data for 2001-2012, by about 20%. The degree of exchange rate flexibility parameter was found to be low...
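For reference, the Taylor rule whose coefficients such exercises tune has the textbook form i = r* + π + a(π − π*) + b·gap. A one-line sketch with the classic default coefficient values follows; the DSGE-optimal coefficients estimated for Russia would differ.

```python
def taylor_rate(pi, gap, r_star=2.0, pi_star=2.0, a=0.5, b=0.5):
    """Nominal policy rate from the textbook Taylor rule; a and b are the
    response coefficients that rule-optimization exercises calibrate."""
    return r_star + pi + a * (pi - pi_star) + b * gap

rate = taylor_rate(pi=4.0, gap=1.0)   # -> 7.5 with these classic defaults
```

With 4% inflation and a 1% output gap, the rule prescribes a 7.5% rate: the more-than-one-for-one response to inflation (the Taylor principle) is what stabilizes the economy in such models.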
A tool for study of optimal decision trees
Alkhalid, Abdulaziz
2010-01-01
The paper describes a tool which allows us, for relatively small decision tables, to make consecutive optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters and the number of optimal decision trees. © 2010 Springer-Verlag Berlin Heidelberg.
Developing an optimal valve closing rule curve for real-time pressure control in pipes
Energy Technology Data Exchange (ETDEWEB)
Bazarganlari, Mohammad Reza; Afshar, Hossein [Islamic Azad University, Tehran (Iran, Islamic Republic of); Kerachian, Reza [University of Tehran, Tehran (Iran, Islamic Republic of); Bashiazghadi, Seyyed Nasser [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)
2013-01-15
Sudden valve closure in pipeline systems can cause high pressures that may lead to serious damage. Using an optimal valve closing rule can play an important role in managing extreme pressures during sudden valve closure. In this paper, an optimal closing rule curve is developed using a multi-objective optimization model and Bayesian networks (BNs) for controlling water pressure during valve closure, instead of traditional step functions or single linear functions. The method of characteristics is used to simulate transient flow caused by valve closure. The non-dominated sorting genetic algorithm II (NSGA-II) is also used to develop a Pareto front among three objectives related to the maximum and minimum water pressures and the amount of water that passes through the valve during the valve-closing process. Simulation and optimization processes are usually time-consuming, thus the results of the optimization model are used for training the BN. The trained BN is capable of determining optimal real-time closing rules without running costly simulation and optimization models. To demonstrate its efficiency, the proposed methodology is applied to a reservoir-pipe-valve system and the optimal closing rule curve is calculated for the valve. The results of the linear and BN-based valve closure rules show that the latter can significantly reduce the range of variations in water hammer pressures.
Barton, Michael
2016-03-14
We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.
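The optimality being generalized here starts from the classical Gaussian rule: n points integrate polynomials up to degree 2n − 1 exactly. A quick check with NumPy's Gauss-Legendre nodes (standard NumPy behavior, not the paper's spline-space rules):

```python
import numpy as np

# 4-point Gauss-Legendre rule on [-1, 1]: exact up to degree 2*4 - 1 = 7
n = 4
nodes, weights = np.polynomial.legendre.leggauss(n)

# Integrate x^6 over [-1, 1]; the exact value is 2/7
approx = weights @ nodes**6
exact = 2.0 / 7.0
```

The same 4-point rule is no longer exact one degree past 2n − 1 (e.g., for x^8), which is what makes the Gaussian point count optimal and motivates the search for analogous minimal-point rules in spline spaces.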
Barton, Michael; Calo, Victor M.
2016-01-01
We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees.
Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis
International Nuclear Information System (INIS)
Wang, M; Hu, N Q; Qin, G J
2011-01-01
In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that directly extracts optimal generalized decision rules from incomplete information based on GrC was proposed. Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of this method for knowledge acquisition was demonstrated.
Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis
Energy Technology Data Exchange (ETDEWEB)
Wang, M; Hu, N Q; Qin, G J, E-mail: hnq@nudt.edu.cn, E-mail: wm198063@yahoo.com.cn [School of Mechatronic Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)
2011-07-19
In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that directly extracts optimal generalized decision rules from incomplete information based on GrC was proposed. Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of this method for knowledge acquisition was demonstrated.
Determining rules for closing customer service centers: A public utility company's fuzzy decision
Dekorvin, Andre; Shipley, Margaret F.; Lea, Robert N.
1992-01-01
In the present work, we consider the general problem of knowledge acquisition under uncertainty. Simply stated, the problem reduces to the following: how can we capture the knowledge of an expert when the expert is unable to clearly formulate how he or she arrives at a decision? A commonly used method is to learn by examples. We observe how the expert solves specific cases and from this infer some rules by which the decision may have been made. Unique to our work is the fuzzy set representation of the conditions or attributes upon which the expert may possibly base his fuzzy decision. From our examples, we infer certain and possible fuzzy rules for closing a customer service center and illustrate the importance of having the decision closely relate to the conditions under consideration.
Decision models for use with criterion-referenced tests
van der Linden, Willem J.
1980-01-01
The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decisions rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive
Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis
Arato, Hiroki
2009-01-01
This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in a standard exogenous growth New Keynesian model with physical capital accumulation. Optimal operational monetary policy rule makes nominal interest rate respond s...
Dispositional optimism, self-framing and medical decision-making.
Zhao, Xu; Huang, Chunlei; Li, Xuesong; Zhao, Xin; Peng, Jiaxi
2015-03-01
Self-framing is an important but underinvestigated area in risk communication and behavioural decision-making, especially in medical settings. The present study aimed to investigate the relationship among dispositional optimism, self-frame and decision-making. Participants (N = 500) responded to the Life Orientation Test-Revised and a self-framing test of a medical decision-making problem. Participants whose scores were higher than the middle value were regarded as highly optimistic individuals; the rest were regarded as low optimistic individuals. The results showed that, compared to the high dispositional optimism group, participants from the low dispositional optimism group showed a greater tendency to use negative vocabulary to construct their self-frame, and tended to choose the radiation therapy with a high treatment survival rate but low 5-year survival rate. Based on the current findings, it can be concluded that the self-framing effect still exists in medical situations and that individual differences in dispositional optimism can influence the processing of information in a framed decision task, as well as risky decision-making. © 2014 International Union of Psychological Science.
Totally optimal decision trees for Boolean functions
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2016-01-01
We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters
Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2017-01-01
: the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees
Determining Optimal Decision Version
Directory of Open Access Journals (Sweden)
Olga Ioana Amariei
2014-06-01
Full Text Available In this paper we start from the calculation of the product cost, applying the hour-machine cost method (THM) on each of the three cutting machines, namely: the plasma cutting machine, the combined (plasma and water jet) cutting machine, and the water-jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version needs to be determined for manufacturing the product. To determine the optimal decision version, we first calculate the optimal version on each criterion, and then overall, using multi-attribute decision methods.
Directory of Open Access Journals (Sweden)
Bima Sena Bayu Dewantara
2014-12-01
Full Text Available Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive, trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing fuzzy rules using a Genetic Algorithm to compensate for illumination effects in face recognition. Since uneven illumination contributes negative effects to the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination on human face recognition. We build a pair of models from a single image and reason over those models using fuzzy logic. The fuzzy rules are then optimized using a Genetic Algorithm. This approach spends less computation cost while still keeping high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing a desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm
Unrealistic optimism and decision making
Directory of Open Access Journals (Sweden)
Božović Bojana
2009-01-01
Full Text Available One of the leading descriptive theories of decision-making under risk, Tversky and Kahneman's prospect theory, reveals that a normative explanation of decision-making, based only on the principle of maximizing the expected utility of outcomes, is unsustainable. It also underlines the effect of alternative factors on decision-making. The framing effect relates to the influence that the verbal formulation of outcomes has on choosing between certain and risky outcomes; in a negative frame people tend to be risk seeking, whereas in a positive frame people express risk-averse tendencies. Individual decisions are not based on objective probabilities of outcomes, but on subjective probabilities that depend on outcome desirability. Unrealistically pessimistic subjects assign lower probabilities (than the group average) to desired outcomes, while unrealistically optimistic subjects assign higher probabilities (than the group average) to desired outcomes. An experiment was conducted in order to test the presumption that there is a relation between unrealistic optimism and decision-making under risk. We expected optimists to be risk seeking, and pessimists to be risk averse. We also expected such cognitive tendencies, should they become manifest, to be resistant to the framing effect. The unrealistic optimism scale was applied, followed by a questionnaire composed of tasks of decision-making under risk. Results within the whole sample, and results of the subsequently extracted groups of pessimists and optimists, both revealed a dominant risk-seeking tendency that is resistant to the influence of subjective probabilities as well as to the influence of the frame in which the outcome is presented.
Directory of Open Access Journals (Sweden)
Zhou Li
2012-11-01
Full Text Available Abstract Background Efficient rule authoring tools are critical to allow clinical Knowledge Engineers (KEs), Software Engineers (SEs), and Subject Matter Experts (SMEs) to convert medical knowledge into machine-executable clinical decision support rules. The goal of this analysis was to identify the critical success factors and challenges of a fully functioning Rule Authoring Environment (RAE) in order to define requirements for a scalable, comprehensive tool to manage enterprise-level rules. Methods The authors evaluated RAEs in active use across Partners Healthcare, including enterprise-wide, ambulatory-only, and system-specific tools, with a focus on rule editors for reminder and medication rules. We conducted meetings with users of these RAEs to discuss their general experience and perceived advantages and limitations of these tools. Results While the overall rule authoring process is similar across the 10 separate RAEs, the system capabilities and architecture vary widely. Most current RAEs limit the ability of the clinical decision support (CDS) interventions to be standardized, sharable, interoperable, and extensible. No existing system meets all requirements defined by knowledge management users. Conclusions A successful, scalable, integrated rule authoring environment will need to support a number of key requirements and functions in the areas of knowledge representation, metadata, terminology, authoring collaboration, user interface, integration with electronic health record (EHR) systems, testing, and reporting.
Stress influences decisions to break a safety rule in a complex simulation task in females.
Starcke, Katrin; Brand, Matthias; Kluge, Annette
2016-07-01
The current study examines the effects of acutely induced laboratory stress on a complex decision-making task, the Waste Water Treatment Simulation. Participants are instructed to follow a certain decision rule according to safety guidelines. Violations of this rule are associated with potential high rewards (working faster and earning more money) but also with the risk of a catastrophe (an explosion). Stress was induced with the Trier Social Stress Test while control participants underwent a non-stress condition. In the simulation task, stressed females broke the safety rule more often than unstressed females, χ²(1, N=24) = 10.36, p < .01. Stressed female participants may have broken the safety rule because they focused on the potential high gains while neglecting the risk of potential negative consequences. Copyright © 2016 Elsevier B.V. All rights reserved.
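The reported test statistic can be sanity-checked with the standard library: for 1 degree of freedom, the chi-squared survival function reduces to erfc(sqrt(x/2)), so a statistic of 10.36 corresponds to a p-value of roughly .001 (only the statistic and df = 1 are taken from the abstract; the exact p-value the paper reported is not):

```python
import math

def chi2_sf_df1(x):
    """Survival function (p-value) of the chi-squared distribution, 1 df."""
    return math.erfc(math.sqrt(x / 2.0))

p = chi2_sf_df1(10.36)
print(round(p, 4))  # ≈ 0.0013, comfortably significant
```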
On algorithm for building of optimal α-decision trees
Alkhalid, Abdulaziz
2010-01-01
The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length, or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to the construction of approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
Understanding Optimal Decision-Making in Wargaming
2013-10-01
beneficial outcomes from wargaming, one of which is a better understanding of the impact of decisions as a part of combat processes. However, using...under instrument flight rules ( IFR ) (Bellenkes et al., 1997; Katoh, 1997). Of note, eye-tracking technology also has been applied to investigate...Neuroscience, 7 . Skinner, A., Berka, C., Ohara-Long, L., & Sebrechts, M. (2010). Impact of Virtual En- vironment Fidelity on Behavioral and
Shockley-Zalabak, Pamela
A study of decision making processes and communication rules, in a corporate setting undergoing change as a result of organizational ineffectiveness, examined whether (1) decisions about formal communication reporting systems were linked to management assumptions about technical creativity/effectiveness, (2) assumptions about…
Van Norman, Ethan R; Christ, Theodore J
2016-10-01
Curriculum based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving - continue the intervention, or the student is not improving - discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
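A hedged sketch of an automated data-point decision rule of the kind compared above. The specific convention (three consecutive observations below the aimline trigger a modification) is a common one assumed here for illustration, not necessarily the exact rule evaluated in the study:

```python
def aimline(week, baseline, goal, duration_weeks):
    """Expected words-correct-per-minute at a given week under linear growth
    from the baseline score toward the goal."""
    return baseline + (goal - baseline) * week / duration_weeks

def data_point_rule(scores, baseline, goal, duration_weeks, run=3):
    """Return 'continue' if the last `run` weekly observations are at or above
    the aimline, 'modify' if all fall below it, else 'collect more data'."""
    recent = list(enumerate(scores))[-run:]
    below = [s < aimline(w, baseline, goal, duration_weeks) for w, s in recent]
    if all(below):
        return "modify"
    if not any(below):
        return "continue"
    return "collect more data"

# Eight weekly CBM-R scores against a 40 -> 60 wcpm goal over 10 weeks
scores = [41, 43, 42, 44, 43, 44, 45, 45]
print(data_point_rule(scores, baseline=40, goal=60, duration_weeks=10))  # → modify
```

The study's finding that longer monitoring and lower variability improve accuracy applies directly here: with noisy scores, the last three points can straddle the aimline by chance.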
Regan, Tracey J; Taylor, Barbara L; Thompson, Grant G; Cochrane, Jean Fitts; Ralls, Katherine; Runge, Michael C; Merrick, Richard
2013-08-01
Lack of guidance for interpreting the definitions of endangered and threatened in the U.S. Endangered Species Act (ESA) has resulted in case-by-case decision making leaving the process vulnerable to being considered arbitrary or capricious. Adopting quantitative decision rules would remedy this but requires the agency to specify the relative urgency concerning extinction events over time, cutoff risk values corresponding to different levels of protection, and the importance given to different types of listing errors. We tested the performance of 3 sets of decision rules that use alternative functions for weighting the relative urgency of future extinction events: a threshold rule set, which uses a decision rule of x% probability of extinction over y years; a concave rule set, where the relative importance of future extinction events declines exponentially over time; and a shoulder rule set that uses a sigmoid shape function, where relative importance declines slowly at first and then more rapidly. We obtained decision cutoffs by interviewing several biologists and then emulated the listing process with simulations that covered a range of extinction risks typical of ESA listing decisions. We evaluated performance of the decision rules under different data quantities and qualities on the basis of the relative importance of misclassification errors. Although there was little difference between the performance of alternative decision rules for correct listings, the distribution of misclassifications differed depending on the function used. Misclassifications for the threshold and concave listing criteria resulted in more overprotection errors, particularly as uncertainty increased, whereas errors for the shoulder listing criteria were more symmetrical. We developed and tested the framework for quantitative decision rules for listing species under the U.S. ESA. If policy values can be agreed on, use of this framework would improve the implementation of the ESA by
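The three rule families can be sketched as weighting functions over time-to-extinction. The functional forms follow the abstract (step threshold, exponential decay, sigmoid shoulder), but every parameter value below is an illustrative assumption, not one of the interviewed experts' cutoffs:

```python
import math

def threshold_weight(t, horizon=100):
    """Threshold rule: extinction within the horizon counts fully, later not at all."""
    return 1.0 if t <= horizon else 0.0

def concave_weight(t, rate=0.02):
    """Concave rule: importance declines exponentially with time to extinction."""
    return math.exp(-rate * t)

def shoulder_weight(t, midpoint=100, steepness=0.05):
    """Shoulder rule: sigmoid decline, slow at first and then more rapid."""
    return 1.0 / (1.0 + math.exp(steepness * (t - midpoint)))
```

The qualitative difference the paper exploits is visible in the shapes: the shoulder weight barely moves between years 0 and 20 but drops steeply around the midpoint, whereas the exponential discounts immediately.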
Azad, Mohammad
2015-10-11
A decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common, and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth, and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we compared them based on optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.
Azad, Mohammad; Moshkov, Mikhail
2015-01-01
A decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common, and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth, and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we compared them based on optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.
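One greedy step of the kind these algorithms share can be sketched with Shannon entropy as the impurity measure: pick the attribute whose split of the decision table minimizes the weighted entropy of the resulting subtables. The toy table below is invented for illustration; the paper's 16 heuristics and uncertainty measures are not reproduced:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of decision labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def weighted_split_entropy(rows, labels, attr):
    """Average entropy of the subtables induced by splitting on `attr`."""
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    n = len(labels)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def greedy_root_attribute(rows, labels):
    """Greedy choice: the attribute whose split minimizes weighted entropy."""
    return min(range(len(rows[0])),
               key=lambda a: weighted_split_entropy(rows, labels, a))

# Toy decision table: attribute 0 determines the decision, attribute 1 is noise
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = ["no", "no", "yes", "yes"]
print(greedy_root_attribute(rows, labels))  # → 0
```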
Proposal optimization in nuclear accident emergency decision based on IAHP
International Nuclear Information System (INIS)
Xin Jing
2007-01-01
On the basis of establishing the multi-layer structure of nuclear accident emergency decisions, several decision objectives are synthetically analyzed, and an optimization model of decision proposals for nuclear accident emergencies based on the interval analytic hierarchy process (IAHP) is proposed in this paper. The model quantifies comparisons among several emergency decision proposals and selects the optimum proposal, which addresses the uncertainty and fuzziness of decisions otherwise based on experts' subjective judgments in nuclear accident emergencies. A case study shows that the optimization result is much more reasonable, objective and reliable than subjective judgment, and that it could serve as a decision reference for nuclear accident emergencies. (authors)
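A sketch of the crisp AHP priority computation that interval AHP extends, using the row geometric-mean approximation of the principal eigenvector of a reciprocal pairwise-comparison matrix. The 3×3 matrix of proposal comparisons is invented for illustration:

```python
import math

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    via the row geometric-mean method (a standard eigenvector approximation)."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Three hypothetical emergency proposals compared pairwise on one objective:
# entry [i][j] says how strongly proposal i is preferred to proposal j
pairwise = [
    [1.0,     3.0,     5.0],
    [1.0 / 3, 1.0,     2.0],
    [1.0 / 5, 1.0 / 2, 1.0],
]
weights = ahp_priorities(pairwise)
print(weights)  # proposal A receives the largest weight
```

Interval AHP replaces the point judgments (3, 5, ...) with intervals to capture expert uncertainty; the crisp computation above is the special case where each interval collapses to a single value.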
A compensatory approach to optimal selection with mastery scores
van der Linden, Willem J.; Vos, Hendrik J.
1994-01-01
This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious
Incorporation of systematic uncertainties in statistical decision rules
International Nuclear Information System (INIS)
Wichers, V.A.
1994-02-01
The influence of systematic uncertainties on statistical hypothesis testing is an underexposed subject. Systematic uncertainties cannot be incorporated in hypothesis tests, but they deteriorate the performance of these tests. A wrong treatment of systematic uncertainties in verification applications in safeguards leads to a false assessment of the strength of the safeguards measure, and thus undermines the safeguards system. The effects of systematic uncertainties on decision errors in hypothesis testing are analyzed quantitatively for an example from safeguards practice (LEU-HEU verification of UF6 enrichment in centrifuge enrichment plants). It is found that the only proper way to tackle systematic uncertainties is reduction to sufficiently low levels; criteria for these are proposed. Although the conclusions were obtained from the study of a single practical application, it is believed that they hold generally: for all sources of systematic uncertainties, all statistical decision rules, and all applications. (orig./HP)
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
Dehghani Soufi, Mahsa; Samad-Soltani, Taha; Shams Vahdati, Samad; Rezaei-Hachesu, Peyman
2018-06-01
Fast and accurate patient triage is a critical first step of the response process in emergency situations. This process is often performed in a paper-based mode, which intensifies workload and difficulty, wastes time, and is at risk of human error. This study aims to design and evaluate a decision support system (DSS) to determine the triage level. A combination of the Rule-Based Reasoning (RBR) and Fuzzy Logic Classifier (FLC) approaches was used to predict the triage level of patients according to triage specialists' opinions and Emergency Severity Index (ESI) guidelines. RBR was applied for modeling the first to fourth decision points of the ESI algorithm. The data relating to vital signs were used as input variables and modeled using fuzzy logic. Narrative knowledge was converted to If-Then rules using XML. The extracted rules were then used to create the rule-based engine and predict the triage levels. Fourteen RBR and 27 fuzzy rules were extracted and used in the rule-based engine. The performance of the system was evaluated using three methods with real triage data. The accuracy of the clinical decision support system (CDSS; on the test data) was 99.44%. The evaluation of the error rate revealed that, when using the traditional method, 13.4% of the patients were mis-triaged, which is statistically significant. The completeness of the documentation also improved from 76.72% to 98.5%. The designed system was effective in determining the triage level of patients, and it proved helpful to nurses as they made decisions and generated nursing diagnoses based on triage guidelines. The hybrid approach can reduce triage misdiagnosis in a highly accurate manner and improve triage outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
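A minimal sketch of the hybrid idea described above: crisp ESI-style decision points handled by if-then rules, with a fuzzy membership function grading one vital sign. All thresholds, membership shapes, and rule forms below are illustrative assumptions, not the study's validated rules:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function, a standard fuzzy-logic building block."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def triage_level(unresponsive, severe_distress, heart_rate):
    """Crisp rules decide levels 1-2; otherwise a fuzzy 'tachycardic' degree
    of the heart rate pushes the patient to level 3 or leaves them at 4."""
    if unresponsive:        # first crisp decision point (illustrative)
        return 1
    if severe_distress:     # second crisp decision point (illustrative)
        return 2
    tachy = trapezoid(heart_rate, 90.0, 110.0, 200.0, 220.0)
    return 3 if tachy >= 0.5 else 4

print(triage_level(False, False, 120))  # → 3 (clearly tachycardic)
```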
Evolution of a flexible rule for foraging that copes with environmental variation
Institute of Scientific and Technical Information of China (English)
Andrew D.HIGGINSON; Tim W.FAWCETT; Alasdair I.HOUSTON
2015-01-01
Models of adaptive behaviour typically assume that animals behave as though they have highly complex, detailed strategies for making decisions. In reality, selection favours the optimal balance between the costs and benefits of complexity. Here we investigate this trade-off for an animal that has to decide whether or not to forage for food, and so how much energy reserves to store, depending on the food availability in its environment. We evolve a decision rule that controls the target reserve level for different ranges of food availability, but where increasing complexity is costly in that metabolic rate increases with the sensitivity of the rule. The evolved rule tends to be much less complex than the optimal strategy but performs almost as well, while being less costly to implement. It achieves this by being highly sensitive to changing food availability at low food abundance, where it provides a close fit to the optimal strategy, but insensitive when food is plentiful. When food availability is high, the target reserve level that evolves is much higher than under the optimal strategy, which has implications for our understanding of obesity. Our work highlights the important principle of generalisability of simple decision-making mechanisms, which enables animals to respond reasonably well to conditions not directly experienced by themselves or their ancestors [Current Zoology 61 (2): 303-312, 2015].
Helbing, Dirk; Schönhof, Martin; Kern, Daniel
2002-06-01
The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed a volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.
Future Costs, Fixed Healthcare Budgets, and the Decision Rules of Cost-Effectiveness Analysis.
van Baal, Pieter; Meltzer, David; Brouwer, Werner
2016-02-01
Life-saving medical technologies result in additional demand for health care due to increased life expectancy. However, most economic evaluations do not include all medical costs that may result from this additional demand and include only the future costs of related illnesses. Although there has been much debate on the extent to which future costs should be included from a societal perspective, the appropriate role of future medical costs in the widely adopted but narrower healthcare perspective has been neglected. Using a theoretical model, we demonstrate that optimal decision rules for cost-effectiveness analyses assuming fixed healthcare budgets dictate that the future costs of both related and unrelated medical care should be included. The practical relevance of including the costs of future unrelated medical care is illustrated using the example of transcatheter aortic valve implantation. Our findings suggest that guidelines should prescribe inclusion of these costs. Copyright © 2014 John Wiley & Sons, Ltd.
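The argument can be illustrated with toy incremental cost-effectiveness arithmetic: each life year gained carries unrelated medical costs, and including them changes the cost per QALY. All figures below are invented for illustration:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# A hypothetical life-extending intervention
intervention_cost = 30_000.0       # direct cost of the technology
qalys_gained = 2.0                 # extra quality-adjusted life years
unrelated_cost_per_year = 8_000.0  # unrelated care consumed per added year

without_future = icer(intervention_cost, qalys_gained)
with_future = icer(intervention_cost + unrelated_cost_per_year * qalys_gained,
                   qalys_gained)
print(without_future, with_future)  # → 15000.0 23000.0
```

Under a fixed budget with a threshold between these two values, the two accounting conventions would reach opposite funding decisions, which is the paper's point about why the unrelated costs belong in the analysis.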
A tool for study of optimal decision trees
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2010-01-01
The paper describes a tool which allows us for relatively small decision tables to make consecutive optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters
International Conference on Optimization and Decision Science
Sterle, Claudio
2017-01-01
This proceedings volume highlights the state-of-the-art knowledge related to optimization, decisions science and problem solving methods, as well as their application in industrial and territorial systems. It includes contributions tackling these themes using models and methods based on continuous and discrete optimization, network optimization, simulation and system dynamics, heuristics, metaheuristics, artificial intelligence, analytics, and also multiple-criteria decision making. The number and the increasing size of the problems arising in real life require mathematical models and solution methods adequate to their complexity. There has also been increasing research interest in Big Data and related challenges. These challenges can be recognized in many fields and systems which have a significant impact on our way of living: design, management and control of industrial production of goods and services; transportation planning and traffic management in urban and regional areas; energy production and exploit...
On optimal soft-decision demodulation [in digital communication system]
Lee, L.-N.
1976-01-01
A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
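The optimality criterion here, the symmetric cutoff rate of the resulting discrete memoryless channel, can be computed directly from the transition matrix. A sketch assuming equiprobable inputs, with a binary symmetric channel standing in for the paper's J-ary quantized channels:

```python
import math

def symmetric_cutoff_rate(P):
    """R0 = -log2( sum_j ( (1/M) * sum_i sqrt(P[i][j]) )^2 ) for a discrete
    memoryless channel with M equiprobable inputs and transitions P[i][j]."""
    M, J = len(P), len(P[0])
    s = sum((sum(math.sqrt(P[i][j]) for i in range(M)) / M) ** 2
            for j in range(J))
    return -math.log2(s)

# Binary symmetric channel with crossover probability p; the closed form is
# R0 = 1 - log2(1 + 2*sqrt(p*(1 - p)))
p = 0.1
bsc = [[1 - p, p], [p, 1 - p]]
print(symmetric_cutoff_rate(bsc))  # matches the closed form
```

Optimizing the J-ary demodulator decision regions, as in the paper, amounts to choosing the quantization that maximizes this quantity.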
A Real-Time Holding Decision Rule Accounting for Passenger Travel Cost
Laskaris; Cats, O.; Jenelius, E.; Viti, F.
2016-01-01
Holding has been extensively investigated as a strategy to mitigate the inherently stochastic nature of public transport operations. Holding focuses on either regulating vehicle headways using a rule-based approach or minimizing passenger travel cost by employing optimization models. This paper
Mate choice when males are in patches: optimal strategies and good rules of thumb.
Hutchinson, John M C; Halupka, Konrad
2004-11-07
In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
A programmable rules engine to provide clinical decision support using HTML forms.
Heusinkveld, J; Geissbuhler, A; Sheshelidze, D; Miller, R
1999-01-01
The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser.
Dispositional Optimism as a Correlate of Decision-Making Styles in Adolescence
Directory of Open Access Journals (Sweden)
Paola Magnano
2015-06-01
Despite the numerous psychological areas in which optimism has been studied, including career planning, only a small amount of research has investigated the relationship between optimism and decision-making styles. Consequently, we investigated the role of dispositional optimism as a correlate of different decision-making styles: positive for effective styles and negative for ineffective ones (doubtfulness, procrastination, and delegation). Data were gathered through questionnaires administered to 803 Italian adolescents in the last 2 years of high schools with different fields of study, each at the beginning stages of planning for their professional future. A paper questionnaire containing measures of dispositional optimism and career-related decision styles was completed during a vocational guidance intervention conducted at school. Data were analyzed using stepwise multiple regression. Results supported the proposed model by showing optimism to be a strong correlate of decision-making styles, thereby offering important intervention guidelines aimed at modifying students' unrealistically negative expectations regarding the future and helping them learn adaptive decision-making skills.
Decision or norm: Judicial discretion as a threat to the rule of law
Directory of Open Access Journals (Sweden)
Avramović Dragutin
2012-01-01
The principles of legality and legal certainty, key notions of even the thinnest concept of the rule of law, are largely endangered in our times by the widening range of judicial discretion. This trend is increasingly evident in European states as well, owing to the convergence of the common law and civil law legal systems. Judicial decisions acquire ever greater factual importance in European legal systems, although they are generally not considered a source of law. After analyzing the positions of leading scholars of legal realism, the author concedes that a very high level of tension frequently exists between judicial decision and legal norm. Within that conflict, the decision often breaks away from the strict letter of the law with relative ease. In applying general legal rules to concrete cases, creatively adjusting the law to life as made necessary by the general and abstract character of legal norms, the judge becomes more a creator of law than one who applies it. The author points to the danger of subjective and prejudiced judicial attitudes: given their wide discretion, judges may decide more upon their own feeling of justice than upon the law itself. In this way the law is transformed into judicial decision based upon a subjective understanding of justice and fairness.
The res judicata rule in jurisdictional decisions of the international Court of justice
Directory of Open Access Journals (Sweden)
Kreća Milenko
2014-01-01
The author discusses the effects of the res judicata rule as regards jurisdictional decisions of the International Court of Justice. He finds that a judgment on a preliminary objection occupies a special position with respect to both aspects of the res judicata rule: its binding force and its finality. A perception of the distinct relativity of a jurisdictional decision of the Court, expressing its interlocutory character, pervades, in his opinion, the body of law regulating the Court's activity. Preliminary objections as such do not exhaust objections to the jurisdiction of the Court, as evidenced by non-preliminary objections to the jurisdiction of the Court giving rise to the application of the principle compétence de la compétence understood in the narrow sense. With regard to the binding force of a judgment on preliminary objections, it does not create legal obligations stricto sensu. The author finds that the relative character of jurisdictional decisions of the Court, as compared with a judgment on the merits, is justified on a number of grounds.
Directory of Open Access Journals (Sweden)
Nicoleta Meslec
During social interactions, groups develop collective competencies that (ideally) should assist groups to outperform the average standalone individual member (weak cognitive synergy) or the best performing member in the group (strong cognitive synergy). In two experimental studies we manipulate the type of decision rule used in group decision-making (identify the best vs. collaborative) and the way in which the decision rules are induced (direct vs. analogical), and we test the effect of these two manipulations on the emergence of strong and weak cognitive synergy. Our most important results indicate that an analogically induced decision rule (imitate-the-successful heuristic) in which groups have to identify the best member and build on his/her performance (take-the-best heuristic) is the most conducive to strong cognitive synergy. Our studies bring evidence for the role of analogy-making in groups as well as the role of fast-and-frugal heuristics in group decision-making.
A Simple Decision Rule for Recognition of Poly(A) Tail Signal Motifs in Human Genome
AbouEisha, Hassan M.
2015-05-12
Background: Numerous attempts have been made to predict motifs in genomic sequences that correspond to poly(A) tail signals. A vast portion of this effort has been directed to a plethora of nonlinear classification methods. Even when such approaches yield good discriminant results, identifying dominant features of regulatory mechanisms nevertheless remains a challenge. In this work, we look at decision rules that may help identify such features. Findings: We present a simple decision rule for classification of candidate poly(A) tail signal motifs in human genomic sequences, obtained by evaluating features during the construction of gradient boosted trees. We found that a single feature, based on the frequency of adenine in the genomic sequence surrounding a candidate signal and the number of consecutive adenines in a well-defined region immediately following the motif, displays good discriminative potential in classifying poly(A) tail motifs for samples covered by the rule. Conclusions: The resulting simple rule can be used as an efficient filter in the construction of more complex poly(A) tail motif classification algorithms.
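The single-feature rule described in this abstract lends itself to a compact illustration. The sketch below is an assumption-laden reconstruction: the function name `poly_a_rule`, the threshold values, and the window handling are invented for illustration only; the paper derives its actual feature and cutoffs from gradient boosted trees.

```python
def poly_a_rule(context, downstream, freq_threshold=0.4, run_threshold=5):
    """Hypothetical single-feature rule in the spirit of the abstract:
    classify a candidate poly(A) signal as positive when the adenine
    frequency in the surrounding sequence and the longest run of
    consecutive adenines immediately after the motif both exceed
    illustrative thresholds (not the paper's actual values)."""
    freq_a = context.count('A') / len(context)
    # longest run of consecutive 'A' in the downstream region
    run, best = 0, 0
    for ch in downstream:
        run = run + 1 if ch == 'A' else 0
        best = max(best, run)
    return freq_a >= freq_threshold and best >= run_threshold
```

Such a rule is cheap enough to serve as the pre-filter the conclusion suggests, passing only candidates to a heavier classifier.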
Paasche, H.; Tronicke, J.
2012-04-01
optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
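The cheap leader-identification step described above can be illustrated with a Balling-style maximin fitness. Whether this matches the authors' exact definition is not stated in this excerpt, so treat the sketch as an assumption: for a minimization problem, solutions with negative fitness are non-dominated, and the most negative value is a natural pick for the currently leading particle.

```python
def maximin_fitness(objs):
    """Maximin fitness for a minimization problem:
    fitness_i = max over j != i of min over objectives k of
    (f_ik - f_jk). Values < 0 indicate non-dominated solutions;
    the smallest value is a cheap choice of 'leading' particle."""
    n = len(objs)
    fit = []
    for i in range(n):
        fit.append(max(
            min(objs[i][k] - objs[j][k] for k in range(len(objs[i])))
            for j in range(n) if j != i
        ))
    return fit
```

A single pass over the swarm thus replaces the costlier ranking-and-niching machinery the abstract mentions.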
Rands, Sean A
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach.
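Choosing the Nash solution among a finite set of Pareto-optimal points can be sketched as maximizing the product of improvements over a disagreement point. The disagreement point used below (the per-objective worst values) is an illustrative assumption; the abstract does not specify how it is chosen.

```python
def nash_solution(pareto, disagreement):
    """Pick the Nash bargaining solution from a finite Pareto set for a
    minimization problem: maximize the product of gains relative to a
    disagreement point. The disagreement point is an assumption here."""
    best, best_val = None, float('-inf')
    for point in pareto:
        gains = [d - f for f, d in zip(point, disagreement)]
        if min(gains) <= 0:
            continue  # no strict improvement in some objective: skip
        val = 1.0
        for g in gains:
            val *= g
        if val > best_val:
            best, best_val = point, val
    return best
```

The product criterion tends to pick the point that balances the competing objectives, matching the "optimal balance" role the abstract assigns to the Nash solution.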
Age Effects and Heuristics in Decision Making.
Besedeš, Tibor; Deck, Cary; Sarangi, Sudipta; Shor, Mikhael
2012-05-01
Using controlled experiments, we examine how individuals make choices when faced with multiple options. Choice tasks are designed to mimic the selection of health insurance, prescription drug, or retirement savings plans. In our experiment, available options can be objectively ranked allowing us to examine optimal decision making. First, the probability of a person selecting the optimal option declines as the number of options increases, with the decline being more pronounced for older subjects. Second, heuristics differ by age with older subjects relying more on suboptimal decision rules. In a heuristics validation experiment, older subjects make worse decisions than younger subjects.
Investigation of effective decision criteria for multiobjective optimization in IMRT.
Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H
2011-06-01
To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta)) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D
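The gEUD criterion mentioned above has a standard closed form, gEUD = ((1/N) Σ d_i^a)^(1/a) for equal-volume voxels; with a = 1 it reduces to the mean dose, which is consistent with the abstract's finding that mean-dose and gEUD (a = 4) criteria behave comparably for OARs. A minimal sketch, assuming equal voxel volumes:

```python
def gEUD(doses, a):
    """Generalized equivalent uniform dose for a uniformly-voxelized
    structure: (mean of d_i**a) ** (1/a). With a = 1 this is the mean
    dose; large positive a approaches the maximum dose."""
    n = len(doses)
    return (sum(d ** a for d in doses) / n) ** (1.0 / a)
```

Large `a` makes the criterion behave like a soft maximum, which is why it is typically applied to serial OARs, while `a` near 1 suits parallel structures.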
Optimal soil venting design using Bayesian Decision analysis
Kaluarachchi, J. J.; Wijedasa, A. H.
1994-01-01
Remediation of hydrocarbon-contaminated sites can be costly and the design process becomes complex in the presence of parameter uncertainty. Classical decision theory related to remediation design requires the parameter uncertainties to be stipulated in terms of statistical estimates based on site observations. In the absence of detailed data on parameter uncertainty, classical decision theory provides little contribution in designing a risk-based optimal design strategy. Bayesian decision th...
On the complexity of decision trees, the quasi-optimizer, and the power of heuristic rules
Findler, N.V.; Leeuwen, J. van
The power of certain heuristic rules is indicated by the relative reduction in the complexity of computations carried out, due to the use of the heuristics. A concept of complexity is needed to evaluate the performance of programs as they operate with a varying set of heuristic rules in use. We
Rink, Floor; Ellemers, Naomi
In two experiments we show how teams can benefit from the presence of multiple sources of deep-level task-related diversity. We manipulated differences (vs. similarities) in task information and personal decision rules in dyads (Study 1) and three-person teams (Study 2). The results indicate that
Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties
Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.
2017-12-01
Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model, based on the risk of decision-making errors and the degree of rank uncertainty, to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices
Directory of Open Access Journals (Sweden)
Andrea Baiocchi
2015-01-01
Packet filtering and processing rules management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large amount of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices, without specifically designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic adaptive algorithm to find conflict-free optimized rule sets, by relying on information gathered with traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO (adaptive conflict-free optimization) development on the packet filtering devices. We demonstrate the benefit entailed by the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.
On algorithm for building of optimal α-decision trees
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2010-01-01
The paper describes an algorithm that constructs approximate decision trees (α-decision trees) which are optimal relative to one of the following complexity measures: depth, total path length, or number of nodes. The algorithm uses dynamic programming.
A case study of optimization in the decision process: Siting groundwater monitoring wells
International Nuclear Information System (INIS)
Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.
1993-12-01
Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of the optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the location of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of a revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using evolutionary operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. The core physics code
Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns
Directory of Open Access Journals (Sweden)
Jinwei Gu
2014-01-01
This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and are subject to a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
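The two rules proved optimal in this abstract are one-line sorts; the sketch below also includes a small helper that checks the familiar SPT property on a reliable machine (the stochastic-breakdown expectation itself is not simulated here).

```python
def lpt_order(times):
    """Longest Processing Time first: sort jobs by descending time.
    The abstract proves LPT minimizes expected makespan under uniform
    uptimes with restarts; the sort itself is the whole rule."""
    return sorted(times, reverse=True)

def spt_order(times):
    """Shortest Processing Time first: the classic rule minimizing
    total completion time; per the abstract, also optimal in
    expectation for the large-scale stochastic problem."""
    return sorted(times)

def total_completion_time(seq):
    """Sum of job completion times for a given processing order."""
    t, total = 0, 0
    for p in seq:
        t += p
        total += t
    return total
```

On a reliable machine, SPT never yields a larger total completion time than any other order, which the helper makes easy to verify on small instances.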
Algorithms for optimal dyadic decision trees
Energy Technology Data Exchange (ETDEWEB)
Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory
2009-01-01
A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.
Rules of Normalisation and their Importance for Interpretation of Systems of Optimal Taxation
DEFF Research Database (Denmark)
Munk, Knud Jørgen
representation of the general equilibrium conditions the rules of normalisation in standard optimal tax models. This allows us to provide an intuitive explanation of what determines the optimal tax system. Finally, we review a number of examples where lack of precision with respect to normalisation in otherwise... important contributions to the literature on optimal taxation has given rise to misinterpretations of analytical results....
A two-stage stochastic rule-based model to determine pre-assembly buffer content
Gunay, Elif Elcin; Kula, Ufuk
2018-01-01
This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
An automated approach to the design of decision tree classifiers
Argentiero, P.; Chin, R.; Beaudet, P.
1982-01-01
An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
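The "simple algorithm for computing the global probability of correct classification" mentioned in this abstract is, under the stated independence assumption, just a product of per-node accuracies. A minimal sketch, with the per-node accuracies assumed to come from the error matrices the abstract describes:

```python
def global_correct_probability(node_accuracies):
    """Under the independence assumption in the abstract, the
    probability that every decision rule along a path classifies
    correctly is the product of the per-node probabilities of
    correct classification."""
    p = 1.0
    for acc in node_accuracies:
        p *= acc
    return p
```

This also shows why deep trees degrade: even nodes that are each 90% accurate compound multiplicatively along a path.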
People adopt optimal policies in simple decision-making, after practice and guidance.
Evans, Nathan J; Brown, Scott D
2017-04-01
Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
ERDOS 1.0. Emergency response decisions as problems of optimal stopping
International Nuclear Information System (INIS)
Pauwels, N.
1998-11-01
The ERDOS software is a stochastic dynamic program to support the decision problem of preventively evacuating the workers of an industrial company threatened by a nuclear accident taking place in the near future with a particular probability. ERDOS treats this problem as one of optimal stopping: the governmental decision maker initially holds a call option enabling him to postpone the evacuation decision and observe the further evolution of the alarm situation. As such, he has to decide on the optimal point in time to exercise this option, i.e. to take the irreversible decision to evacuate the threatened workers. ERDOS allows the calculation of the expected costs of an optimal intervention strategy and comparison of this outcome with the costs resulting from a myopic evacuation decision that ignores the prospect of more complete information at later stages of the decision process. Furthermore, ERDOS determines the free boundary, giving the critical severity as a function of time that will trigger immediate evacuation in case it is exceeded. Finally, the software provides useful insights into the financial implications of losing time during the initial stages of the decision process (due to the gathering of information, discussions on the intervention strategy, and so on)
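The optimal-stopping structure described in this abstract can be illustrated with a toy backward induction: at each stage, either stop (evacuate) at a state-dependent cost or pay a waiting cost and face the expected next-stage value. All numbers, states, and the transition model below are illustrative assumptions, not ERDOS's actual cost model.

```python
def stopping_values(stop_costs, wait_cost, trans, stages=3):
    """Finite-horizon backward induction for an optimal-stopping toy:
    at each stage and severity state s, take the cheaper of stopping
    now (stop_costs[s]) or waiting one period (wait_cost plus the
    expected next-stage value under transition matrix trans).
    Returns values and per-stage policies, nearest-to-terminal first."""
    n = len(stop_costs)
    V = list(stop_costs)        # at the terminal stage you must stop
    policy = []
    for _ in range(stages):
        newV, stage_policy = [], []
        for s in range(n):
            cont = wait_cost + sum(trans[s][s2] * V[s2] for s2 in range(n))
            stop = stop_costs[s]
            newV.append(min(stop, cont))
            stage_policy.append('stop' if stop <= cont else 'wait')
        V = newV
        policy.append(stage_policy)
    return V, policy
```

The states where "stop" beats "wait" trace out a discrete analogue of the free boundary the abstract mentions: the critical severity above which immediate evacuation is optimal.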
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
International Nuclear Information System (INIS)
Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Tourassi, Georgia D
2008-01-01
This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has a different importance in the decision-making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem, and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interest (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operating characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates.
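The core idea of attaching an importance weight to each knowledge-base example can be illustrated with a minimal weighted similarity vote. The features, labels, and weights below are invented for illustration; in the paper the weights would be tuned by the genetic algorithm against ROC-based criteria.

```python
# Minimal weighted case-based decision: each stored example's vote is its
# similarity to the query scaled by its importance weight. All data invented.

def classify(query, examples):
    """examples: list of (feature_vector, label, importance_weight)."""
    votes = {}
    for features, label, weight in examples:
        # inverse-squared-distance similarity in [0, 1]
        sim = 1.0 / (1.0 + sum((a - b) ** 2 for a, b in zip(query, features)))
        votes[label] = votes.get(label, 0.0) + weight * sim
    return max(votes, key=votes.get)
```

Raising one example's weight can flip a decision even though the stored cases and the query are unchanged, which is exactly the degree of freedom the GA searches over.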
Directory of Open Access Journals (Sweden)
Laboutková Šárka
2017-09-01
Lobbying transparency has been a challenging topic for nearly a decade. For the purposes of the article, the authors focus on a contextual analysis of rules and measures that offers both a broad and a comprehensive view of the required transparency of lobbying activities and of the environment in which decisions are made. In this regard, focusing on the sunshine principles/sunshine rules (not purely limited to laws) provides a grasp of the whole issue in a broader context. From a methodological point of view, an exploratory approach was chosen, and the coding procedure is mostly dichotomous. As a result, seven key areas with 70 indicators have been identified in terms of transparency of lobbying and decision-making.
Identification of Optimal Preventive Maintenance Decisions for Composite Components
Laks, P.; Verhagen, W.J.C.; Gherman, B.; Porumbel, I.
2018-01-01
This research proposes a decision support tool which identifies cost-optimal maintenance decisions for a given planning period. Simultaneously, the reliability state of the component is kept at or below a given reliability threshold: a failure limit policy applies. The tool is developed to support
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms are more transparent and easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of solutions to binary and multiclass classification problems, based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.
Heuristics in Managing Complex Clinical Decision Tasks in Experts' Decision Making.
Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme
2014-09-01
Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts of these interviews were analyzed. We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty, and generating rules of thumb. Complexity is created by decision conflicts, mental projection, limited options, and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes, and focusing on only the most relevant information. Understanding complex decision-making processes can help in designing clinical decision support that allocates tasks according to their complexity.
The impact of chief executive officer optimism on hospital strategic decision making.
Langabeer, James R; Yao, Emery
2012-01-01
Previous strategic decision-making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of this survey, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was the highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits for recruitment processes, because CEO optimism influences hospital-level processes. Research implications include incorporating
Studying Operation Rules of Cascade Reservoirs Based on Multi-Dimensional Dynamics Programming
Directory of Open Access Journals (Sweden)
Zhiqiang Jiang
2017-12-01
Although many optimization models and methods are applied to the optimization of reservoir operation at present, the optimal operation decisions made through these models and methods are only a retrospective review. Due to the limited accuracy of hydrological prediction, it is practical and feasible to obtain a suboptimal or satisfactory solution through established operation rules in actual reservoir operation, especially for mid- and long-term operation. In order to obtain optimized sample data with global optimality and make the extracted operation rules more reasonable and reliable, this paper presents a multi-dimensional dynamic programming model of the optimal joint operation of cascade reservoirs and provides the corresponding recursive equation and the specific solving steps. Taking the Li Xianjiang cascade reservoirs as a case study, seven uncertain problems in the whole operation period of the cascade reservoirs are summarized after a detailed analysis of the obtained optimal sample data, and two sub-models are put forward to solve these uncertain problems. Finally, by dividing the whole operation period into four characteristic sections, this paper extracts the operation rules of each reservoir for each section. When the simulation results of the extracted operation rules are compared with the conventional joint operation method, the results indicate that power generation under the obtained rules shows a certain degree of improvement both in inspection years and in typical years (i.e., wet, normal, and dry years). The rationality and effectiveness of the extracted operation rules are thus verified by the comparative analysis.
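A single-reservoir, discrete-storage dynamic program conveys the recursive structure that the multi-dimensional model generalizes. All quantities here (inflows, capacity, benefit function) are illustrative, not from the Li Xianjiang case study.

```python
# Toy single-reservoir dynamic program on an integer storage grid: at each
# stage, choose a release maximizing immediate benefit plus value-to-go.

def reservoir_dp(inflows, max_storage, benefit):
    """Backward recursion; states and releases are integer storage units.

    Returns stage-0 values per starting storage and the release policy
    policy[t][s] = best release at stage t from storage s.
    """
    V = [0.0] * (max_storage + 1)  # value-to-go at the end of the horizon
    policy = []
    for t in range(len(inflows) - 1, -1, -1):
        new_v, stage_policy = [], []
        for s in range(max_storage + 1):
            avail = s + inflows[t]
            best_val, best_r = float("-inf"), 0
            for r in range(avail + 1):
                carry = min(avail - r, max_storage)  # excess water spills
                val = benefit(r) + V[carry]
                if val > best_val:
                    best_val, best_r = val, r
            new_v.append(best_val)
            stage_policy.append(best_r)
        V = new_v
        policy.append(stage_policy)
    policy.reverse()
    return V, policy
```

The cascade model in the paper replaces the scalar storage state with a vector of storages (one per reservoir), which is what makes the recursion multi-dimensional.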
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2011-10-01
This paper develops alternative strategies for European call options for water purchase under hydrological uncertainties that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite function for a multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequence of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.
Bayesian emulation for optimization in multi-step portfolio decisions
Irie, Kaoru; West, Mike
2016-01-01
We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...
A dynamic decision model for portfolio investment and assets management
Institute of Scientific and Technical Information of China (English)
QIAN Edward Y.; FENG Ying; HIGGISION James
2005-01-01
This paper addresses a dynamic portfolio investment problem. It discusses how we can dynamically choose candidate assets, achieve the maximum possible revenue, and reduce risk to the minimum level. The paper generalizes Markowitz's portfolio selection theory and Sharpe's rule for investment decisions. An analytical solution is presented to show how an institutional or individual investor can combine Markowitz's portfolio selection theory, the generalized Sharpe's rule, and Value-at-Risk (VaR) to find candidate assets and the optimal level of position sizes for investment (dis-investment). The result shows that the generalized Markowitz's portfolio selection theory and generalized Sharpe's rule improve decision making for investment.
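As a toy instance of the mean-variance machinery this abstract builds on, the tangency (maximum Sharpe ratio) portfolio for two assets can be computed in closed form as w ∝ Σ⁻¹μ. The expected excess returns and covariance in the usage example are made-up illustration values, not from the paper.

```python
# Two-asset tangency portfolio: invert the 2x2 covariance matrix by hand,
# apply it to the excess-return vector, and normalize the weights to sum to 1.

def tangency_weights(mu, cov):
    """mu: expected excess returns [mu1, mu2]; cov: 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    w = [inv[0][0] * mu[0] + inv[0][1] * mu[1],
         inv[1][0] * mu[0] + inv[1][1] * mu[1]]
    total = w[0] + w[1]
    return [w[0] / total, w[1] / total]
```

With uncorrelated unit-variance assets, the weights are simply proportional to the excess returns.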
Logistics systems optimization under competition
DEFF Research Database (Denmark)
Choi, Tsan Ming; Govindan, Kannan; Ma, Lijun
2015-01-01
environment, decision making for all these critical areas requires more sophisticated mathematical modeling and analysis. Since finding the optimal solution of MCVRP is computationally expensive, they design a few guiding rules, which employ the searching history, to enhance the searching. They conduct...
Barelds, Ingrid; Krijnen, Wim P; van de Leur, Johannes P; van der Schans, Cees P; Goddard, Robert J
BACKGROUND: Ankle decision rules are developed to expedite patient care and reduce the number of radiographs of the ankle and foot. Currently, only three systematic reviews have been conducted on the accuracy of the Ottawa Ankle and Foot Rules (OAFR) in adults and children. However, no systematic
Confronting dynamics and uncertainty in optimal decision making for conservation
Williams, Byron K.; Johnson, Fred A.
2013-06-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
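The two decision criteria contrasted in the abstract (maximize expected return when outcome probabilities are trusted, maximize the worst case when they are not) can be shown side by side on a made-up payoff table:

```python
# Hypothetical conservation payoffs: action -> returns under three possible
# environmental states. Numbers and action names are invented for illustration.
payoffs = {
    "restore": [2, 6, 10],
    "protect": [5, 5, 5],
    "monitor": [0, 4, 12],
}
probs = [0.2, 0.5, 0.3]  # assumed (trusted) state probabilities

def expected_value_action(payoffs, probs):
    """Best action when the probability distribution is deemed reliable."""
    return max(payoffs,
               key=lambda a: sum(p * r for p, r in zip(probs, payoffs[a])))

def maximin_action(payoffs):
    """Robust choice: best guaranteed performance regardless of the state."""
    return max(payoffs, key=lambda a: min(payoffs[a]))
```

The two criteria can disagree: the expected-value criterion favors the action with the highest average payoff, while maximin favors the action whose worst case is least bad.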
Confronting dynamics and uncertainty in optimal decision making for conservation
Williams, Byron K.; Johnson, Fred A.
2013-01-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
Confronting dynamics and uncertainty in optimal decision making for conservation
International Nuclear Information System (INIS)
Williams, Byron K; Johnson, Fred A
2013-01-01
The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
International Nuclear Information System (INIS)
Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng
2015-01-01
Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction suffers from the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
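The notion of a Pareto front of cost/reliability trade-offs can be illustrated with a small non-dominance filter. The candidate schemes below are invented; the paper's fronts come from a multi-objective evolutionary algorithm, not from enumeration.

```python
# Keep only non-dominated (cost, reliability) schemes: minimize cost,
# maximize reliability. Candidate points are made-up illustration values.

def pareto_front(points):
    """Return points not dominated by any other point in the list."""
    front = []
    for c, r in points:
        dominated = any(c2 <= c and r2 >= r and (c2, r2) != (c, r)
                        for c2, r2 in points)
        if not dominated:
            front.append((c, r))
    return front
```

A decision maker then picks from the front according to how much reliability is worth per unit of cost, which is the trade-off the ROMO approach exposes.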
Energy Technology Data Exchange (ETDEWEB)
Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)
2015-05-15
Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction suffers from the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
Gubhaju, Bina; De Jong, Gordon F
2009-03-01
This research tests the thesis that the neoclassical micro-economic and the new household economic theoretical assumptions on migration decision-making rules are segmented by gender, marital status, and time frame of intention to migrate. Comparative tests of both theories within the same study design are relatively rare. Utilizing data from the Causes of Migration in South Africa national migration survey, we analyze how individually held "own-future" versus alternative "household well-being" migration decision rules affect the intentions to migrate of male and female adults in South Africa. Results from the gender- and marital-status-specific logistic regression models show consistent support for the different gender-marital status decision rule thesis. Specifically, the "maximizing one's own future" neoclassical microeconomic theory proposition is more applicable for never-married men and women, the "maximizing household income" proposition for married men with short-term migration intentions, and the "reduce household risk" proposition for longer time horizon migration intentions of married men and women. Results provide new evidence on the way household strategies and individual goals jointly affect intentions to move or stay.
Bereby-Meyer, Yoella; Meyer, Joachim; Budescu, David V
2003-02-01
This paper assesses framing effects on decision making with internal uncertainty, i.e., partial knowledge, by focusing on examinees' behavior in multiple-choice (MC) tests with different scoring rules. In two experiments, participants answered a general-knowledge MC test that consisted of 34 solvable and 6 unsolvable items. Experiment 1 studied two scoring rules involving Positive (only gains) and Negative (only losses) scores. Although answering all items was the dominant strategy for both rules, the results revealed a greater tendency to answer under the Negative scoring rule. These results are in line with the predictions derived from Prospect Theory (PT) [Econometrica 47 (1979) 263]. The second experiment studied two scoring rules, which allowed respondents to exhibit partial knowledge. Under the Inclusion-scoring rule the respondents mark all answers that could be correct, and under the Exclusion-scoring rule they exclude all answers that might be incorrect. As predicted by PT, respondents took more risks under the Inclusion rule than under the Exclusion rule. The results illustrate that the basic process that underlies choice behavior under internal uncertainty, and especially the effect of framing, is similar to the process of choice under external uncertainty and can be described quite accurately by PT.
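The Prospect Theory account sketched above rests on a value function that is concave for gains and convex (and steeper) for losses. With the standard PT parameters (α = 0.88, λ = 2.25, which are assumptions here, not values reported in this paper), a short calculation reproduces the predicted flip in risk attitude between gain and loss frames:

```python
# Prospect Theory value function: concave power function for gains,
# steeper convex mirror for losses (alpha and lambda are the commonly
# cited Tversky-Kahneman estimates, used here only for illustration).

def pt_value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha
```

Comparing a sure outcome of ±0.5 points against a 50% gamble on ±1 point shows risk aversion in the gain frame and risk seeking in the loss frame, which is the mechanism behind the higher answering rates under the Negative rule.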
A Branch-and-Price approach to find optimal decision trees
Firat, M.; Crognier, Guillaume; Gabor, Adriana; Zhang, Y.
2018-01-01
In the Artificial Intelligence (AI) field, decision trees have gained importance due to their effectiveness in solving classification and regression problems. Recently, the literature has formulated the search for optimal decision trees as Mixed Integer Linear Programming (MILP) models. This
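For contrast with the MILP formulations the abstract mentions, the depth-1 special case (a decision stump) is small enough to solve exactly by brute-force enumeration; this illustrative sketch minimizes misclassifications of a 0/1 label.

```python
# Exhaustively find the optimal decision stump: the (feature, threshold) pair
# with the fewest misclassifications, allowing either leaf labeling.

def best_stump(X, y):
    """X: list of feature rows; y: 0/1 labels. Returns (feature, threshold, errors)."""
    best = (None, None, len(y) + 1)  # sentinel worse than any real stump
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] > t else 0 for row in X]
            err = sum(p != v for p, v in zip(pred, y))
            err = min(err, len(y) - err)  # flipping both leaves is also allowed
            best = min(best, (f, t, err), key=lambda z: z[2])
    return best
```

Deeper trees make this enumeration explode combinatorially, which is why exact approaches such as MILP or Branch-and-Price become attractive.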
Directory of Open Access Journals (Sweden)
Bedatri Moulik
2015-08-01
The field of hybrid vehicles has undergone intensive research and development, primarily due to increasing concern over depleting resources and increasing pollution. In order to investigate further options to optimize the performance of hybrid vehicles with regard to different criteria, such as fuel economy and battery aging, a detailed state-of-the-art review is presented in this contribution. Different power management and optimization techniques are discussed, focusing on rule-based power management and multi-objective optimization techniques. The extent to which rule-based power management and optimization can address battery aging issues is investigated, along with implementation in real-time driving scenarios where no pre-defined drive cycle is followed. The goal of this paper is to illustrate the significance and applications of rule-based power management optimization based on previous contributions.
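A rule-based power-management controller of the kind such reviews survey can be as simple as a pair of threshold rules. The state-of-charge thresholds and engine operating point below are hypothetical illustration values, not taken from any cited controller.

```python
# Toy rule-based power split for a hybrid drivetrain: protect the battery at
# low state of charge; otherwise run the engine at an assumed efficient point
# and let the battery cover peaks or absorb surplus (all numbers invented).

def power_split(demand_kw, soc, soc_min=0.3, engine_opt_kw=30):
    if soc <= soc_min:
        # battery depleted: the engine covers the whole demand
        return {"engine": demand_kw, "battery": 0.0}
    # hold the engine at its efficient operating point; battery output is
    # positive when supplying a peak, negative when being charged
    return {"engine": engine_opt_kw, "battery": demand_kw - engine_opt_kw}
```

Multi-objective optimization then enters by tuning thresholds like `soc_min` against competing criteria such as fuel economy and battery aging.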
Hurford, Anthony; Harou, Julien
2014-05-01
Water-related ecosystem services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches are needed to help guide water resources management decisions that account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume-dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments impacts the system's trade-offs. During the multi-criteria search (optimisation), performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation, and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.
Optimal decisions principles of programming
Lange, Oskar
1971-01-01
Optimal Decisions: Principles of Programming deals with all important problems related to programming. This book provides a general interpretation of the theory of programming based on the application of Lagrange multipliers, followed by a presentation of marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed. This text covers Koopmans' model of transportation, the geometric interpretation of the programming problem, and the nature of activity analysis. The solution of t
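As a small worked instance of the Lagrange-multiplier method the book centers on (the example is illustrative, not from the text): maximize f(x, y) = xy subject to x + y = c. Stationarity of the Lagrangian gives y = λ and x = λ, so x = y = c/2, and the multiplier λ = c/2 measures the marginal value of relaxing the constraint.

```python
# Closed-form solution of: maximize x*y subject to x + y = c.
# From grad f = lam * grad g: y = lam and x = lam, hence x = y = c / 2.

def max_product_fixed_sum(c):
    x = y = lam = c / 2.0  # lam is the shadow price of the constraint
    return x, y, lam
```

Any feasible perturbation along the constraint (shifting weight from y to x while keeping the sum fixed) lowers the product, confirming the point is a constrained maximum.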
Biometric image enhancement using decision rule based image fusion techniques
Sagayee, G. Mary Amirtha; Arumugam, S.
2010-02-01
Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face and, moreover, is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a risk. The proposed work describes how image quality can be improved by introducing an image fusion technique at the sensor level. The results of the images after applying the decision rule based image fusion technique are evaluated and analyzed with their entropy levels and root mean square error.
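The two evaluation metrics named in the abstract, entropy and root mean square error, can be sketched as follows. This is a simplified illustration operating on flat lists of pixel intensities, not on actual fingerprint images:

```python
import math

def entropy(pixels):
    """Shannon entropy (bits) of a pixel-intensity histogram; higher
    entropy indicates a more information-rich fused image."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rmse(reference, fused):
    """Root mean square error between a reference and a fused image."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(reference, fused))
                     / len(reference))

# Two equally frequent intensity levels carry exactly 1 bit of entropy.
print(entropy([0, 0, 255, 255]))  # 1.0
```

A fusion rule would then be judged by whether the fused image raises entropy while keeping RMSE against a reference low.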
Tian, Yangge; Bian, Fuling
2007-06-01
Artificial intelligence technology can be integrated with a geographic information system to build a spatial decision support system (SDSS). The paper discusses the structure of an SDSS and, after comparing the characteristics of rule-based reasoning (RBR) and case-based reasoning (CBR), proposes the framework of a spatial decision system that combines RBR and CBR, taking advantage of both. The paper also discusses CBR in agricultural spatial decisions, the application of artificial neural networks (ANN) in CBR, and the enrichment of the inference rule base with association rules. The design of the system is tested and verified on the example of evaluating crops' adaptability.
Amsterdam wrist rules: A clinical decision aid
Directory of Open Access Journals (Sweden)
Bentohami Abdelali
2011-10-01
Full Text Available Abstract Background Acute trauma of the wrist is one of the most frequent reasons for visiting the Emergency Department. These patients are routinely referred for radiological examination. Most X-rays, however, do not reveal any fractures. A clinical decision rule determining the need for X-rays in patients with acute wrist trauma may help to filter and select patients with fractures. Methods/Design This study will be a multi-center observational diagnostic study in which the data will be collected cross-sectionally. The study population will consist of all consecutive adult patients (≥18 years) presenting with acute wrist trauma at the Emergency Department of the participating hospitals. This research comprises two components: one study will be conducted to determine which clinical parameters are predictive of the presence of a distal radius fracture in adult patients presenting to the Emergency Department following acute wrist trauma. These clinical parameters are defined by trauma mechanism, physical examination, and functional testing. The data will be collected in two of the three participating hospitals and will be assessed using logistic regression modelling to estimate the regression coefficients, after which a reduced model will be created by means of a log-likelihood ratio test. The accuracy of the model will be estimated by a goodness-of-fit test and an ROC curve. The final model will be validated internally through bootstrapping and, by shrinking it, an adjusted model will be generated. In the second component of this study, the developed prediction model will be validated in a new dataset consisting of a population of patients from the third hospital. If necessary, the model will be calibrated using the data from the validation study. Discussion Wrist trauma is frequently encountered at the Emergency Department. However, to date, no decision rule regarding this type of trauma has been created. Ideally, radiographs are
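The ROC evaluation planned in the protocol can be illustrated with a minimal AUC computation in the Mann-Whitney formulation. The scores and labels below are made up for illustration; a real analysis would use the fitted logistic model's predicted probabilities:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separating score gives AUC = 1.0; chance level is 0.5.
print(roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```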
Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.
2010-01-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…
Optimization of protection as a decision-making tool for radioactive waste disposal
International Nuclear Information System (INIS)
Bragg, K.
1988-03-01
This paper discusses whether optimization of radiation protection is a workable or helpful concept or tool with respect to decisions in the field of long-term radioactive waste management. Examples of three waste types (high-level, low-level and uranium mine tailings) are used to illustrate that actual decisions are made taking account of more complex factors and that optimization of protection plays a relatively minor role. It is thus concluded that it is not a useful general tool for waste management decision-making. Discussion of the nature of the differences between technical and non-technical factors is also presented along with suggestions to help facilitate future decision-making
Prahl, Andrew; Dexter, Franklin; Braun, Michael T; Van Swol, Lyn
2013-11-01
Because operating room (OR) management decisions with optimal choices are made with ubiquitous biases, decisions are improved with decision-support systems. We reviewed experimental social-psychology studies to explore what an OR leader can do when working with stakeholders lacking interest in learning the OR management science but expressing opinions about decisions, nonetheless. We considered shared information to include the rules-of-thumb (heuristics) that make intuitive sense and often seem "close enough" (e.g., staffing is planned based on the average workload). We considered unshared information to include the relevant mathematics (e.g., staffing calculations). Multiple studies have shown that group discussions focus more on shared than unshared information. Quality decisions are more likely when all group participants share knowledge (e.g., have taken a course in OR management science). Several biases in OR management are caused by humans' limited abilities to estimate tails of probability distributions in their heads. Groups are more susceptible to analogous biases than are educated individuals. Since optimal solutions are not demonstrable without groups sharing common language, only with education of most group members can a knowledgeable individual influence the group. The appropriate model of decision-making is autocratic, with information obtained from stakeholders. Although such decisions are good quality, the leaders often are disliked and the decisions considered unjust. In conclusion, leaders will find the most success if they do not bring OR management operational decisions to groups, but instead act autocratically while obtaining necessary information in 1:1 conversations. The only known route for the leader making such decisions to be considered likable and for the decisions to be considered fair is through colleagues and subordinates learning the management science.
Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø
2015-12-01
Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and the uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although a strategy may be efficient and feasible, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before it can be considered optimal. Copyright © 2015. Published by Elsevier Inc.
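The efficiency-frontier logic used here, eliminating dominated strategies and then computing incremental ratios between adjacent frontier strategies, can be sketched as follows. The strategy tuples are illustrative, not the Norwegian triage algorithms, and only strong dominance is handled (extended dominance is omitted for brevity):

```python
def efficiency_frontier(strategies):
    """strategies: list of (name, cost, effect) tuples.
    Returns the strongly non-dominated strategies sorted by effect,
    each paired with its incremental cost-effectiveness ratio (ICER)
    relative to the previous strategy on the frontier."""
    def dominated(s):
        # Another strategy is at least as cheap AND at least as
        # effective, and strictly better on one of the two.
        return any(o is not s and o[1] <= s[1] and o[2] >= s[2]
                   and (o[1] < s[1] or o[2] > s[2])
                   for o in strategies)

    frontier = sorted((s for s in strategies if not dominated(s)),
                      key=lambda s: s[2])
    result = [(frontier[0][0], None)]  # cheapest strategy has no ICER
    for prev, cur in zip(frontier, frontier[1:]):
        result.append((cur[0], (cur[1] - prev[1]) / (cur[2] - prev[2])))
    return result

# Strategy C costs more than B yet detects less, so it is dominated.
print(efficiency_frontier([("A", 100, 10), ("B", 200, 12), ("C", 300, 11)]))
```

A decision maker then picks the frontier strategy whose ICER does not exceed their willingness to pay per added unit of benefit.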
Decision-Aiding and Optimization for Vertical Navigation of Long-Haul Aircraft
Patrick, Nicholas J. M.; Sheridan, Thomas B.
1996-01-01
Most decisions made in the cockpit are related to safety, and have therefore been proceduralized in order to reduce risk. Very few are made on the basis of a value metric such as economic cost. One that can be shown to be value based, however, is the selection of a flight profile. Fuel consumption and flight time both have a substantial effect on aircraft operating cost, but they cannot be minimized simultaneously. In addition, winds, turbulence, and performance vary widely with altitude and time. These factors make it important and difficult for pilots to (a) evaluate the outcomes associated with a particular trajectory before it is flown and (b) decide among possible trajectories. The two elements of this problem considered here are: (1) determining what constitutes optimality, and (2) finding optimal trajectories. Pilots and dispatchers from major U.S. airlines were surveyed to determine which attributes of the outcome of a flight they considered the most important. Avoiding turbulence, for passenger comfort, topped the list of items that were not safety related. Pilots' decision making about the selection of flight profile on the basis of flight time, fuel burn, and exposure to turbulence was then observed. Of the several behavioral and prescriptive decision models invoked to explain the pilots' choices, utility maximization is shown to best reproduce the pilots' decisions. After considering more traditional methods for optimizing trajectories, a novel method is developed using a genetic algorithm (GA) operating on a discrete representation of the trajectory search space. The representation is a sequence of command altitudes, and was chosen to be compatible with the constraints imposed by Air Traffic Control, and with the training given to pilots. Since trajectory evaluation for the GA is performed holistically, a wide class of objective functions can be optimized easily. Also, using the GA it is possible to compare the costs associated with
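A genetic algorithm over a discrete sequence of command altitudes, in the spirit described above, might look like the following sketch. The altitude levels, the toy cost function, and the GA settings are all invented placeholders, not the thesis's actual cost model:

```python
import random

ALT_LEVELS = [31000, 33000, 35000, 37000]   # hypothetical flight levels (ft)
TARGET = [35000] * 6                        # hypothetical minimum-cost profile

def cost(profile):
    """Toy stand-in for fuel/time/turbulence cost: deviation from TARGET."""
    return sum(abs(a - t) for a, t in zip(profile, TARGET))

def evolve(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    n = len(TARGET)
    pop = [[rng.choice(ALT_LEVELS) for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # elitism: best half survives
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # occasional mutation
                child[rng.randrange(n)] = rng.choice(ALT_LEVELS)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

Because the trajectory is evaluated holistically by `cost`, any objective (fuel, time, turbulence exposure, or a utility combining them) can be dropped in without changing the search.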
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research focuses on the pre-request context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework.
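A multistage Markov decision problem of the kind described is classically solved by backward induction over a finite horizon. The sketch below uses a deliberately tiny made-up negotiation model (two states, two actions), not the paper's formulation:

```python
def backward_induction(horizon, states, actions, reward, transition):
    """Finite-horizon MDP solver. transition(s, a) yields (next_state,
    prob) pairs; returns the stage-0 optimal value function and the
    per-stage optimal policy."""
    V = {s: 0.0 for s in states}          # terminal values
    policy = []
    for _ in range(horizon):
        newV, pi = {}, {}
        for s in states:
            def q(a):   # expected reward-to-go of action a in state s
                return reward(s, a) + sum(p * V[s2]
                                          for s2, p in transition(s, a))
            best = max(actions, key=q)
            pi[s], newV[s] = best, q(best)
        V = newV
        policy.insert(0, pi)
    return V, policy

# Toy model: state 0 = "conflict", 1 = "agreement"; "concede" moves to
# agreement and earns 1, "hold" stays put and earns 0.
states, actions = [0, 1], ["concede", "hold"]
reward = lambda s, a: 1.0 if a == "concede" else 0.0
transition = lambda s, a: [(1, 1.0)] if a == "concede" else [(s, 1.0)]
V, policy = backward_induction(3, states, actions, reward, transition)
print(V)  # {0: 3.0, 1: 3.0}: conceding at every stage earns 1 per stage
```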
Research on the decision-making model of land-use spatial optimization
He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu
2009-10-01
Using the optimization results of landscape pattern and land use structure optimization as constraints on the CA simulation, a decision-making model of land-use spatial optimization is established by coupling the landscape pattern model with cellular automata, realizing land use quantitative and spatial optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are made to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
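Whatever search heuristic is used (Apriori, a GA, or the hybrid ARGSA above), an association rule A → B is still scored by its support and confidence; a minimal sketch follows, with toy learner-profile transactions invented for illustration:

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Conditional frequency: support(A and B) / support(A)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

# Hypothetical learner-profile transactions (sets of attribute tags).
logs = [{"video", "quiz"}, {"video", "quiz"}, {"video"}, {"forum"}]
print(confidence({"video"}, {"quiz"}, logs))  # 2/3 of "video" learners also took the quiz
```

A mining algorithm searches the space of candidate rules for those exceeding user-set support and confidence thresholds; the rule-explosion problem is precisely that too many candidates pass.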
Heuristic rules embedded genetic algorithm to solve VVER loading pattern optimization problem
International Nuclear Information System (INIS)
Fatih, Alim; Kostandi, Ivanov
2006-01-01
Full text: Loading pattern (LP) optimization is one of the most important aspects of the operation of nuclear reactors. A genetic algorithm (GA) code, GARCO (Genetic Algorithm Reactor Optimization Code), has been developed with embedded heuristic techniques to perform optimization calculations for in-core fuel management tasks. GARCO is a practical tool that includes a unique methodology applicable to all types of Pressurized Water Reactor (PWR) cores having different geometries with an unlimited number of FA types in the inventory. GARCO was developed by modifying the classical representation of the genotype. Both the genotype representation and the basic algorithm have been modified to incorporate in-core fuel management heuristic rules so as to obtain the best results in a shorter time. GARCO has three modes. Mode 1 optimizes the locations of the fuel assemblies (FAs) in the nuclear reactor core, Mode 2 optimizes the placement of the burnable poisons (BPs) in a selected LP, and Mode 3 optimizes both the LP and the BP placement in the core simultaneously. This study describes the basic algorithm for Mode 1. The GARCO code is applied to the hexagonal-geometry VVER-1000 reactor core in this study. The Moby-Dick code, developed by SKODA Inc. to analyze VVER reactors, is used as the reactor physics code to deplete FAs in the core. To use these rules for creating the initial population with GA operators, a worth-definition application was developed. Each FA has a worth value for each location, between 0 and 1. If the worth of an FA for a location is larger than 0.5, that FA is a good choice for this location. When creating the initial population of LPs, a subroutine provides a percentage of individuals that have genes with worth higher than 0.5. The percentage of the population to be created without using the worth definition is defined in the GARCO input. An age concept has also been developed to accelerate the GA calculation process in reaching the
Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system
Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei
2017-08-01
The derivation of joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize maximum water deficit and maximize water supply reliability. The water-supply system of Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and aggregated standard operating policy for both water supply and hydropower generation due to the use of hedging mechanism and effective coordination among multiple objectives.
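The hedging mechanism underlying the aggregated model can be sketched in its simplest one-point form. The trigger volume and rationing ratio below are illustrative parameters, not those derived for the Li River system:

```python
def hedging_release(storage, inflow, demand, trigger, ratio=0.6):
    """One-point hedging rule: supply the full demand when available
    water is ample; otherwise ration supply now to soften a possible
    severe deficit later in the drought."""
    available = storage + inflow
    if available >= trigger:
        return min(demand, available)
    return min(ratio * demand, available)

# With ample water the full demand is met; in drought only 60% is released.
print(hedging_release(80, 40, 50, trigger=100))   # 50
print(hedging_release(30, 20, 50, trigger=100))   # 30.0
```

In the paper's setting both the trigger and the rationing ratio become time-varying decision variables tuned by NSGA-II against the deficit and reliability objectives.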
Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis
International Nuclear Information System (INIS)
Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.
2006-01-01
As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information together in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the receiver operating characteristic (ROC) area under the curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p 0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets
International Nuclear Information System (INIS)
Slaar, Annelie; Maas, Mario; Rijn, Rick R. van; Walenkamp, Monique M.J.; Bentohami, Abdelali; Goslings, J.C.; Steyerberg, Ewout W.; Jager, L.C.; Sosef, Nico L.; Velde, Romuald van; Ultee, Jan M.; Schep, Niels W.L.
2016-01-01
In most hospitals, children with acute wrist trauma are routinely referred for radiography. Our aim was to develop and validate a clinical decision rule to decide whether radiography in children with wrist trauma is required. We prospectively developed and validated a clinical decision rule in two study populations. All children who presented in the emergency department of four hospitals with pain following wrist trauma were included and evaluated for 18 clinical variables. The outcome was a wrist fracture diagnosed by plain radiography. Included in the study were 787 children. The prediction model consisted of six variables: age, swelling of the distal radius, visible deformation, distal radius tender to palpation, anatomical snuffbox tender to palpation, and painful or abnormal supination. The model showed an area under the receiver operating characteristic curve of 0.79 (95% CI: 0.76-0.83). The sensitivity and specificity were 95.9% and 37.3%, respectively. The use of this model would have resulted in a 22% absolute reduction of radiographic examinations. In a validation study, 7/170 fractures (4.1%, 95% CI: 1.7-8.3%) would have been missed using the decision model. The decision model may be a valuable tool to decide whether radiography in children after wrist trauma is required. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Slaar, Annelie; Maas, Mario; Rijn, Rick R. van [University of Amsterdam, Department of Radiology, Academic Medical Centre, Meibergdreef 9, 1105, AZ, Amsterdam (Netherlands); Walenkamp, Monique M.J.; Bentohami, Abdelali; Goslings, J.C. [University of Amsterdam, Trauma Unit, Department of Surgery, Academic Medical Centre, Amsterdam (Netherlands); Steyerberg, Ewout W. [Erasmus MC - University Medical Centre, Department of Public Health, Rotterdam (Netherlands); Jager, L.C. [University of Amsterdam, Emergency Department, Academic Medical Centre, Amsterdam (Netherlands); Sosef, Nico L. [Spaarne Hospital, Department of Surgery, Hoofddorp (Netherlands); Velde, Romuald van [Tergooi Hospitals, Department of Surgery, Hilversum (Netherlands); Ultee, Jan M. [Sint Lucas Andreas Hospital, Department of Surgery, Amsterdam (Netherlands); Schep, Niels W.L. [University of Amsterdam, Trauma Unit, Department of Surgery, Academic Medical Centre, Amsterdam (Netherlands); Maasstadziekenhuis Rotterdam, Department of Surgery, Rotterdam (Netherlands)
2016-01-15
In most hospitals, children with acute wrist trauma are routinely referred for radiography. Our aim was to develop and validate a clinical decision rule to decide whether radiography in children with wrist trauma is required. We prospectively developed and validated a clinical decision rule in two study populations. All children who presented in the emergency department of four hospitals with pain following wrist trauma were included and evaluated for 18 clinical variables. The outcome was a wrist fracture diagnosed by plain radiography. Included in the study were 787 children. The prediction model consisted of six variables: age, swelling of the distal radius, visible deformation, distal radius tender to palpation, anatomical snuffbox tender to palpation, and painful or abnormal supination. The model showed an area under the receiver operating characteristic curve of 0.79 (95% CI: 0.76-0.83). The sensitivity and specificity were 95.9% and 37.3%, respectively. The use of this model would have resulted in a 22% absolute reduction of radiographic examinations. In a validation study, 7/170 fractures (4.1%, 95% CI: 1.7-8.3%) would have been missed using the decision model. The decision model may be a valuable tool to decide whether radiography in children after wrist trauma is required. (orig.)
Making the Optimal Decision in Selecting Protective Clothing
International Nuclear Information System (INIS)
Price, J. Mark
2008-01-01
Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for decommissioning or for maintenance and repair work on radioactive systems, a number of interrelated factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single-use protective clothing (SUPC), and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed, including representatives from budgets and cost, safety, radwaste, health physics, and key user groups, to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'
Optimization of protection as a decision-making tool for radioactive waste disposal
International Nuclear Information System (INIS)
Bragg, K.
1988-01-01
Politically-based considerations and processes, including public perception and confidence, appear to be the basis for real decisions affecting waste management activities such as siting, construction, operation and monitoring. Optimization of radiation protection is not a useful general tool for waste disposal decision making. Optimization of radiation protection is essentially a technical tool which can, under appropriate circumstances, provide a clear preference among major management options. The level of discrimination will be case-specific but, in general, only fairly coarse differences can be discriminated. The preferences determined by optimization of protection tend not to be related to the final choices made for disposal of radioactive wastes. Tools such as multi-attribute analysis are very useful as they provide a convenient means to rationalize the real decisions and give them some air of technical respectability. They do not, however, provide the primary basis for the decisions. Technical experts must develop an awareness of the non-technical approach to decision making and attempt to adjust their methods of analysis and their presentation of information to encourage dialogue rather than confrontation. Simple expressions of technical information will be needed, and the use of analogues should prove helpful.
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules
Directory of Open Access Journals (Sweden)
Rajkumar Rajavel
2015-01-01
Full Text Available Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research focuses on the pre-request context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision using the stochastic decision tree scenario can maximize the revenue among the participants available in the cloud service negotiation framework.
A cognitive decision agent architecture for optimal energy management of microgrids
International Nuclear Information System (INIS)
Velik, Rosemarie; Nicolay, Pascal
2014-01-01
Highlights: • We propose an optimization approach for energy management in microgrids. • The optimizer emulates processes involved in human decision making. • Optimization objectives are energy self-consumption and financial gain maximization. • We gain improved optimization results in significantly reduced computation time. - Abstract: Via the integration of renewable energy and storage technologies, buildings have started to change from passive (electricity) consumers to active prosumer microgrids. Along with this development comes a shift from centralized to distributed production and consumption models, as well as discussions about the introduction of variable demand-supply-driven grid electricity prices. Together with upcoming ICT and automation technologies, these developments open up a wide range of novel energy management and energy trading possibilities to make optimal use of available energy resources. However, what is considered an optimal energy management and trading strategy heavily depends on the individual objectives and needs of a microgrid operator. Accordingly, elaborating the most suitable strategy for each particular system configuration and operator need can become quite a complex and time-consuming task, which can massively benefit from computational support. In this article, we introduce a bio-inspired cognitive decision agent architecture for optimized, goal-specific energy management in (interconnected) microgrids, which are additionally connected to the main electricity grid. For evaluating the performance of the architecture, a number of test cases are specified targeting objectives like local photovoltaic energy consumption maximization and financial gain maximization. Obtained outcomes are compared against a modified simulated annealing optimization approach in terms of objective achievement and computational effort. Results demonstrate that the cognitive decision agent architecture yields improved optimization results in
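The baseline the authors compare against, simulated annealing, accepts occasional uphill moves with a temperature-dependent probability. A generic sketch follows; the geometric cooling schedule and the toy objective are invented for illustration, not the paper's modified variant:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=400, seed=0):
    """Minimize `cost` by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        # Always accept improvements; accept worsenings with
        # probability exp(-(cy - c) / t), which shrinks as t cools.
        if cy <= c or rng.random() < math.exp((c - cy) / max(t, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= alpha
    return best, best_c

# Toy scheduling objective: find x near 3 minimizing (x - 3)^2.
best, best_c = anneal(lambda x: (x - 3.0) ** 2,
                      lambda x, rng: x + rng.gauss(0.0, 0.5),
                      x0=10.0)
```

For a microgrid, `x` would encode a schedule of device set-points and `cost` the negative of self-consumption or financial gain; the cognitive agent in the paper is compared against such a search in objective value and runtime.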
Developing Novel Reservoir Rule Curves Using Seasonal Inflow Projections
Tseng, Hsin-yi; Tung, Ching-pin
2015-04-01
Due to significant seasonal rainfall variations, reservoirs and their flexible operational rules are indispensable to Taiwan. Furthermore, with the intensifying impacts of climate change on extreme climate, the frequency of droughts in Taiwan has been increasing in recent years. Drought is a creeping phenomenon; its slow onset makes it difficult to detect at an early stage and causes delays in making the best decisions on allocating water. For these reasons, novel reservoir rule curves using projected seasonal streamflow are proposed in this study, which can potentially reduce the adverse effects of drought. This study is dedicated to establishing new rule curves that consider both current available storage and anticipated monthly inflows with a lead time of two months to reduce the risk of water shortage. The monthly inflows are projected based on the seasonal climate forecasts from the Central Weather Bureau (CWB), with a weather generation model used to produce daily weather data for the hydrological component of the GWLF. To incorporate future monthly inflow projections into rule curves, this study designs a decision flow index that is a linear combination of current available storage and inflow projections with a lead time of two months. By optimizing the linear coefficients of the decision flow index, the shape of the rule curves, and the percentage of water supplied in each zone, the best rule curves to decrease water shortage risk and impacts can be developed. The Shimen Reservoir in northern Taiwan is used as a case study to demonstrate the proposed method. Existing rule curves (M5 curves) of the Shimen Reservoir are compared with two cases of new rule curves, based on hindcast simulations and historic seasonal forecasts. The results show that the new rule curves can decrease the total water shortage ratio and, in addition, can allocate shortage amounts to preceding months to avoid extreme shortage events. Even though some uncertainties in
International Nuclear Information System (INIS)
Procaccia, H.; Cordier, R.; Muller, S.
1994-07-01
Statistical decision theory could be an alternative for optimizing preventive maintenance periodicity. Indeed, this theory concerns the situation in which a decision maker has to choose among a set of reasonable decisions, where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts given the observed operating feedback, the states of nature are the associated failure probabilities, and the losses are the expectations of the induced cost of maintenance and of the consequences of failures. As failure probabilities concern rare events at the ultimate stage of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for failure probabilities is modeled from expert knowledge and combined with the sparse stochastic information provided by operating feedback, giving a posterior distribution of failure probabilities. The optimal decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators, which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical, so Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
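The Bayesian workflow sketched in the record above (expert prior, feedback data, posterior, minimum expected loss over candidate periodicities) can be illustrated with a minimal Beta-Binomial example; the prior, costs, failure counts, and risk multipliers below are invented for illustration and are not the authors' data:

```python
def beta_binomial_posterior(a, b, failures, trials):
    """Combine a Beta(a, b) prior on the failure probability with
    binomial feedback data: `failures` observed in `trials` demands."""
    return a + failures, b + (trials - failures)

def expected_loss(p_fail, maint_cost, failure_cost, n_maint):
    """Expected yearly loss: maintenance cost plus expected failure cost."""
    return n_maint * maint_cost + p_fail * failure_cost

# Expert prior: failures are rare -> Beta(1, 19), prior mean 0.05.
a, b = beta_binomial_posterior(1, 19, failures=1, trials=40)
p_mean = a / (a + b)  # posterior mean failure probability

# Candidate periodicities (maintenances per year); a longer periodicity is
# assumed to raise the failure probability (illustrative multipliers only).
risk_multiplier = {1: 2.0, 2: 1.0, 4: 0.5}
losses = {n: expected_loss(p_mean * m, maint_cost=10.0,
                           failure_cost=500.0, n_maint=n)
          for n, m in risk_multiplier.items()}
best_periodicity = min(losses, key=losses.get)  # decision minimizing loss
```

With these invented numbers the intermediate periodicity wins: more frequent maintenance costs more than it saves, and less frequent maintenance lets the expected failure cost dominate.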
Optimization-based decision support systems for planning problems in processing industries
Claassen, G.D.H.
2014-01-01
Summary: Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in practice.
Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.
Wunderlich, Adam; Goossens, Bart; Abbey, Craig K
2016-09-01
Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.
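As a concrete companion to the ROC-type summary curves discussed above, the following sketch computes an empirical ROC curve and its trapezoidal area from decision-variable scores; the score values are hypothetical:

```python
def roc_points(signal_scores, noise_scores):
    """Empirical ROC: sweep a threshold over all observed scores and
    record (false-positive fraction, true-positive fraction) pairs."""
    thresholds = sorted(set(signal_scores) | set(noise_scores), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpf = sum(s >= t for s in signal_scores) / len(signal_scores)
        fpf = sum(s >= t for s in noise_scores) / len(noise_scores)
        pts.append((fpf, tpf))
    return pts

def auc(pts):
    """Trapezoidal area under the empirical ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Hypothetical decision-variable scores for signal-present and
# signal-absent observations.
sig = [2.5, 1.8, 3.1, 2.2]
noi = [0.9, 1.5, 2.0, 1.1]
curve = roc_points(sig, noi)
```

The LROC/EROC/FROC variants in the record replace the true-positive fraction with a utility-weighted expectation, but the plot-against-a-penalty structure is the same.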
2010-03-01
EVOLUTIONARY ARTIFICIAL NEURAL NETWORK WEIGHT TUNING TO OPTIMIZE DECISION MAKING FOR AN ABSTRACT GAME (AFIT/GCS/ENG/10-06). Thesis.
Beersma, Bianca; De Dreu, Carsten K W
2002-01-01
This study examined the interactive effects of task structure, decision rule, and social motive on small-group negotiation processes and outcomes. Three-person groups negotiated either within an asymmetrical task structure (in which a majority of group members have compatible interests) or within a
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
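The optimal procedure referenced above, the sequential probability ratio test, has a particularly simple form for Poisson-distributed spike counts: the per-bin log-likelihood ratio is linear in the count. A minimal sketch with hypothetical rates, bounds, and counts (not the paper's model parameters):

```python
import math

def sprt_poisson(counts, rate_a, rate_b, log_bound):
    """Sequential probability ratio test on Poisson spike counts:
    accumulate the log-likelihood ratio of rate_a versus rate_b per
    time bin and stop as soon as either bound is crossed."""
    llr = 0.0
    for n, c in enumerate(counts, start=1):
        # log LR of one Poisson count: c*log(ra/rb) - (ra - rb)
        llr += c * math.log(rate_a / rate_b) - (rate_a - rate_b)
        if llr >= log_bound:
            return "A", n   # decide the high-rate alternative
        if llr <= -log_bound:
            return "B", n   # decide the low-rate alternative
    return "undecided", len(counts)

# Hypothetical spike counts per time bin, as if from the faster source.
counts = [9, 7, 8, 6]
decision, steps = sprt_poisson(counts, rate_a=8.0, rate_b=4.0, log_bound=3.0)
```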
Antagonistic and Bargaining Games in Optimal Marketing Decisions
Lipovetsky, S.
2007-01-01
Game theory approaches to finding optimal marketing decisions are considered. Antagonistic games with and without complete information, as well as non-antagonistic game techniques, are applied to paired comparison, ranking, or rating data for a firm and its competitors in the market. Mixed strategies, equilibria in bi-matrix games, bargaining models with…
Optimal Priority Structure, Capital Structure, and Investment
Dirk Hackbarth; David C. Mauer
2012-01-01
We study the interaction between financing and investment decisions in a dynamic model, where the firm has multiple debt issues and equityholders choose the timing of investment. Jointly optimal capital and priority structures can virtually eliminate investment distortions because debt priority serves as a dynamically optimal contract. Examining the relative efficiency of priority rules observed in practice, we develop several predictions about how firms adjust their priority structure in res...
Identification of Optimal Policies in Markov Decision Processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf
Where should I send it? Optimizing the submission decision process.
Directory of Open Access Journals (Sweden)
Santiago Salinas
Full Text Available How do scientists decide where to submit manuscripts? Many factors influence this decision, including prestige, acceptance probability, turnaround time, target audience, fit, and impact factor. Here, we present a framework for evaluating where to submit a manuscript based on the theory of Markov decision processes. We derive two models, one in which an author is trying to optimally maximize citations and another in which that goal is balanced by either minimizing the number of resubmissions or the total time in review. We parameterize the models with data on acceptance probability, submission-to-decision times, and impact factors for 61 ecology journals. We find that submission sequences beginning with Ecology Letters, Ecological Monographs, or PLOS ONE could be optimal depending on the importance given to time to acceptance or number of resubmissions. This analysis provides some guidance on where to submit a manuscript given the individual-specific values assigned to these disparate objectives.
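The core calculation behind such a framework, evaluating a fixed submission sequence by expected citations and expected total time in review, can be sketched as follows; the acceptance probabilities, decision times, and citation figures are invented, not the fitted values for the 61 ecology journals:

```python
def evaluate_sequence(journals):
    """Expected citations and expected total review time for a fixed
    submission sequence. Each journal is a tuple:
    (acceptance_prob, months_to_decision, expected_citations_if_accepted)."""
    exp_cites = 0.0
    exp_time = 0.0
    p_unaccepted = 1.0  # probability the manuscript is still in play
    for p_accept, months, cites in journals:
        exp_time += p_unaccepted * months
        exp_cites += p_unaccepted * p_accept * cites
        p_unaccepted *= (1 - p_accept)
    return exp_cites, exp_time

# Submit to a selective journal first, then a high-acceptance one
# (hypothetical parameters).
seq = [(0.2, 3.0, 40.0), (0.7, 2.0, 10.0)]
cites, months = evaluate_sequence(seq)
```

An optimizer in this spirit would search over orderings of the candidate journals, trading expected citations against expected time or number of resubmissions.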
MARKET ALLOCATION RULES FOR NONPRICE PROMOTION WITH FARM PROGRAMS: U.S. COTTON
Ding, Lily; Kinnucan, Henry W.
1996-01-01
Rules are derived to indicate the optimal allocation of a fixed promotion budget between domestic and export markets when the commodity in question represents a significant portion of world trade and is protected in the domestic market by a deficiency-payment program. Optimal allocation decisions are governed by advertising elasticities in the domestic and export markets and the export market share. Promotion's ability to lower deficiency payments is inversely related to the absolute value ...
Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte
2017-04-01
Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water-scarce and water-abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data to determine the water availability for irrigation. Irrigation demands were estimated based on estimates of potential evapotranspiration and coefficients for the crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. Decisions made using an amended set of decision rules that benefit from additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data lies in the improved estimates of demand, likewise leading to a reduction of sub-optimal decisions.
Extensions of dynamic programming as a new tool for decision tree optimization
Alkhalid, Abdulaziz
2013-01-01
The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of the tree) and decision trees (which allow an arbitrary level of accuracy). We study possibilities of sequential optimization of α-decision trees relative to different cost functions such as depth, average depth, and number of nodes. For decision trees, we analyze relationships between depth and number of misclassifications. We also discuss the results of computer experiments with some datasets from the UCI ML Repository. ©Springer-Verlag Berlin Heidelberg 2013.
Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.
2017-12-01
Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching with a weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
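The weighted crowding distance mentioned in the record is a modification of the standard NSGA-II crowding distance; the standard version can be sketched as follows (the weighting itself is the paper's contribution and is not reproduced here, and the objective vectors are hypothetical):

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors.
    Boundary points get infinite distance; each interior point sums the
    normalized gap between its two neighbors in every objective."""
    n = len(front)
    dist = [0.0] * n
    for m in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][m])
        lo, hi = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: all values equal
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][m]
                        - front[order[k - 1]][m]) / (hi - lo)
    return dist

# A hypothetical two-objective non-dominated front.
front = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (6.0, 1.0)]
d = crowding_distance(front)
```

Selection then prefers points with larger distance, preserving spread along the Pareto frontier.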
Optimization-based decision support systems for planning problems in processing industries
Claassen, G.D.H.
2014-01-01
Summary Optimization-based decision support systems for planning problems in processing industries Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in practice. The tremendous progress in hard- and software of the past decades was an important gateway for developing computerized systems that are able to support decision making on different levels within enterprises. T...
Pavement maintenance optimization model using Markov Decision Processes
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost method of MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivate this study. This paper uses a data set acquired from road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
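The paper formulates an average-cost MDP solved by linear programming; as a simpler illustration of the same decision-process idea, the following sketch runs discounted-cost value iteration on a hypothetical three-state pavement MDP (states, actions, costs, and transition probabilities are all invented):

```python
def value_iteration(P, cost, gamma=0.95, iters=500):
    """Discounted-cost value iteration for a small MDP.
    P[a][s] is the distribution over next states for action a in state s;
    cost[a][s] is the immediate cost. (The record's model instead uses an
    average-cost LP formulation; this is the simpler discounted relative.)"""
    n = len(cost[0])
    V = [0.0] * n
    for _ in range(iters):
        V = [min(cost[a][s] + gamma * sum(p * V[t]
                                          for t, p in enumerate(P[a][s]))
                 for a in range(len(cost)))
             for s in range(n)]
    policy = [min(range(len(cost)),
                  key=lambda a: cost[a][s] + gamma * sum(
                      p * V[t] for t, p in enumerate(P[a][s])))
              for s in range(n)]
    return V, policy

# States: 0 = good, 1 = fair, 2 = poor. Actions: 0 = do nothing, 1 = rehab.
P = [
    [[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]],  # deterioration
    [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.8, 0.2, 0.0]],  # restoration
]
cost = [
    [0.0, 2.0, 10.0],  # user cost of worn pavement
    [1.0, 4.0, 6.0],   # works cost plus residual user cost
]
V, policy = value_iteration(P, cost)
```

With these numbers the optimal policy does nothing while the pavement is good and rehabilitates once it is fair or poor.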
International Nuclear Information System (INIS)
Zhang, Yue-Jun; Wang, Ao-Dong; Tan, Weiping
2015-01-01
It is an important task for China to allocate carbon emission allowances in order to realize its carbon reduction target and establish a carbon trading market. China has designed several allocation rules within seven pilot regions. The influence those rules may have is closely related to the enthusiasm of enterprises covered by the emission trading scheme (ETS) for participating in the carbon market and, more importantly, to the mechanism design and sustainable development of the carbon market. For this purpose, a multi-stage profit model is developed to analyze the ETS-covered enterprises' product prices and emission reduction behaviors under different allocation rules. The results show that, first, under the rules of grandfathering, self-declaration and auctioning, when deciding the optimal product price and optimal carbon emission reduction, those enterprises may focus on maximizing current-stage profit; however, under the rule of benchmarking, those enterprises may care more about the impact of current decisions on profit in the next stage. Second, the optimal product price policy is positively correlated with the price of similar products, consumers' low-carbon awareness and government subsidy. Finally, along with the increase of carbon price, consumers' low-carbon awareness and government subsidy and the decrease of the carbon emission cap, those enterprises tend to reduce carbon emissions. - Highlights: • Analyze the impact of carbon allowance allocation rules on ETS-covered enterprises. • For grandfathering, self-declaration and auctioning, they may maximize current profits. • For benchmarking, they care about the effect of current decisions on future profits. • The optimal product price relates positively to low-carbon awareness and subsidy. • Rises in carbon price, low-carbon awareness and subsidy lead to emission reductions.
Road maintenance optimization through a discrete-time semi-Markov decision process
International Nuclear Information System (INIS)
Zhang Xueqing; Gao Hui
2012-01-01
Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled either as a discrete-time Markov process, such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process, such that an optimal maintenance policy can be obtained based on renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits, while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies that minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network with a large number of road segments and that they conveniently incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states, and randomness in selecting actions occurs only for a few states.
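The discrete Weibull holding-time distribution assumed above is easiest to state through its survival function, P(T > k) = q^(k^β); a small sketch with hypothetical parameters:

```python
def discrete_weibull_pmf(k, q, beta):
    """P(T = k) for k = 1, 2, ..., derived from the survival form
    P(T > k) = q ** (k ** beta), with 0 < q < 1 and beta > 0."""
    return q ** ((k - 1) ** beta) - q ** (k ** beta)

# Hypothetical parameters for the holding time (in years) of one state.
q, beta = 0.8, 1.5
ks = range(1, 200)  # truncation point; the tail beyond is negligible here
pmf = [discrete_weibull_pmf(k, q, beta) for k in ks]
mean_holding_time = sum(k * p for k, p in zip(ks, pmf))
```

These probabilities feed the semi-Markov transition structure that the record's linear programs are built on; β > 1 gives an increasing hazard, i.e. deterioration accelerating with time in state.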
Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail
2013-01-01
This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach.
The merits of unconscious thought in rule detection.
Li, Jiansheng; Zhu, Yawen; Yang, Yang
2014-01-01
According to unconscious thought theory (UTT), unconscious thought is more adept at complex decision-making than is conscious thought. Related research has mainly focused on the complexity of decision-making tasks as determined by the amount of information provided. However, the complexity of the rules generating this information also influences decision making. Therefore, we examined whether unconscious thought facilitates the detection of rules during a complex decision-making task. Participants were presented with two types of letter strings. One type matched a grammatical rule, while the other did not. Participants were then divided into three groups according to whether they made decisions using conscious thought, unconscious thought, or immediate decision. The results demonstrated that the unconscious thought group was more accurate in identifying letter strings that conformed to the grammatical rule than were the conscious thought and immediate decision groups. Moreover, performance of the conscious thought and immediate decision groups was similar. We conclude that unconscious thought facilitates the detection of complex rules, which is consistent with UTT.
The merits of unconscious thought in rule detection.
Directory of Open Access Journals (Sweden)
Jiansheng Li
Full Text Available According to unconscious thought theory (UTT), unconscious thought is more adept at complex decision-making than is conscious thought. Related research has mainly focused on the complexity of decision-making tasks as determined by the amount of information provided. However, the complexity of the rules generating this information also influences decision making. Therefore, we examined whether unconscious thought facilitates the detection of rules during a complex decision-making task. Participants were presented with two types of letter strings. One type matched a grammatical rule, while the other did not. Participants were then divided into three groups according to whether they made decisions using conscious thought, unconscious thought, or immediate decision. The results demonstrated that the unconscious thought group was more accurate in identifying letter strings that conformed to the grammatical rule than were the conscious thought and immediate decision groups. Moreover, performance of the conscious thought and immediate decision groups was similar. We conclude that unconscious thought facilitates the detection of complex rules, which is consistent with UTT.
Second Order Optimality in Markov Decision Chains
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2017-01-01
Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : Markov decision chains * second order optimality * optimalilty conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf
Locally optimized separability enhancement indices for urban land cover mapping
DEFF Research Database (Denmark)
Feyisa, Gudina L.; Meilby, Henrik; Darrel Jenerette, G.
2016-01-01
data in LULC classification. To more accurately quantify landscape patterns and their changes, we applied new locally optimized separability enhancement indices and decision rules (SEI–DR approach) to address commonly observed classification accuracy problems in urban environments. We tested the SEI...
Learning decision trees with flexible constraints and objectives using integer optimization
Verwer, S.; Zhang, Y.
2017-01-01
We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows
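The exact-optimization spirit of learning a provably optimal tree can be illustrated at depth 1 by plain enumeration; the record's DTIP method uses integer programming and scales to depth 5, so this toy version is only a sketch on an invented dataset:

```python
def best_stump(X, y):
    """Exhaustively search for the optimal depth-1 decision tree (stump)
    over binary features: every (feature, leaf-labeling) pair is scored,
    so the returned stump is exactly optimal for this tiny search space."""
    best = None
    for f in range(len(X[0])):
        for left_label in (0, 1):
            right_label = 1 - left_label
            acc = sum((left_label if x[f] == 0 else right_label) == label
                      for x, label in zip(X, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, left_label)
    return best  # (accuracy, split feature, label when feature == 0)

# Toy dataset: the label simply copies the first feature.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
acc, feat, left = best_stump(X, y)
```

An integer-programming encoding replaces this enumeration with branching and leaf-assignment variables, which is what keeps deeper trees tractable.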
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets
DEFF Research Database (Denmark)
Saez Gallego, Javier
This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy ... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers ...
A modeling framework for optimal long-term care insurance purchase decisions in retirement planning.
Gupta, Aparna; Li, Lepeng
2004-05-01
The level of need for and costs of obtaining long-term care (LTC) during retired life require that planning for it be an integral part of retirement planning. In this paper, we divide retirement planning into two phases, pre-retirement and post-retirement. On the basis of four interrelated models for health evolution, wealth evolution, LTC insurance premium and coverage, and LTC cost structure, a framework for optimal LTC insurance purchase decisions in the pre-retirement phase is developed. Optimal decisions are obtained by developing a trade-off between post-retirement LTC costs and LTC insurance premiums and coverage. Two-way branching models are used to model stochastic health events and asset returns. The resulting optimization problem is formulated as a dynamic programming problem. We compare the optimal decisions under two insurance purchase scenarios: one assumes that insurance is purchased for good, and the other assumes it may be purchased, relinquished and re-purchased. Sensitivity analysis is performed for the retirement age.
Directory of Open Access Journals (Sweden)
Sangjun Park
2014-01-01
Full Text Available We consider a two-stage supply chain with one supplier and one retailer. The retailer sells a product to customers, and the supplier provides the product in a make-to-order mode. In this case, the supplier's decisions on service time and service level and the retailer's decision on retail price affect customer demand. We develop optimization models to determine the optimal retail price, the optimal guaranteed service time, the optimal service level, and the optimal capacity to maximize the expected profit of the whole supply chain. The results of numerical experiments show that it is more profitable to determine the optimal price, the optimal guaranteed service time, and the optimal service level simultaneously, and that the proposed model is more profitable in a service-level-sensitive market.
SOLVING OPTIMAL ASSEMBLY LINE CONFIGURATION TASK BY MULTIOBJECTIVE DECISION MAKING METHODS
Directory of Open Access Journals (Sweden)
Ján ČABALA
2017-06-01
Full Text Available This paper deals with finding the optimal configuration of the automated assembly line model located at the Department of Cybernetics and Artificial Intelligence (DCAI). To solve this problem, a Stateflow model of each configuration was created to simulate the behaviour of that particular assembly line configuration. Outputs from these models were used as inputs to the multi-objective decision-making process. Multi-objective decision-making methods were subsequently used to find the optimal configuration of the assembly line. The paper describes the whole process of solving this task, from building the models to choosing the best configuration. Specifically, the problem was resolved using the experts' evaluation method for evaluating the weights of each decision-making criterion, while the ELECTRE III, TOPSIS and AGREPREF methods were used for ordering the possible solutions from the most to the least suitable alternative. The obtained results were compared and the final solution of this multi-objective decision-making problem was chosen.
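Of the ranking methods named above, TOPSIS is the most compact to sketch: alternatives are ranked by relative closeness to an ideal solution. The decision matrix, weights, and criteria below are hypothetical, not the DCAI assembly-line data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS. matrix[i][j] is the score of
    alternative i on criterion j; benefit[j] is True when larger is
    better. Returns the relative closeness to the ideal solution."""
    n, m = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n)))
             for j in range(m)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(m)]
         for i in range(n)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # distance to ideal solution
        d_neg = math.dist(row, anti)   # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical configurations: throughput (benefit) and cost (cost).
matrix = [[90.0, 12.0], [75.0, 8.0], [60.0, 5.0]]
scores = topsis(matrix, weights=[0.6, 0.4], benefit=[True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

With these invented numbers the cheapest configuration wins; shifting weight toward throughput would reverse the ranking, which is exactly the sensitivity the paper's criterion-weighting step controls.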
Beest, van F.
2012-01-01
This book examines the effect that rules-based and principles-based accounting standards have on the level and nature of earnings management decisions. A cherry picking experiment is conducted to test the hypothesis that a substitution effect is expected from accounting decisions to transaction
DECISION SUPPORT TOOL FOR RETAIL SHELF SPACE OPTIMIZATION
B. RAMASESHAN; N. R. ACHUTHAN; R. COLLINSON
2008-01-01
Efficient allocation of shelf space and product assortment can significantly improve a retailer's profitability. This paper addresses the problem from the perspective of an independent franchise retailer. A Category Management Decision Support Tool (CMDST) is proposed that efficiently generates optimal shelf space allocations and product assortments by using the existing scarce resources, resulting in increased profitability. CMDST utilizes two practical integrated category management models ...
Risk Management and Insurance Decisions under Ambiguity
DEFF Research Database (Denmark)
Martínez-Correa, Jimmy
I study the impact of ambiguity on insurance decisions and the optimality of insurance contracts. My tractable approach allows me to study the interaction between risk and ambiguity attitudes. When insurance decisions are made independently of other assets, for a given increase in wealth, both risk...... portfolio theory that assumes Subjective Expected Utility theory; however, it provides hints to a possible solution of the under-diversification puzzle of households. I also identify conditions under which more risk or ambiguity aversion decreases the demand for coinsurance. Additionally, I show...... a counterexample to a classical result in insurance economics where an insurance contract with straight deductible is dominated by a coinsurance contract. Finally, I find that a modified Borch rule characterizes the optimal insurance contract with bilateral risk and ambiguity attitudes and heterogeneity in beliefs....
Application of goal programming to decision problem on optimal allocation of radiation workers
International Nuclear Information System (INIS)
Sa, Sangduk; Narita, Masakuni
1993-01-01
This paper is concerned with optimal planning in a multiple-objective decision-making problem: allocating radiation workers to workplaces associated with occupational exposure. The model problem is formulated using goal programming, which effectively accommodates the diverse and conflicting factors influencing the optimal decision. The formulation is based on data simulating typical situations encountered at operating facilities, such as nuclear power plants, where exposure control is critical to management. Multiple goals set by the decision-maker/manager who has operational responsibility for radiological protection are illustrated in terms of work requirements, exposure constraints of the places, desired allocation of specific personnel and so on. Test results of the model indicate that the model structure and its solution process can provide the manager with a good set of analyses of his problems when implementing the optimization review of radiation protection during normal operation. (author)
Constructing an optimal decision tree for FAST corner point detection
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2011-01-01
In this paper, we consider a problem that originated in computer vision: determining an optimal testing strategy for the corner point detection problem that is part of the FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare the performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.
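The dynamic-programming formulation mentioned in this abstract can be sketched for a tiny decision table. Minimizing total depth (sum of leaf depths over rows) is equivalent to minimizing average depth; the memoized recursion below is a generic sketch of that idea, not the authors' implementation, and assumes a consistent table (rows with identical attribute values share a decision).

```python
from functools import lru_cache

def min_total_depth(rows, decisions):
    """Exact minimum total depth of a decision tree for a table with
    discrete attributes, via memoized recursion over row subsets."""
    n_attrs = len(rows[0])

    @lru_cache(maxsize=None)
    def solve(idx):
        if len({decisions[i] for i in idx}) == 1:
            return 0  # leaf: all rows in this subtable share one decision
        best = None
        for a in range(n_attrs):
            groups = {}
            for i in idx:
                groups.setdefault(rows[i][a], set()).add(i)
            if len(groups) < 2:
                continue  # attribute does not split this subtable
            # Every row descends one level, then subtables are solved.
            cost = len(idx) + sum(solve(frozenset(g)) for g in groups.values())
            best = cost if best is None else min(best, cost)
        return best

    return solve(frozenset(range(len(rows))))
```

For the two-attribute XOR table the optimum is total depth 8 (average depth 2, both attributes needed on every path), while a table whose decision equals the first attribute needs total depth 4.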
The anatomy of choice: dopamine and decision-making.
Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J
2014-11-05
This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses-and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making.
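The softmax choice rule with an inverse-temperature (precision) parameter referenced in this abstract is easy to state in isolation. The sketch below is the generic rule only, not the paper's variational free-energy scheme:

```python
import math

def softmax_choice(utilities, precision):
    # Softmax (quantal response) choice rule: higher precision (inverse
    # temperature) concentrates choice probability on the best option.
    exps = [math.exp(precision * u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]
```

In the paper's framing, this precision is itself a quantity with a Bayes-optimal value, linked to the precision of beliefs about behaviour.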
National Research Council Canada - National Science Library
Jordan, Jeremy D
2007-01-01
.... Methodology is developed that allows a decision maker to change his perceived optimal policy based on available knowledge of the opponents strategy, where the opponent is a rational decision maker...
The triangular density to approximate the normal density: decision rules-of-thumb
International Nuclear Information System (INIS)
Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.
2003-01-01
In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance under many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extensions of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications
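One simple way to pair a symmetric triangular density with a normal, used here purely as an illustration (the paper's calibration may differ), is to match the peak height; that choice fixes the half-width so the triangle still integrates to one:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def triangular_pdf(x, mu=0.0, sigma=1.0):
    # Symmetric triangle whose peak height equals the normal's peak;
    # unit area then forces half-width c = sigma * sqrt(2 * pi).
    c = sigma * math.sqrt(2 * math.pi)
    h = 1.0 / c
    d = abs(x - mu)
    return max(0.0, h * (1 - d / c))

# Maximum pointwise density error over a grid on [-4, 4]
xs = [i / 100 for i in range(-400, 401)]
max_err = max(abs(normal_pdf(x) - triangular_pdf(x)) for x in xs)
```

Under this (assumed) calibration the maximum pointwise density error for the standard normal stays below 0.05, which hints at why the approximation performs "surprisingly accurately" in many instances.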
International Nuclear Information System (INIS)
Kim, Han Gon; Chang, Soon Heung; Lee, Byung
2004-01-01
The Optimal Fuel Shuffling System (OFSS) is developed for the optimal design of PWR fuel loading patterns. In this paper, an optimal loading pattern is defined as one in which the local power peaking factor is lower than a predetermined value during one cycle and the effective multiplication factor is maximized in order to extract maximum energy. OFSS is a hybrid system in which a rule-based system, fuzzy logic, and an artificial neural network are interconnected. The rule-based system classifies loading patterns into two classes using several heuristic rules and a fuzzy rule. A fuzzy rule is introduced to achieve more effective and faster searching. Its membership function is automatically updated in accordance with the prediction results. The artificial neural network predicts core parameters for the patterns generated from the rule-based system. The back-propagation network is used for fast prediction of core parameters. The artificial neural network and the fuzzy logic can be used as tools for improving the existing algorithm's capabilities. OFSS was demonstrated and validated for cycle 1 of the Kori unit 1 PWR. (author)
Energy Technology Data Exchange (ETDEWEB)
Kim, Han Gon; Chang, Soon Heung; Lee, Byung [Department of Nuclear Engineering, Korea Advanced Institute of Science and Technology, Yusong-gu, Taejon (Korea, Republic of)
2004-07-01
The Optimal Fuel Shuffling System (OFSS) is developed for the optimal design of PWR fuel loading patterns. In this paper, an optimal loading pattern is defined as one in which the local power peaking factor is lower than a predetermined value during one cycle and the effective multiplication factor is maximized in order to extract maximum energy. OFSS is a hybrid system in which a rule-based system, fuzzy logic, and an artificial neural network are interconnected. The rule-based system classifies loading patterns into two classes using several heuristic rules and a fuzzy rule. A fuzzy rule is introduced to achieve more effective and faster searching. Its membership function is automatically updated in accordance with the prediction results. The artificial neural network predicts core parameters for the patterns generated from the rule-based system. The back-propagation network is used for fast prediction of core parameters. The artificial neural network and the fuzzy logic can be used as tools for improving the existing algorithm's capabilities. OFSS was demonstrated and validated for cycle 1 of the Kori unit 1 PWR. (author)
Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes
Sheer, D. P.
2008-12-01
For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and for M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the values of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is being initiated on a wrapper that will allow us to employ a genetic algorithm to improve the form of the rule (conditions, constraints
Sensitivity study on heuristic rules applied to the neutronic optimization of cells for BWR
International Nuclear Information System (INIS)
Gonzalez C, J.; Martin del Campo M, C.; Francois L, J.L.
2004-01-01
The objective of this work is to verify the validity of the heuristic rules that have been applied in the radial optimization of fuel cells. The rule concerning the placement of fuel in the corners of the cell was examined, with special attention paid to the influence of the position and concentration of the gadolinium-bearing pellets on the reactivity of the cell and on the safety parameters. The evaluation was carried out on cells designed in violation of the heuristic rules. In both cases the cells were analyzed in an infinite lattice using the HELIOS code. Additionally, for the second case, a more exhaustive stage was carried out in which one of the studied cells that satisfied the safety and reactivity parameters was used to generate the design of an assembly; this assembly was then used with CM-PRESTO to calculate the behavior of the core during three operation cycles. (Author)
A C++ Class for Rule-Base Objects
Directory of Open Access Journals (Sweden)
William J. Grenney
1992-01-01
Full Text Available A C++ class, called Tripod, was created as a tool to assist with the development of rule-based decision support systems. The Tripod class contains data structures for the rule-base and member functions for operating on the data. The rule-base is defined by three ASCII files. These files are translated by a preprocessor into a single file that is loaded when a rule-base object is instantiated. The Tripod class was tested as part of a prototype decision support system (DSS) for winter highway maintenance in the Intermountain West. The DSS is composed of two principal modules: the main program, called the wrapper, and a Tripod rule-base object. The wrapper is a procedural module that interfaces with remote sensors and an external meteorological database. The rule-base contains the logic for advising an inexperienced user and for assisting with the decision-making process.
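Tripod itself is a C++ class; purely to illustrate the rule-base-object idea it describes, here is a minimal Python analogue. The winter-maintenance rules and field names below are hypothetical, not taken from the original DSS:

```python
class RuleBase:
    """Minimal rule-base object: each rule maps a condition over
    a dict of facts to a piece of advice for the user."""

    def __init__(self):
        self.rules = []  # list of (name, condition, advice)

    def add_rule(self, name, condition, advice):
        self.rules.append((name, condition, advice))

    def infer(self, facts):
        # Fire every rule whose condition holds for the given facts.
        return [advice for _, cond, advice in self.rules if cond(facts)]

# Hypothetical winter-maintenance rules (illustrative only)
rb = RuleBase()
rb.add_rule("icy", lambda f: f["temp_c"] <= 0 and f["precip"], "apply de-icer")
rb.add_rule("plow", lambda f: f.get("snow_cm", 0) > 5, "dispatch plow")
advice = rb.infer({"temp_c": -3, "precip": True, "snow_cm": 8})
```

In the paper's architecture, a procedural wrapper would populate the facts from remote sensors and the external database before calling the rule-base object.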
A cognitive prosthesis for complex decision-making.
Tremblay, Sébastien; Gagnon, Jean-François; Lafond, Daniel; Hodgetts, Helen M; Doiron, Maxime; Jeuniaux, Patrick P J M H
2017-01-01
While simple heuristics can be ecologically rational and effective in naturalistic decision-making contexts, complex situations require analytical decision-making strategies, hypothesis-testing and learning. Sub-optimal decision strategies - using simplified as opposed to analytic decision rules - have been reported in domains such as healthcare, military operational planning, and government policy making. We investigate the potential of a computational toolkit called "IMAGE" to improve decision-making by developing structural knowledge and increasing understanding of complex situations. IMAGE is tested within the context of a complex military convoy management task through (a) interactive simulations, and (b) visualization and knowledge representation capabilities. We assess the usefulness of two versions of IMAGE (desktop and immersive) compared to a baseline. Results suggest that the prosthesis helped analysts in making better decisions, but failed to increase their structural knowledge about the situation once the cognitive prosthesis was removed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Butt, Muhammad Arif; Akram, Muhammad
2016-01-01
We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm takes as input the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
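The scheduling flow described here — compute a dynamic priority from nice value and burst time, then sort the ready queue — can be sketched with a crisp (non-fuzzy) stand-in. The weighting below is an assumption for illustration, not the authors' intuitionistic fuzzy inference engine:

```python
def dynamic_priority(nice, burst, max_nice=19, max_burst=100):
    # Crisp stand-in for the fuzzy inference: favour low nice values
    # and short burst times, each normalised to [0, 1].
    return 0.5 * (1 - nice / max_nice) + 0.5 * (1 - burst / max_burst)

def schedule(ready_queue):
    # ready_queue: list of (pid, nice, burst); sort by decreasing dp,
    # so the highest-priority process runs first (non-preemptive).
    ranked = sorted(ready_queue,
                    key=lambda p: dynamic_priority(p[1], p[2]),
                    reverse=True)
    return [pid for pid, _, _ in ranked]
```

In the paper, the two inputs would instead be intuitionistically fuzzified (membership plus non-membership degrees) before rules fire.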
Induction and pruning of classification rules for prediction of microseismic hazards in coal mines
Energy Technology Data Exchange (ETDEWEB)
Sikora, M. [Silesian Technical University, Gliwice (Poland)
2011-06-15
The paper presents results of the application of a rule induction and pruning algorithm to classification of the microseismic hazard state in coal mines. Due to the imbalanced distribution of examples describing the states 'hazardous' and 'safe', a special algorithm was used for rule induction and pruning. The algorithm selects the optimal values of the parameters influencing rule induction and pruning based on training and tuning sets. A rule quality measure, which determines the form and classification abilities of the induced rules, is the basic parameter of the algorithm. The specificity and sensitivity of a classifier were used to evaluate its quality. The tests conducted show that the adopted method of rule induction and classifier quality evaluation yields better classification of microseismic hazards than the methods currently used in mining practice. Results obtained by the rule-based classifier were also compared with results obtained by a decision tree induction algorithm and by a neuro-fuzzy system.
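The two evaluation measures named in this abstract are standard and easy to compute; sensitivity is the fraction of true 'hazardous' states caught, specificity the fraction of 'safe' states correctly left alone, which is why the pair is preferred over raw accuracy on imbalanced data. A minimal sketch (the label strings are illustrative):

```python
def sensitivity_specificity(y_true, y_pred, positive="hazardous"):
    # Count the four confusion-matrix cells for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```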
International Nuclear Information System (INIS)
Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza
2011-01-01
Multi-objective optimization for the design of a benchmark cogeneration system, known as the CGAM cogeneration system, is performed. In the optimization approach, exergetic, exergoeconomic and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency, as the exergetic objective, is maximized, while the unit cost of the system product and the cost of the environmental impact, respectively the exergoeconomic and environmental objectives, are minimized. The economic model utilized in the exergoeconomic analysis is built on both the simple model (used in the original research on the CGAM system) and a comprehensive model, namely the TTR (total revenue requirement) method (used in sophisticated exergoeconomic analysis). Finally, a final optimal solution is selected from the Pareto frontier using a fuzzy decision-making process based on the Bellman-Zadeh approach, and results are compared with corresponding results obtained in a traditional decision-making process. Further, results are compared with the corresponding performance of the base-case CGAM system and with optimal designs from previous works, and discussed. -- Highlights: → A multi-objective optimization approach has been implemented in the optimization of a benchmark cogeneration system. → Objective functions based on environmental impact evaluation, thermodynamic and economic analysis are obtained and optimized. → A particle swarm optimizer is implemented and its robustness is compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results are compared with previous works in the field.
Verhaert, Dominique V M; Bonnes, Judith L; Nas, Joris; Keuper, Wessel; van Grunsven, Pierre M; Smeets, Joep L R M; de Boer, Menko Jan; Brouwer, Marc A
2016-03-01
Of the proposed algorithms that provide guidance for in-field termination of resuscitation (TOR) decisions, the guidelines for cardiopulmonary resuscitation (CPR) refer to the basic and advanced life support (ALS)-TOR rules. To assess the potential consequences of implementation of the ALS-TOR rule, we performed a case-by-case evaluation of our in-field termination decisions and assessed the corresponding recommendations of the ALS-TOR rule. Cohort of non-traumatic out-of-hospital cardiac arrest (OHCA) patients who were resuscitated by the ALS-practising emergency medical service (EMS) in the Nijmegen area (2008-2011). The ALS-TOR rule recommends termination in case all of the following criteria are met: unwitnessed arrest, no bystander CPR, no shock delivery, no return of spontaneous circulation (ROSC). Of the 598 cases reviewed, resuscitative efforts were terminated in the field in 46% and 15% survived to discharge. The ALS-TOR rule would have recommended in-field termination in only 6% of patients, due to high percentages of witnessed arrests (73%) and bystander CPR (54%). In current practice, absence of ROSC was the most important determinant of termination [aOR 35.6 (95% CI 18.3-69.3)]. Weaker associations were found for: unwitnessed and non-public arrests, non-shockable initial rhythms and longer EMS response times. While designed to optimise hospital transport, application of the ALS-TOR rule would almost double our hospital transportation rate to over 90% of OHCA cases due to the favourable arrest circumstances in our region. Prior to implementation of the ALS-TOR rule, local evaluation of the potential consequences for the efficiency of triage is to be recommended, and initiatives to improve field triage for ALS-based EMS systems are eagerly awaited. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
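The ALS-TOR criterion as stated in this abstract is a pure conjunction of four negatives, which makes it a one-line predicate:

```python
def als_tor_recommends_termination(witnessed, bystander_cpr,
                                   shock_delivered, rosc):
    # Terminate only if ALL criteria are met: unwitnessed arrest, no
    # bystander CPR, no shock delivered, and no ROSC at any point.
    return not (witnessed or bystander_cpr or shock_delivered or rosc)
```

With 73% witnessed arrests and 54% bystander CPR in the study region, the conjunction rarely holds, which is consistent with the rule recommending termination in only 6% of the reviewed cases.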
Generalized concavity in fuzzy optimization and decision analysis
Ramík, Jaroslav
2002-01-01
Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...
Greedy heuristics for minimization of number of terminal nodes in decision trees
Hussain, Shahid
2014-10-01
This paper describes, in detail, several greedy heuristics for construction of decision trees. We study the number of terminal nodes of decision trees, which is closely related with the cardinality of the set of rules corresponding to the tree. We compare these heuristics empirically for two different types of datasets (datasets acquired from UCI ML Repository and randomly generated data) as well as compare with the optimal results obtained using dynamic programming method.
Greedy heuristics for minimization of number of terminal nodes in decision trees
Hussain, Shahid
2014-01-01
This paper describes, in detail, several greedy heuristics for construction of decision trees. We study the number of terminal nodes of decision trees, which is closely related with the cardinality of the set of rules corresponding to the tree. We compare these heuristics empirically for two different types of datasets (datasets acquired from UCI ML Repository and randomly generated data) as well as compare with the optimal results obtained using dynamic programming method.
Directory of Open Access Journals (Sweden)
L. Saberi
2016-10-01
Full Text Available Introduction: Increasing demand for water, depletion of resources of acceptable quality, and excessive water pollution due to agricultural and industrial development have caused severe social and environmental problems all over the world. Given the environmental importance of rivers, and the complexity and extent of the pollution factors and the physical, chemical and biological processes in these systems, optimal waste-load allocation in river systems has received considerable attention in the literature in the past decades. The overall objective of planning and quality management of river systems is to develop and implement a coordinated set of strategies and policies to reduce or allocate the pollution entering rivers so that water quality meets the proposed environmental standards with an acceptable reliability. In such matters, there are often several decision-makers with different utilities, which leads to conflicts. Methods/Materials: In this research, a conflict resolution framework for optimal waste-load allocation in river systems is proposed, considering the total treatment cost and the Biological Oxygen Demand (BOD) violation characteristics. There are two decision-makers, namely the waste-load dischargers' coalition and the environmentalists, who have conflicting objectives. The framework consists of an embedded river water quality simulator, which simulates the transport process including reaction kinetics. The trade-off curve between the objectives is obtained using the Multi-objective Particle Swarm Optimization Algorithm; the objectives are minimization of the total cost of treatment and of the penalties that must be paid by the dischargers, and minimization of the violation of water quality standards in terms of the BOD parameter, which is controlled by the environmentalists. Thus, the basic policy of the river's water quality management is formulated in such a way that the decision-makers are assured that their benefits will be provided as far as possible. By using MOPSO
Online Rule Generation Software Process Model
Sudeep Marwaha; Alka Aroa; Satma M C; Rajni Jain; R C Goyal
2013-01-01
For production systems like expert systems, rule generation software can facilitate faster deployment. The software process model for rule generation using a decision tree classifier refers to the various steps required to be executed for the development of a web-based software model for decision rule generation. Royce's final waterfall model has been used in this paper to explain the software development process. The paper presents the specific output of the various steps of the modified wat...
Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.
Ell, Shawn W
2013-12-01
The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.
Optimization of sequential decisions by least squares Monte Carlo method
DEFF Research Database (Denmark)
Nishijima, Kazuyoshi; Anders, Annett
change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...... is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...
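The least squares Monte Carlo idea of Longstaff and Schwartz referenced above can be sketched on a toy stopping problem. The Gaussian random-walk state, the put-like payoff, and the plain linear regression over all paths (rather than a polynomial basis restricted to in-the-money paths) are simplifying assumptions for illustration:

```python
import random

def lsm_stop_value(n_paths=5000, n_steps=3, strike=1.0, seed=0):
    """Least squares Monte Carlo for a toy optimal stopping problem:
    when to take payoff max(strike - X, 0) on a Gaussian random walk X."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = 1.0, []
        for _ in range(n_steps):
            x += rng.gauss(0.0, 0.2)
            path.append(x)
        paths.append(path)

    def payoff(x):
        return max(strike - x, 0.0)

    # At the horizon the decision is forced: take the payoff if positive.
    cash = [payoff(p[-1]) for p in paths]
    # Backward induction: regress realised continuation value on the state.
    for t in range(n_steps - 2, -1, -1):
        xs = [p[t] for p in paths]
        n = len(xs)
        mx, my = sum(xs) / n, sum(cash) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(xs, cash))
                 / sum((a - mx) ** 2 for a in xs))
        icept = my - slope * mx
        new_cash = []
        for p, c in zip(paths, cash):
            now = payoff(p[t])
            cont = icept + slope * p[t]  # estimated value of waiting
            new_cash.append(now if now > cont and now > 0 else c)
        cash = new_cash
    return sum(cash) / n_paths
```

The same backward recursion carries over to the evacuation-decision setting: "exercise" becomes "order the evacuation", and the regression approximates the expected value of waiting for more information.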
Optimal Rules of Negligent Misrepresentation in Insurance Law
DEFF Research Database (Denmark)
Lando, Henrik
This article analyzes rules for negligent misrepresentation in insurance contract law. Before contract signature, the applicant can be asked by the insurer to fill in a questionnaire concerning the risk, and may then omit or make untrue statements about facts. Such misrepresentation is considered...... negligent by the court when it is unclear whether the misrepresentation was due to a mistake or was intentional. Rules of negligent misrepresentation differ significantly across jurisdictions. For example, the rule of common law allows the insurer to rescind the contract, whereas the German rule does not allow...... of these rules through an analysis of the degree to which the insured should be allowed to lower coverage in case of negligent misrepresentation. On the one hand, a strict rule renders it easier for an insurer to separate different types of risk without having to use other costly means of separation...
DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Neal R, Patel; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.
DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Patel, Neal R.; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846
Integrated decision making for the optimal bioethanol supply chain
International Nuclear Information System (INIS)
Corsano, Gabriela; Fumero, Yanina; Montagna, Jorge M.
2014-01-01
Highlights:
• Optimal allocation, design and production planning of integrated ethanol plants is considered.
• A Mixed Integer Programming model is presented for solving the integration problem.
• Different tradeoffs can be assessed and analyzed.
• The modeling framework represents a useful tool for guiding decision making.
Abstract: Bioethanol production poses different challenges that require an integrated approach. Previous works have usually focused on specific perspectives of the global problem. On the contrary, bioethanol in particular, and biofuels in general, require an integrated decision-making framework that takes into account the needs and concerns of the different members involved in the supply chain. In this work, a Mixed Integer Linear Programming (MILP) model for the optimal allocation, design and production planning of integrated ethanol/yeast plants is considered. The proposed formulation addresses the relations between different aspects of the bioethanol supply chain and provides an efficient tool to assess the global operation of the supply chain from different points of view. The model proposed in this work simultaneously determines the structure of a three-echelon supply chain (raw material sites, production facilities and customer zones), the design of each installed plant and operational considerations through production campaigns. Yeast production is considered in order to reduce the negative environmental impact caused by bioethanol residues. Several cases are presented in order to assess the capabilities of the approach and to evaluate the tradeoffs among all the decisions
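To make the binary part of such an allocation model concrete, here is a minimal sketch of a three-echelon siting decision solved by enumeration. All plants, customers, costs and demands are invented toy data; the published MILP also covers plant design and campaign planning, which are omitted here.

```python
from itertools import product

# Hypothetical toy data (not from the paper): two candidate plants, three
# customer zones, fixed opening costs and unit shipping costs.
FIXED_COST = {"A": 100.0, "B": 120.0}
SHIP_COST = {
    ("A", 1): 2.0, ("A", 2): 4.0, ("A", 3): 5.0,
    ("B", 1): 6.0, ("B", 2): 3.0, ("B", 3): 1.0,
}
DEMAND = {1: 10, 2: 20, 3: 30}

def total_cost(open_plants):
    """Fixed costs plus cheapest feasible shipping for a set of open plants."""
    if not open_plants:
        return float("inf")  # demand cannot be served
    cost = sum(FIXED_COST[p] for p in open_plants)
    for customer, demand in DEMAND.items():
        cost += demand * min(SHIP_COST[(p, customer)] for p in open_plants)
    return cost

def best_configuration():
    """Enumerate every open/closed combination (the binary variables a MILP
    solver would branch on) and return (cost, set_of_open_plants)."""
    best = None
    for mask in product([False, True], repeat=len(FIXED_COST)):
        chosen = frozenset(p for p, keep in zip(sorted(FIXED_COST), mask) if keep)
        candidate = (total_cost(chosen), chosen)
        if best is None or candidate[0] < best[0]:
            best = candidate
    return best
```

Brute-force enumeration only works for a handful of binary variables; a real instance of this model would use a MILP solver, but the structure of the decision is the same.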
Danilova, Olga; Semenova, Zinaida
2018-04-01
The objective of this study is a detailed analysis of the development of physical protection systems for information resources. Optimization theory and the mathematical apparatus of decision-making are used to correctly formulate and create a selection algorithm for the optimal configuration of a security system, considering the location of the secured object's access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system's elements.
Decisive Visual Saliency and Consumers' In-store Decisions
DEFF Research Database (Denmark)
Clement, Jesper; Aastrup, Jesper; Forsberg, Signe Charlotte
2015-01-01
This paper focuses on consumers' in-store visual tactics and decision-making. It has been argued that many consumers shop by routine or by simple rules and justification techniques when they purchase daily commodities. It has also been argued that they make a majority of decisions in the shop …, and that they are affected by the visual stimuli in the store. The objective of this paper is to investigate the visual saliency from two factors: 1) in-store signage and 2) placement of products. This is done by a triangulation method where we utilize data from an eye-track study and sales data from grocery stores. … The first study takes place in laboratory settings with a simulated purchase situation, and the second research design builds on manipulated in-store settings and data from real purchases. We found optimal placement of two comparable goods (branded good and private label) to increase visual attention …
Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.
Liu, Siqi; Wright, Adam; Hauskrecht, Milos
2017-06-01
A clinical decision support system (CDSS) and its components can malfunction for various reasons. Monitoring the system and detecting its malfunctions can help one to avoid potential mistakes and their associated costs. In this paper, we investigate the problem of detecting changes in CDSS operation, in particular in its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want a score indicating how likely it is that the system has changed. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower detection delay than existing change-point detection methods.
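The two ingredients named in this abstract can be sketched in a few lines. This is not the authors' method: the per-weekday mean below is a crude stand-in for a full Seasonal-Trend (STL) decomposition, and the score is the textbook Gaussian likelihood ratio for a single mean shift.

```python
import numpy as np

def deseasonalize(counts, period=7):
    """Subtract a per-weekday seasonal mean (a crude stand-in for the
    STL-style decomposition used in the paper)."""
    x = np.asarray(counts, dtype=float)
    seasonal = np.array([x[i::period].mean() for i in range(period)])
    return x - seasonal[np.arange(len(x)) % period]

def change_score(counts, period=7, min_seg=5):
    """Largest Gaussian log-likelihood-ratio statistic for a single mean
    shift in the deseasonalized rule-firing counts; a high score flags a
    likely change in the system's behaviour."""
    r = deseasonalize(counts, period)
    n = len(r)
    var = r.var() + 1e-12  # guard against a perfectly constant series
    best = 0.0
    for k in range(min_seg, n - min_seg):
        left, right = r[:k], r[k:]
        # With known variance, the LR statistic reduces to a scaled
        # squared difference of the two segment means.
        lr = (k * (n - k) / n) * (left.mean() - right.mean()) ** 2 / var
        best = max(best, lr)
    return best
```

In an online setting one would recompute the score over a sliding window as each new daily count arrives and raise an alert when it crosses a threshold.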
Mellers, B A; Schwartz, A; Cooke, A D
1998-01-01
For many decades, research in judgment and decision making has examined behavioral violations of rational choice theory. In that framework, rationality is expressed as a single correct decision shared by experimenters and subjects that satisfies internal coherence within a set of preferences and beliefs. Outside of psychology, social scientists are now debating the need to modify rational choice theory with behavioral assumptions. Within psychology, researchers are debating assumptions about errors for many different definitions of rationality, and alternative frameworks are being proposed. These frameworks view decisions as more reasonable and adaptive than previously thought. One example is "rule following": when a rule or norm is applied to a situation, it often minimizes effort and provides satisfying solutions that are "good enough," though not necessarily the best. When rules are ambiguous, people look for reasons to guide their decisions. They may also let their emotions take charge. This chapter presents recent research on judgment and decision making from traditional and alternative frameworks.
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.
2009-01-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
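The speed-accuracy tradeoff described here is easy to demonstrate by simulation. The sketch below is a minimal Euler-discretized DDM with invented parameter values, not the calibrated model from the paper: raising the threshold raises accuracy and slows responses.

```python
import random

def ddm_trial(drift=0.1, threshold=1.0, dt=0.01, noise=1.0, rng=random):
    """One drift-diffusion trial: accumulate noisy evidence until the
    positive or negative decision boundary is crossed.
    Returns (correct, reaction_time); 'correct' means the boundary with
    the same sign as the drift was hit."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, t

def speed_accuracy(threshold, n=2000, seed=0):
    """Estimate accuracy and mean RT at a given threshold height:
    the speed-accuracy tradeoff the abstract refers to."""
    rng = random.Random(seed)
    trials = [ddm_trial(threshold=threshold, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(rt for _, rt in trials) / n
    return accuracy, mean_rt
```

Sweeping `threshold` from low to high traces out the SAT curve: faster but more error-prone decisions at low thresholds, slower but more accurate ones at high thresholds.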
Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J
The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies, where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value=0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although results were not found to be statistically significant, we found that: 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct
Do the right thing: the assumption of optimality in lay decision theory and causal judgment.
Johnson, Samuel G B; Rips, Lance J
2015-03-01
Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent's perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent's actual and counterfactual choices (Experiments 1-3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5-6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take. Copyright © 2015 Elsevier Inc. All rights reserved.
Decision Support Model for Optimal Management of Coastal Gate
Ditthakit, Pakorn; Chittaladakorn, Suwatana
2010-05-01
Coastal areas are intensely settled by human beings owing to the fertility of their natural resources. At present, however, these areas face water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely applied in several countries, and it requires a plan for suitably operating the gates. Coastal gate operation is a complicated task that usually involves the management of multiple purposes, which generally conflict with one another. This paper delineates the methodology and theories used to develop a decision support model for coastal gate operation scheduling. The developed model couples a simulation model with an optimization model. A weighted optimization technique based on Differential Evolution (DE) was selected for solving the multiple-objective problem. The hydrodynamic and water quality models were repeatedly invoked during the search for optimal gate operations. In addition, two forecasting models, an Auto Regressive (AR) model and a Harmonic Analysis (HA) model, were applied to forecast water levels and tide levels, respectively. To demonstrate its applicability, the model was applied to plan operations for a hypothetical version of the Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in southern Thailand. It was found that the proposed model could satisfactorily assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
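For readers unfamiliar with the optimizer named here, the following is a bare-bones DE/rand/1/bin sketch minimizing a plain test function. It is not the paper's weighted multi-objective gate model; population size, F and CR are generic textbook values.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two
    random population members, crossover with the target, keep the
    better of target and trial."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            trial_cost = f(trial)
            if trial_cost <= cost[i]:     # greedy one-to-one selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

In the gate-scheduling setting, `f` would be the weighted sum of simulated objective values returned by the hydrodynamic and water quality models, and each decision variable a gate opening over time.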
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
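A toy normative benchmark in the spirit of this abstract can be computed exactly for two Bernoulli payoff distributions: after n free draws from each, the decider picks the arm with the higher sample mean, and the optimal n balances error probability against sampling cost. This is an illustrative construction, not the authors' derivation.

```python
from math import comb

def error_prob(n, p1, p2):
    """Exact probability of choosing the worse Bernoulli arm (p2 < p1)
    after n free draws from each, picking the higher sample mean and
    splitting ties at random."""
    pmf1 = [comb(n, k) * p1**k * (1 - p1)**(n - k) for k in range(n + 1)]
    pmf2 = [comb(n, k) * p2**k * (1 - p2)**(n - k) for k in range(n + 1)]
    err = 0.0
    for k1, q1 in enumerate(pmf1):
        for k2, q2 in enumerate(pmf2):
            if k2 > k1:
                err += q1 * q2          # worse arm looks strictly better
            elif k2 == k1:
                err += 0.5 * q1 * q2    # tie broken by a coin flip
    return err

def optimal_sample_size(p1, p2, sample_cost, reward=1.0, max_n=60):
    """Sample size maximizing expected reward of the final choice minus
    the (hypothetical) cost of the 2n exploratory draws."""
    def value(n):
        return reward * (1 - error_prob(n, p1, p2)) - sample_cost * 2 * n
    return max(range(1, max_n + 1), key=value)
```

As expected, the cheaper sampling is, the larger the normatively optimal sample size, which is exactly the kind of benchmark against which observed sample sizes in the sampling paradigm can be compared.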
Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D
2017-03-01
How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept currently available treatment-despite expectations for future treatment improvement-for patients who have high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
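The optimal stopping structure described here can be sketched with backward induction on a toy "treat now vs wait for a better therapy" process. All numbers below are invented; the published model is far richer (quality-adjusted life expectancy, progression states, comorbidities).

```python
def optimal_stopping(T=10, q0=5.0, delta=0.5, p=0.2, q_bad=1.0):
    """Finite-horizon backward induction. Each period t: treating yields
    the current therapy quality q0 + delta*t; waiting risks disease
    progression (probability p, payoff q_bad) but therapies improve by
    delta per period. Returns (values, actions) indexed by period."""
    V = [0.0] * (T + 1)
    action = [""] * (T + 1)
    V[T], action[T] = q0 + delta * T, "treat"   # must treat at the horizon
    for t in range(T - 1, -1, -1):
        treat = q0 + delta * t
        wait = p * q_bad + (1 - p) * V[t + 1]
        V[t], action[t] = max((treat, "treat"), (wait, "wait"))
    return V, action
```

The sketch reproduces the abstract's qualitative finding: when the progression risk p is high, treating immediately with the currently available therapy is optimal; when p is low, waiting for improvements dominates.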
Interaction rules underlying group decisions in homing pigeons
Pettit, Benjamin; Perna, Andrea; Biro, Dora; Sumpter, David J. T.
2013-01-01
Travelling in groups gives animals opportunities to share route information by following cues from each other's movement. The outcome of group navigation will depend on how individuals respond to each other within a flock, school, swarm or herd. Despite the abundance of modelling studies, only recently have researchers developed techniques to determine the interaction rules among real animals. Here, we use high-resolution GPS (global positioning system) tracking to study these interactions in pairs of pigeons flying home from a familiar site. Momentary changes in velocity indicate alignment with the neighbour's direction, as well as attraction or avoidance depending on distance. Responses were stronger when the neighbour was in front. From the flocking behaviour, we develop a model to predict features of group navigation. Specifically, we show that the interactions between pigeons stabilize a side-by-side configuration, promoting bidirectional information transfer and reducing the risk of separation. However, if one bird gets in front it will lead directional choices. Our model further predicts, and observations confirm, that a faster bird (as measured from solo flights) will fly slightly in front and thus dominate the choice of homing route. Our results explain how group decisions emerge from individual differences in homing flight behaviour. PMID:24068173
Humans Optimize Decision-Making by Delaying Decision Onset
Teichert, Tobias; Ferrera, Vincent P.; Grinband, Jack
2014-01-01
Why do humans make errors on seemingly trivial perceptual decisions? It has been shown that such errors occur in part because the decision process (evidence accumulation) is initiated before selective attention has isolated the relevant sensory information from salient distractors. Nevertheless, it is typically assumed that subjects increase accuracy by prolonging the decision process rather than delaying decision onset. To date it has not been tested whether humans can strategically delay decision onset to increase response accuracy. To address this question we measured the time course of selective attention in a motion interference task using a novel variant of the response signal paradigm. Based on these measurements we estimated time-dependent drift rate and showed that subjects should in principle be able to trade speed for accuracy very effectively by delaying decision onset. Using the time-dependent estimate of drift rate we show that subjects indeed delay decision onset in addition to raising response threshold when asked to stress accuracy over speed in a free reaction version of the same motion-interference task. These findings show that decision onset is a critical aspect of the decision process that can be adjusted to effectively improve decision accuracy. PMID:24599295
Directory of Open Access Journals (Sweden)
Gonggui Chen
2017-01-01
The optimal power flow (OPF) is well known as a significant optimization tool for the secure and economic operation of power systems, and the OPF problem is a complex, nonlinear, nondifferentiable programming problem. This paper therefore proposes a Gbest-guided cuckoo search algorithm with a feedback control strategy and a constraint domination rule, named the FCGCS algorithm, for solving the OPF problem and obtaining an optimal solution. The FCGCS algorithm is guided by the global best solution to strengthen its exploitation ability. The feedback control strategy is devised to dynamically regulate the control parameters according to specific feedback values obtained during the simulation process. The constraint domination rule can efficiently handle inequality constraints on state variables, and is superior to the traditional penalty function method. The performance of the FCGCS algorithm is tested and validated on the IEEE 30-bus and IEEE 57-bus example systems, and simulation results are compared with those of different methods from the recent literature. The comparison indicates that the FCGCS algorithm can provide high-quality feasible solutions for different OPF problems.
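The gbest-guided search idea can be illustrated on a plain test function. This sketch simplifies aggressively: Gaussian steps stand in for Lévy flights, and the paper's feedback parameter control and constraint-domination rule are omitted.

```python
import random

def gbest_cuckoo_search(f, bounds, n_nests=15, pa=0.25, alpha=0.1,
                        iters=300, seed=2):
    """Bare-bones gbest-guided cuckoo search for minimization.
    Each nest takes a random step biased toward the global best; a
    fraction pa of the worst nests is abandoned and re-seeded."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, j: min(max(v, bounds[j][0]), bounds[j][1])
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    gbest = min(range(n_nests), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n_nests):
            # Random walk biased toward the global best solution
            # (Gaussian steps used here instead of Levy flights).
            new = [clip(nests[i][j]
                        + alpha * rng.gauss(0, 1) * (nests[gbest][j] - nests[i][j]),
                        j)
                   for j in range(dim)]
            new_fit = f(new)
            if new_fit < fit[i]:
                nests[i], fit[i] = new, new_fit
        # Abandon a fraction pa of the worst nests and re-seed them.
        worst_first = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for i in worst_first[:int(pa * n_nests)]:
            nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[i] = f(nests[i])
        gbest = min(range(n_nests), key=fit.__getitem__)
    return nests[gbest], fit[gbest]
```

In an OPF setting, `f` would be the generation cost evaluated through a power flow solution, with constraint handling done by the domination rule rather than by clipping.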
Decision Support System for Optimized Herbicide Dose in Spring Barley
DEFF Research Database (Denmark)
Sønderskov, Mette; Kudsk, Per; Mathiassen, Solvejg K
2014-01-01
Crop Protection Online (CPO) is a decision support system, which integrates decision algorithms quantifying the requirement for weed control and a herbicide dose model. CPO was designed to be used by advisors and farmers to optimize the choice of herbicide and dose. The recommendations from CPO … as the Treatment Frequency Index (TFI) compared to a high level of required weed control. The observations indicated that the current level of weed control required is robust for a range of weed scenarios. Weed plant numbers 3 wk after spraying indicated that the growth of the weed species was inhibited …
Viewpoint: Decision-making in committees
Li Hao; Wing Suen
2009-01-01
This article reviews recent developments in the theory of committee decision-making. A committee consists of self-interested members who make a public decision by aggregating imperfect information dispersed among them according to a pre-specified decision rule. We focus on costly information acquisition, strategic information aggregation, and rules and processes that enhance the quality of the committee decision. Seeming inefficiencies of the committee decision-making process such as over-cau...
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
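The sequential, few-cue character of a fast and frugal tree is easy to show in code. The cue names and cut-offs below are invented for illustration only; they are not the rules RuleFit derived from the Penninx et al. data.

```python
def frugal_tree_predict(patient):
    """A hypothetical 2-rule fast and frugal tree in the style described
    above. Cues are checked one at a time, and each rule offers an exit,
    so at most three cues are ever evaluated."""
    # Rule 1: very high baseline severity -> predict a chronic course.
    if patient["severity"] >= 30:
        return "chronic"
    # Rule 2: long prior duration combined with early onset -> chronic.
    if patient["duration_months"] >= 24 and patient["onset_age"] < 20:
        return "chronic"
    # Otherwise exit with the favourable outcome.
    return "remission"
```

Contrast this with a 20-predictor logistic regression, which needs every predictor measured before any prediction can be made; the tree can stop after a single cue.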
Optimization of decision making to avoid stochastically predicted air traffic conflicts
Directory of Open Access Journals (Sweden)
В.М. Васильєв
2005-01-01
A method is proposed for optimizing decision making when planning an aircraft trajectory to avoid a potential conflict while respecting the required minimum separation standard. Evaluation and monitoring of the conflict probability are performed using a probabilistic composite method.
Directory of Open Access Journals (Sweden)
Helena Gaspars-Wieloch
2017-01-01
This paper is concerned with games against nature and multi-criteria decision making under uncertainty, along with scenario planning. We focus on decision problems where a deterministic evaluation of criteria is not possible. The procedure we propose is based on weighted goal programming and may be applied when seeking a mixed strategy. A mixed strategy allows the decision maker to select and perform a weighted combination of several accessible alternatives. The new method takes into consideration the decision maker's preference structure (the importance of particular goals) and nature (a pessimistic, moderate or optimistic attitude towards a given problem). It is designed for one-shot decisions made under uncertainty with unknown probabilities (frequencies), i.e. for decision making under complete uncertainty or decision making under strategic uncertainty. The procedure refers to one-stage models, i.e. models considering combinations of scenarios and criteria (scenario-criterion pairs) as distinct meta-attributes, which means that the novel approach can be used in the case of totally independent payoff matrices for particular targets. The algorithm does not require any information about frequencies, which is especially desirable for new decision problems. It can be successfully applied by passive decision makers, as only criteria weights and the coefficient of optimism have to be declared.
Phonological reduplication in sign language: rules rule
Directory of Open Access Journals (Sweden)
Iris Berent
2014-06-01
Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning
Basdekas, L.; Stewart, N.; Triana, E.
2013-12-01
Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo informed or climate change informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models do not require parameterization which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU
Health Cost Risk and Optimal Retirement Provision: A Simple Rule for Annuity Demand
Peijnenburg, J.M.J.; Nijman, T.E.; Werker, B.J.M.
2010-01-01
We analyze the effect of health cost risk on optimal annuity demand and consumption/savings decisions. Many retirees are exposed to sizeable out-of-pocket medical expenses, while annuities potentially impair the ability to get liquidity to cover these costs and smooth consumption. We find that if
Mokeddem, Diab; Khellaf, Abdelhafid
2009-01-01
Optimal design problems are widely known for their multiple performance measures that are often competing with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm NSGA-II. NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537
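The first phase of NSGA-II, sorting a population into successive Pareto fronts, can be sketched directly from the definition of dominance. This is only the sorting step, not the full algorithm (crowding distance, selection and variation are omitted).

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b on every
    objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive non-dominated fronts, as in NSGA-II's first
    phase. points is a list of objective-value tuples; returns a list of
    fronts, best (rank 0) first."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

This naive version is O(n^2) per front; NSGA-II uses a bookkeeping scheme with domination counts to do the whole sort in O(M n^2) for M objectives, but the output ranking is the same.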
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideal spherical pellets with good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
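The elementary step a regression-tree algorithm repeats to derive rules like those above is finding the single split of a numeric factor that most reduces squared error of the response. A minimal sketch, with invented example numbers rather than the study's data:

```python
def best_split(xs, ys):
    """Exhaustive search for the split threshold on one numeric factor
    (e.g. spheronization speed) that minimizes the summed squared error
    of the response (e.g. aspect ratio) in the two resulting groups."""
    def sse(values):
        if not values:
            return 0.0
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values)

    pairs = sorted(zip(xs, ys))
    best = (float("inf"), None)  # (total SSE, threshold)
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # cannot split between equal factor values
        threshold = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [y for x, y in pairs if x <= threshold]
        right = [y for x, y in pairs if x > threshold]
        best = min(best, (sse(left) + sse(right), threshold))
    return best[1]
```

A tree-building algorithm applies this search to every input variable, picks the best, and recurses on the two child groups; the resulting paths read off as decision rules of the kind reported in the paper.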
Qaradaghi, Mohammed
Complexity of the capital intensive oil and gas portfolio investments is continuously growing. It is manifested in the constant increase in the type, number and degree of risks and uncertainties, which consequently lead to more challenging decision making problems. A typical complex decision making problem in petroleum exploration and production (E&P) is the selection and prioritization of oilfields/projects in a portfolio investment. Prioritizing oilfields may be required for different purposes, including the achievement of a targeted production and allocation of limited available development resources. These resources cannot be distributed evenly nor can they be allocated based on the oilfield size or production capacity alone, since various other factors need to be considered simultaneously. These factors may include subsurface complexity, size of reservoir, plateau production and needed infrastructure, in addition to other issues of strategic concern, such as socio-economic, environmental and fiscal policies, particularly when the decision making involves governments or national oil companies. Therefore, it would be imperative to employ decision aiding tools that not only address these factors, but also incorporate the decision makers' preferences clearly and accurately. However, the tools commonly used in project portfolio selection and optimization, including intuitive approaches, vary in their focus and strength in addressing the different criteria involved in such decision problems. They are also disadvantaged by a number of drawbacks, which may include lacking the capacity to address multiple and interrelated criteria, uncertainty and risk, project relationships with regard to value contribution and optimum resource utilization, non-monetary attributes, and the decision maker's knowledge and expertise, in addition to varying levels of ease of use and other practical and theoretical drawbacks. These drawbacks have motivated researchers to investigate other tools and
A diagnosis-based clinical decision rule for spinal pain part 2: review of the literature
Directory of Open Access Journals (Sweden)
Hurwitz Eric L
2008-08-01
Full Text Available Abstract Background Spinal pain is a common and often disabling problem. The research on various treatments for spinal pain has, for the most part, suggested that while several interventions have demonstrated mild to moderate short-term benefit, no single treatment has a major impact on either pain or disability. There is a great need for more accurate diagnosis in patients with spinal pain. In a previous paper, the theoretical model of a diagnosis-based clinical decision rule was presented. The approach is designed to provide the clinician with a strategy for arriving at a specific working diagnosis from which treatment decisions can be made. It is based on three questions of diagnosis. In the current paper, the literature on the reliability and validity of the assessment procedures that are included in the diagnosis-based clinical decision rule is presented. Methods The databases of Medline, Cinahl, Embase and MANTIS were searched for studies that evaluated the reliability and validity of clinic-based diagnostic procedures for patients with spinal pain that have relevance for question 2 (which investigates characteristics of the pain source) and question 3 (which investigates perpetuating factors of the pain experience). In addition, the reference lists of identified papers and the authors' libraries were searched. Results A total of 1769 articles were retrieved, of which 138 were deemed relevant. Fifty-one studies related to reliability and 76 related to validity. One study evaluated both reliability and validity. Conclusion Regarding some aspects of the DBCDR, there are a number of studies that allow the clinician to have a reasonable degree of confidence in his or her findings. This is particularly true for centralization signs, neurodynamic signs and psychological perpetuating factors. There are other aspects of the DBCDR in which a lesser degree of confidence is warranted, and in which further research is needed.
Directory of Open Access Journals (Sweden)
Oforegbunam Thaddeus Ebiringa
2011-03-01
Full Text Available Traditional MIS has been made more effective through the integration of organization, human and technology factors into a decision matrix. The study is motivated by the need to find an optimal mix of interactive factors that will optimize the result of decisions to apply ICT to manufacturing processes. The study used a factor analysis model based on the sampled opinion of forty (40) operations/production managers and two thousand (2000) production line workers of three leading manufacturing firms: Uniliver Plc., PZ Plc, and Nigerian Breweries Plc, operating in the Aba Industrial Estate of Nigeria. The results show that a progressive mixed factor loading matrix, based on the preferred ordered importance of resource factors in the formulation, implementation, monitoring, control and evaluation of ICT projects of the selected firms, led to an average capability improvement of 0.764 in decision efficiency. This is considered strategic for achieving balanced corporate growth and development.
Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making
Directory of Open Access Journals (Sweden)
Ahmad Mortazavi
2014-01-01
Full Text Available Inventory management in retailers is a difficult and complex decision-making process involving conflicting criteria; in addition, cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is considered as an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on a multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, as the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.
Verhaert, D.V.; Bonnes, J.L.; Nas, J.; Keuper, W.; Grunsven, P.M. van; Smeets, J.L.; Boer, M.J. de; Brouwer, M.A.
2016-01-01
BACKGROUND: Of the proposed algorithms that provide guidance for in-field termination of resuscitation (TOR) decisions, the guidelines for cardiopulmonary resuscitation (CPR) refer to the basic and advanced life support (ALS)-TOR rules. To assess the potential consequences of implementation of the
Directory of Open Access Journals (Sweden)
Nguyen Tien Huy
Full Text Available BACKGROUND AND PURPOSE: Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to maximize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, while several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous testing and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours. METHODS: A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by the area under the receiver operating characteristic curve (ROC-AUC) using the method of DeLong, and by the McNemar test for specificity comparison. RESULTS: Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938 but was not significantly greater than the other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that possess both sensitivity and specificity higher than 85-90%. CONCLUSIONS: No clinical decision rule provided an acceptable specificity (>50%) with 100% sensitivity when applied to our data set in children. More studies in Vietnam and developing countries are required to develop and/or validate clinical rules, and better biomarkers are required to develop such a perfect rule.
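The ROC-AUC evaluation used above can be sketched with the rank-based (Mann-Whitney) estimator of the AUC; this is a generic illustration, not the DeLong variance computation the study applied, and the scores and labels below are hypothetical, not the study's data.

```python
def roc_auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve.

    `scores`: predicted risk of bacterial meningitis (higher = more likely
    bacterial); `labels`: 1 for bacterial, 0 for (presumed) viral.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute an AUC")
    # Count concordant (positive-scores-higher) pairs; ties count one half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: a rule that ranks every bacterial case above every
# viral case gets AUC 1.0; chance performance is about 0.5.
print(roc_auc([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0]))  # -> 1.0
```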
Doubly Robust Estimation of Optimal Dynamic Treatment Regimes
DEFF Research Database (Denmark)
Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne
2014-01-01
We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...
Pulido-Velazquez, Manuel; Macian-Sorribes, Hector; María Benlliure-Moreno, Jose; Fullana-Montoro, Juan
2015-04-01
Water resources systems in areas with a strong tradition in water use are complex to manage because of the high number of constraints that overlap in time and space, creating a complicated framework in which past, present and future collide with each other. In addition, it is usual to find "hidden constraints" in system operations, which condition operation decisions while going unnoticed by anyone but the river managers and users. Becoming aware of those hidden constraints usually requires years of experience and a degree of involvement in that system's management operations normally beyond the possibilities of technicians. However, their impact on management decisions is strongly imprinted in the historical data records available. The purpose of this contribution is to present a methodology capable of assessing operating rules in complex water resources systems by combining historical records and expert criteria. Both sources are coupled using fuzzy logic. The procedure stages are: 1) organize preliminary expert-technician meetings to let the former explain how they manage the system; 2) set up a fuzzy rule-based system (FRB) structure according to the way the system is managed; 3) use the historical records available to estimate the inputs' fuzzy numbers, to assign preliminary output values to the FRB rules, and to train and validate these rules; 4) organize expert-technician meetings to discuss the rule structure and the inputs' quantification, returning if required to the second stage; 5) once the FRB structure is accepted, refine and complete its output values with the aid of the experts through meetings, workshops or surveys; 6) combine the FRB with a Decision Support System (DSS) to simulate the effect of those management decisions; 7) compare its results with the ones offered by the historical records and/or simulation or optimization models; and 8) discuss the model performance with the stakeholders, returning, if required, to the fifth or the second stage.
Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan
2016-01-01
The paper compares portfolio optimization under Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum-variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of returns rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...
Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev
2016-01-01
The paper compares portfolio optimization under Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum-variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of returns rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...
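The SSD decision rule the records above describe can be sketched with the standard partial-sum criterion for equally likely return samples of equal size; this is a generic illustration of the dominance check, not the precoded PSG module.

```python
def ssd_dominates(a, b):
    """Check whether return sample `a` dominates `b` by second-order
    stochastic dominance (SSD), assuming equally likely outcomes and
    equal sample sizes: every worst-first partial sum of `a`'s sorted
    returns must be at least the corresponding partial sum for `b`.
    """
    if len(a) != len(b):
        raise ValueError("samples must have equal size")
    ca = cb = 0.0
    for xa, xb in zip(sorted(a), sorted(b)):
        ca += xa
        cb += xb
        if ca < cb:
            return False
    return True

# A mean-preserving spread is dominated by the less risky sample,
# which is exactly the risk-averse preference SSD encodes.
print(ssd_dominates([2, 2], [1, 3]))
```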
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance calculated in the objective space guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given a higher chance of being selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
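The decision-space diversity selection described above can be sketched by binning archived solutions into decision-space cells and drawing the next parent from the least crowded cell; the grid binning here is a simple stand-in for the paper's clustering, and all names and the granularity are illustrative.

```python
import random
from collections import defaultdict

def select_for_diversity(archive, lows, highs, bins=4, rng=random):
    """Pick an archived solution, favouring sparse regions of the
    *decision* space.  `archive` is a list of decision vectors; `lows`
    and `highs` bound each decision variable; `bins` sets the grid
    granularity per dimension.
    """
    cells = defaultdict(list)
    for x in archive:
        # Map each variable to a grid cell index, clamped at the upper bound.
        key = tuple(
            min(bins - 1, int(bins * (xi - lo) / (hi - lo)))
            for xi, lo, hi in zip(x, lows, highs)
        )
        cells[key].append(x)
    # The least crowded cell supplies the parent for the next solution.
    sparsest = min(cells.values(), key=len)
    return rng.choice(sparsest)

# Two solutions crowd one corner; the isolated one is preferred.
archive = [[0.1, 0.1], [0.15, 0.12], [0.9, 0.9]]
print(select_for_diversity(archive, [0, 0], [1, 1], bins=2))
```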
The Application of Time-Delay Dependent H∞ Control Model in Manufacturing Decision Optimization
Directory of Open Access Journals (Sweden)
Haifeng Guo
2015-01-01
Full Text Available This paper uses a time-delay dependent H∞ control model to analyze the effect of manufacturing decisions on the process of transmission from resources to capability. We establish a theoretical framework of manufacturing management process based on three terms: resource, manufacturing decision, and capability. Then we build a time-delay H∞ robust control model to analyze the robustness of manufacturing management. With the state feedback controller between manufacturing resources and decision, we find that there is an optimal decision to adjust the process of transmission from resources to capability under uncertain environment. Finally, we provide an example to prove the robustness of this model.
The rule of nuclear power in the base-load portfolio optimization process
International Nuclear Information System (INIS)
Desiata, L.; D'Alberti, F.
2007-01-01
The pursuit of optimal portfolios, maximizing long-term profitability, is the main strategic challenge faced by electricity producers nowadays. Investment decisions, worth billions of euros, are affected by spot factors (such as current fuel price volatility) that often lead to unbalanced generation mixes. Our analysis presents a statistical-financial approach that highlights the role of nuclear within the base-load portfolio optimisation process.
AngelStow: A Commercial Optimization-Based Decision Support Tool for Stowage Planning
DEFF Research Database (Denmark)
Delgado-Ortegon, Alberto; Jensen, Rune Møller; Guilbert, Nicolas
save port fees, optimize use of vessel capacity, and reduce bunker consumption. Stowage Coordinators (SCs) produce these plans manually with the help of graphical tools, but high-quality SPs are hard to generate with the limited support they provide. In this abstract, we introduce AngelStow, which is a commercial optimization-based decision support tool for stowing container vessels, developed in collaboration between Ange Optimization and The IT University of Copenhagen. The tool assists SCs in the process of generating SPs interactively, focusing on satisfying and optimizing constraints and objectives that are tedious to deal with for humans, while letting the SCs use their expertise to deal with hard combinatorial objectives and corner cases.
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
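The AHP prioritization the abstract applies can be sketched with the row geometric-mean approximation of the principal eigenvector of a reciprocal pairwise comparison matrix; the matrix below is invented for illustration and is not the Seyitomer study's data.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using row geometric means (a common stand-in for
    the principal eigenvector)."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical pairwise comparison of three reclamation criteria
# (e.g. cost vs. environmental impact vs. land-use suitability).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])
```

The weights sum to one, and the ordering (first criterion dominates) matches the pairwise judgments; a full AHP application would also check the consistency ratio of the matrix.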
Decision Making with Imperfect Decision Makers
Guy, Tatiana Valentine; Wolpert, David H
2012-01-01
Prescriptive Bayesian decision making has reached a high level of maturity and is well-supported algorithmically. However, experimental data show that real decision makers choose such Bayes-optimal decisions surprisingly infrequently, often making decisions that are badly sub-optimal. So prevalent is such imperfect decision-making that it should be accepted as an inherent feature of real decision makers living within interacting societies. To date such societies have been investigated from an economic and game-theoretic perspective, and even to a degree from a physics perspective. However, lit
Decision fusion recognition based on modified evidence rule
Institute of Scientific and Technical Information of China (English)
黎湘; 刘永祥; 付耀文; 庄钊文
2001-01-01
A modified evidence combination rule with a combination parameter λ is proposed to solve some problems in D-S theory by considering the correlation and complementarity among the evidences as well as the size and intersection of subsets in the evidence. It can produce reasonable results even when the evidences are conflicting. Applying this rule to a real infrared/millimetre-wave fusion system, a satisfactory result has been obtained.
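The λ-modified rule itself is not spelled out in the abstract; as a baseline, the classical Dempster combination it modifies can be sketched as follows (focal elements and masses below are hypothetical).

```python
def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions, keyed by frozenset
    focal elements, renormalizing by the conflict mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                # Mass assigned to disjoint hypotheses is conflict.
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical sensors, each weighing "target" against uncertainty.
m1 = {frozenset({"target"}): 0.8, frozenset({"target", "decoy"}): 0.2}
m2 = {frozenset({"target"}): 0.6, frozenset({"target", "decoy"}): 0.4}
print(dempster_combine(m1, m2))
```

The modified rule in the paper additionally weights the combination by λ to handle highly conflicting evidences, which classical Dempster normalization is known to mishandle.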
39 CFR 3001.39 - Intermediate decisions.
2010-07-01
Title 39, Postal Service (2010-07-01). POSTAL REGULATORY COMMISSION, RULES OF PRACTICE AND PROCEDURE, Rules of General Applicability, Section 3001.39, Intermediate decisions. (a) Initial decision by presiding officer. In any proceedings in...
International Nuclear Information System (INIS)
Botin, Jose A; Guzman, Ronald R; Smith, Martin L
2011-01-01
Identifying, quantifying, and minimizing technical risks associated with investment decisions is a key challenge for mineral industry decision makers and investors. However, risk analysis in most bankable mine feasibility studies is based on stochastic modeling of the project Net Present Value (NPV), which, in most cases, fails to provide decision makers with a truly comprehensive analysis of the risks associated with technical and management uncertainty and, as a result, is of little use for risk management and project optimization. This paper presents a value-chain risk management approach where project risk is evaluated for each step of the project life cycle, from exploration to mine closure, and risk management is performed as part of a stepwise value-added optimization process.
Evaluation of the need for stochastic optimization of out-of-core nuclear fuel management decisions
International Nuclear Information System (INIS)
Thomas, R.L. Jr.
1989-01-01
Work has been completed on utilizing mathematical optimization techniques to optimize out-of-core nuclear fuel management decisions. The objective of such optimization is to minimize the levelized fuel cycle cost over some planning horizon. Typical decision variables include feed enrichments and numbers of assemblies, burnable poison requirements, and the burned fuel to reinsert for every cycle in the planning horizon. Engineering constraints imposed consist of such items as discharge burnup limits, a maximum enrichment limit, and target cycle energy productions. Earlier the authors reported on the development of the OCEON code, which employs the integer Monte Carlo Programming method as the mathematical optimization method. The discharge burnups and the feed enrichment and burnable poison requirements are evaluated, initially employing a linear reactivity core physics model, and refined using a coarse mesh nodal model. The economic evaluation is completed using a modification of the CINCAS methodology. Interest now is to assess the need for stochastic optimization, which would account for cost component and cycle energy production uncertainties. The implication of the present studies is that stochastic optimization with regard to cost component uncertainties need not be completed, since deterministic optimization will identify nearly the same family of near-optimum cycling schemes
Directory of Open Access Journals (Sweden)
Fereydoun Naghibi
2016-12-01
Full Text Available This paper presents an advanced method in urban growth modeling to discover transition rules of cellular automata (CA) using the artificial bee colony (ABC) optimization algorithm. Also, comparisons between the simulation results of CA models optimized by the ABC algorithm and by the particle swarm optimization (PSO) algorithm, as intelligent approaches, were performed to evaluate the potential of the proposed methods. According to previous studies, swarm intelligence algorithms for solving optimization problems, such as discovering transition rules of CA in land use change/urban growth modeling, can produce reasonable results. Modeling urban growth as a dynamic process is not straightforward because of the nonlinearity and heterogeneity among the effective variables involved, which can pose a number of challenges for traditional CA. The ABC algorithm, one of the new powerful swarm-based optimization algorithms, can be used to capture optimized transition rules of CA. This paper has proposed a methodology based on remote sensing data for modeling urban growth with CA calibrated by the ABC algorithm. The performance of the ABC-CA, PSO-CA, and CA-logistic models in land use change detection is tested for the city of Urmia, Iran, between 2004 and 2014. Validations of the models based on statistical measures such as overall accuracy, figure of merit, and total operating characteristic were made. We showed that the overall accuracy of the ABC-CA model was 89%, which was 1.5% and 6.2% higher than those of the PSO-CA and CA-logistic models, respectively. Moreover, the allocation disagreement (simulation error) of the simulation results for the ABC-CA, PSO-CA, and CA-logistic models is 11%, 12.5%, and 17.2%, respectively. Finally, for all evaluation indices, including running time, convergence capability, flexibility, statistical measurements, and the produced spatial patterns, the ABC-CA model showed relative improvement and therefore its superiority was confirmed.
Dardzinska, Agnieszka
2013-01-01
We are surrounded by data, numerical, categorical and otherwise, which must be analyzed and processed to convert it into information that instructs, answers or aids understanding and decision making. Data analysts in many disciplines, such as business, education or medicine, are frequently asked to analyze new data sets which are often composed of numerous tables possessing different properties. They try to find completely new correlations between attributes and show new possibilities for users. Action rules mining discusses some data mining and knowledge discovery principles and then describes representative concepts, methods and algorithms connected with action. The author introduces the formal definition of an action rule, the notion of a simple association action rule and a representative action rule, the cost of an association action rule, and gives a strategy for constructing simple association action rules of lowest cost. A new approach for generating action rules from datasets with numerical attributes...
Directory of Open Access Journals (Sweden)
Chongfeng Ren
2018-04-01
Full Text Available Water competing conflict among water competing sectors from different levels should be taken into consideration during the optimal allocation of water resources. Furthermore, uncertainties are inevitable in the optimal allocation of water resources. In order to deal with the above problems, this study developed a fuzzy max-min decision bi-level fuzzy programming model. The developed model was then applied to a case study in Wuwei, Gansu Province, China. In this study, the net benefit and the yield were regarded as the upper-level and lower-level objectives, respectively. Optimal water resource plans were obtained under different possibility levels of the fuzzy parameters, which could deal effectively with the water competing conflict between the upper level and the lower level. The obtained results are expected to make a great contribution in helping local decision-makers to deal with the water competing conflict between the upper and lower levels and with the optimal use of water resources under uncertainty.
ForEx++: A New Framework for Knowledge Discovery from Decision Forests
Directory of Open Access Journals (Sweden)
Md Nasim Adnan
2017-11-01
Full Text Available Decision trees are popularly used in a wide range of real-world problems for both prediction and classification (logic rule discovery). A decision forest is an ensemble of decision trees, and it is often built to achieve better predictive performance than a single decision tree. Besides improving predictive performance, a decision forest can be seen as a pool of logic rules with great potential for knowledge discovery. However, a standard-sized decision forest usually generates a large number of rules that a user may not be able to manage for effective knowledge analysis. In this paper, we propose a new, data set independent framework for extracting those rules that are comparatively more accurate, generalized and concise than others. We apply the proposed framework to rules generated by two different decision forest algorithms from some publicly available medical data sets on dementia and heart disease. We then compare the quality of the rules extracted by the proposed framework with rules generated from a single J48 decision tree and rules extracted by another recent method. The results reported in this paper demonstrate the effectiveness of the proposed framework.
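The three rule qualities the framework targets, accuracy, generalization (coverage), and conciseness (length), can be sketched with a simple scorer over a rule pool; the rule representation and data below are illustrative, not the ForEx++ scoring itself.

```python
def rule_stats(rule, data):
    """`rule` is (conditions, predicted_class); a condition is an
    (attribute, value) pair.  Returns (accuracy, coverage, length) over
    `data`, a list of (record_dict, actual_class) pairs."""
    conds, pred = rule
    matched = [y for x, y in data if all(x.get(a) == v for a, v in conds)]
    if not matched:
        return 0.0, 0.0, len(conds)
    acc = sum(y == pred for y in matched) / len(matched)
    return acc, len(matched) / len(data), len(conds)

def rank_rules(rules, data):
    """Prefer high accuracy, then high coverage, then short rules."""
    def key(r):
        acc, cov, length = rule_stats(r, data)
        return (-acc, -cov, length)
    return sorted(rules, key=key)

# Toy data: two rules with equal accuracy; the shorter, wider one wins.
data = [({"a": 1, "b": 0}, "y"), ({"a": 1, "b": 1}, "y"), ({"a": 0, "b": 0}, "n")]
r1 = ([("a", 1)], "y")
r2 = ([("a", 1), ("b", 0)], "y")
print(rank_rules([r2, r1], data)[0])
```

Note the connection to totally optimal rules: a rule that simultaneously attains minimum length and maximum coverage (at full accuracy) would rank first under any ordering of these criteria.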
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making
Directory of Open Access Journals (Sweden)
Sabine Prezenski
2017-08-01
Full Text Available Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making.
Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele
2017-01-01
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from
Working-memory load and temporal myopia in dynamic decision making.
Worthy, Darrell A; Otto, A Ross; Maddox, W Todd
2012-11-01
We examined the role of working memory (WM) in dynamic decision making by having participants perform decision-making tasks under single-task or dual-task conditions. In 2 experiments participants performed dynamic decision-making tasks in which they chose 1 of 2 options on each trial. The decreasing option always gave a larger immediate reward but caused future rewards for both options to decrease. The increasing option always gave a smaller immediate reward but caused future rewards for both options to increase. In each experiment we manipulated the reward structure such that the decreasing option was the optimal choice in 1 condition and the increasing option was the optimal choice in the other condition. Behavioral results indicated that dual-task participants selected the immediately rewarding decreasing option more often, and single-task participants selected the increasing option more often, regardless of which option was optimal. Thus, dual-task participants performed worse on 1 type of task but better on the other type. Modeling results showed that single-task participants' data were most often best fit by a win-stay, lose-shift (WSLS) rule-based model that tracked differences across trials, and dual-task participants' data were most often best fit by a Softmax reinforcement learning model that tracked recency-weighted average rewards for each option. This suggests that manipulating WM load affects the degree to which participants focus on the immediate versus delayed consequences of their actions and whether they employ a rule-based WSLS strategy, but it does not necessarily affect how well people weigh the immediate versus delayed benefits when determining the long-term utility of each option.
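The Softmax reinforcement-learning model that best fit the dual-task participants can be sketched as a recency-weighted average tracker with softmax action selection; the class name and parameter values below are illustrative, not the fitted model from the study.

```python
import math
import random

class SoftmaxLearner:
    """Tracks a recency-weighted average reward per option and chooses
    stochastically via a softmax over those values."""

    def __init__(self, n_options, alpha=0.2, temperature=1.0, rng=random):
        self.q = [0.0] * n_options
        self.alpha = alpha          # recency weight (learning rate)
        self.temperature = temperature
        self.rng = rng

    def choose(self):
        # Softmax: options with higher tracked value are chosen more often.
        exps = [math.exp(q / self.temperature) for q in self.q]
        total = sum(exps)
        r, cum = self.rng.random() * total, 0.0
        for i, e in enumerate(exps):
            cum += e
            if r <= cum:
                return i
        return len(exps) - 1

    def update(self, option, reward):
        # Exponential recency weighting: recent rewards dominate, so the
        # learner is drawn toward immediately rewarding options.
        self.q[option] += self.alpha * (reward - self.q[option])
```

This recency weighting is exactly why such a learner favors the immediately rewarding "decreasing" option: delayed consequences are never explicitly represented, unlike in the rule-based WSLS account.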
Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints
Dongnan, L.; Wan, W.; Zhao, J.
2017-12-01
Multi-objective reservoir operation is increasingly required to consider environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, which makes it difficult to direct reservoir releases. In an attempt to incorporate the benefit of environmental flow into economic achievement, a two-objective reservoir optimization model is developed and all 33 hydrologic parameters of IHA are explicitly formulated into constraints. The economic benefit is defined by Hydropower Production (HP) while the benefit of environmental flow is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by the principal component analysis method. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of Jing Hong reservoir, located in the upstream of the Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains optimal economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI capture acceptable values. Finally, to make this model more convenient for everyday practice, a simplified operation rule curve is extracted.
Directory of Open Access Journals (Sweden)
Bréart Gérard
2008-07-01
Background Numerous short-statured children are evaluated for growth hormone (GH) deficiency (GHD). In most patients, GH provocative tests are normal and are thus in retrospect unnecessary. Methods A retrospective cohort study was conducted to identify predictors of GHD in children seen for short stature, and to construct a very sensitive and fairly specific predictive tool to avoid unnecessary GH provocative tests. GHD was defined by the presence of 2 GH concentration peaks Results The initial study included 167 patients, 36 (22%) of whom had GHD, including 5 (3%) with certain GHD. Independent predictors of GHD were: growth rate Conclusion We have derived and performed an internal validation of a highly sensitive decision rule that could safely help to avoid more than 2/3 of the unnecessary GH tests. External validation of this rule is needed before any application.
Learning a New Selection Rule in Visual and Frontal Cortex
van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R
2016-01-01
How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the
Extraction of Static and Dynamic Reservoir Operation Rules by Genetic Programming
Directory of Open Access Journals (Sweden)
Habib Akbari Alashti
2014-11-01
Considering the necessity of desirable operation of limited water resources, and given the significant role of dams in controlling and consuming surface water, suitable operation rules are advantageous for optimal and sustainable operation of dams. This study investigates the hydroelectric supply of the one-reservoir Karoon3 system using nonlinear programming (NLP), a genetic algorithm (GA), genetic programming (GP) and fixed-length gene GP (FLGGP) in real-time operation of the dam, considering two approaches: static and dynamic operation rules. In the static operation rule, only one rule curve is extracted for all months in a year, whereas in the dynamic operation rule, monthly rule curves (12 rules) are extracted, one for each month of the year. In addition, nonlinear decision rule (NLDR) curves are considered, and the total deficiency function as the target (objective) function has been used for evaluating the performance of each method and approach. Results show appropriate efficiency of the GP and FLGGP methods in extracting operation rules in both approaches; their superiority over the operation rules yielded by GA and NLP is about 5%. Moreover, according to the results, FLGGP is an alternative for the GP method wherever the GP method cannot be used due to its limitations. Comparison of the two approaches of static and dynamic operation rules demonstrated the superiority of the dynamic operation rule over the static operation rule (about 10%), and therefore this method has more capability in real-time operation of reservoir systems.
Affective and cognitive decision-making in adolescents.
van Duijvenvoorde, Anna C K; Jansen, Brenda R J; Visser, Ingmar; Huizenga, Hilde M
2010-01-01
Adolescents demonstrate impaired decision-making in emotionally arousing situations, yet they appear to exhibit relatively mature decision-making skills in predominantly cognitive, low-arousal situations. In this study we compared adolescents' (13-15 years) performance on matched affective and cognitive decision-making tasks, in order to determine (1) their performance level on each task and (2) whether performance on the cognitive task was associated with performance on the affective task. Both tasks required a comparison of choice dimensions characterized by frequency of loss, amount of loss, and constant gain. Results indicated that in the affective task, adolescents performed sub-optimally by considering only the frequency of loss, whereas in the cognitive task adolescents used relatively mature decision rules by considering two or all three choice dimensions. Performance on the affective task was not related to performance on the cognitive task. These results are discussed in light of neural developmental trajectories observed in adolescence.
Stygar, A H; Kristensen, A R; Makulska, J
2014-08-01
The aim of this study was to provide farmers an efficient tool for supporting optimal decisions in the beef heifer rearing process. The complexity of beef heifer management prompted the development of a model including decisions on the feeding level during prepuberty (age optimal rearing strategy was found by maximizing the total discounted net revenues from the predicted future productivity of the Polish Limousine heifers defined as the cumulative BW of calves born from a cow calved until the age of 5 yr, standardized on the 210th day of age. According to the modeled optimal policy, heifers fed during the whole rearing period at the ADG of 810 g/d and generally weaned after the maximum suckling period of 9 mo should already be bred at the age of 13.2 mo and BW constituting 55.6% of the average mature BW. Based on the optimal strategy, 52% of all heifers conceived from May to July and calved from February to April. This optimal rearing pattern resulted in an average net return of EUR 311.6 per pregnant heifer. It was found that the economic efficiency of beef operations can be improved by applying different herd management practices to those currently used in Poland. Breeding at 55.6% of the average mature BW, after a shorter and less expensive rearing period, resulted in an increase in the average net return per heifer by almost 18% compared to the conventional system, in which heifers were bred after attaining 65% of the mature BW. Extension of the rearing period by 2.5 mo (breeding at the age 15.7 mo), due to a prepubertal growth rate lowered by 200 g, reduced the average net return per heifer by 6.2% compared to the results obtained under the basic model assumptions. In the future, the model may also be extended to investigate additional aspects of the beef heifer development, such as the environmental impacts of various heifer management decisions.
Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1998-03-01
This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in the Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that the potential target may take, and we use fuzzy logic strategies which enable the fusion of subjective attribute information from sensors and the PDB to make the derivation of target identity quicker, more precise, and accompanied by statistically quantifiable measures of confidence. The conventional Dempster-Shafer method lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad-hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
Optimal decision procedures for satisfiability in fragments of alternating-time temporal logics
DEFF Research Database (Denmark)
Goranko, Valentin; Vester, Steen
2014-01-01
We consider several natural fragments of the alternating-time temporal logics ATL* and ATL with restrictions on the nesting between temporal operators and strategic quantifiers. We develop optimal decision procedures for satisfiability in these fragments, showing that they have much lower complexity...
Rajesh Kumar; S.C. Kaushik; Raj Kumar; Ranjana Hans
2016-01-01
Brayton heat engine model is developed in the MATLAB Simulink environment and thermodynamic optimization based on finite time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto optimal frontier between triple and dual objectives is obtained and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision-making methods.
A Multiswarm Optimizer for Distributed Decision Making in Virtual Enterprise Risk Management
Directory of Open Access Journals (Sweden)
Yichuan Shao
2012-01-01
We develop an optimization model for risk management in a virtual enterprise environment based on a novel multiswarm particle swarm optimizer called PS2O. The main idea of PS2O is to extend the single population PSO to the interacting multiswarms model by constructing hierarchical interaction topology and enhanced dynamical update equations. With the hierarchical interaction topology, a suitable diversity in the whole population can be maintained. At the same time, the enhanced dynamical update rule significantly speeds up the multiswarm to converge to the global optimum. With five mathematical benchmark functions, PS2O is proved to have considerable potential for solving complex optimization problems. PS2O is then applied to risk management in a virtual enterprise environment. Simulation results demonstrate that the PS2O algorithm is more feasible and efficient than the PSO algorithm in solving this real-world problem.
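A minimal multiswarm PSO in the spirit of the description above can be sketched as follows. The "hierarchical" element here is simply an extra inter-swarm attraction term pulling particles toward the best position across all swarms; all coefficients and the benchmark function are illustrative assumptions, not the paper's PS2O.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def multiswarm_minimize(f, dim=2, n_swarms=3, swarm_size=5, iters=200, seed=0):
    """Toy multiswarm PSO: each particle is pulled toward its personal best,
    its own swarm's best, and the best position across all swarms."""
    rng = random.Random(seed)
    w, c1, c2, c3 = 0.6, 1.0, 1.0, 0.5  # assumed coefficients
    pos = [[[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm_size)]
           for _ in range(n_swarms)]
    vel = [[[0.0] * dim for _ in range(swarm_size)] for _ in range(n_swarms)]
    pbest = [[p[:] for p in s] for s in pos]
    sbest = [min(s, key=f)[:] for s in pos]          # per-swarm best
    gbest = min(sbest, key=f)[:]                     # inter-swarm best
    for _ in range(iters):
        for si in range(n_swarms):
            for pi in range(swarm_size):
                for d in range(dim):
                    vel[si][pi][d] = (w * vel[si][pi][d]
                        + c1 * rng.random() * (pbest[si][pi][d] - pos[si][pi][d])
                        + c2 * rng.random() * (sbest[si][d] - pos[si][pi][d])
                        + c3 * rng.random() * (gbest[d] - pos[si][pi][d]))
                    pos[si][pi][d] += vel[si][pi][d]
                if f(pos[si][pi]) < f(pbest[si][pi]):
                    pbest[si][pi] = pos[si][pi][:]
                if f(pos[si][pi]) < f(sbest[si]):
                    sbest[si] = pos[si][pi][:]
                if f(pos[si][pi]) < f(gbest):
                    gbest = pos[si][pi][:]
    return gbest
```

The extra `c3` term is what distinguishes this from a set of independent swarms: it couples the swarms so that information about the global optimum propagates across them.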
Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions
Azad, Mohammad
2014-09-13
The paper is devoted to the analysis of greedy algorithms for the minimization of the average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.
Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions
Azad, Mohammad; Moshkov, Mikhail
2014-01-01
The paper is devoted to the analysis of greedy algorithms for the minimization of the average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When compared with the optimal result obtained from a dynamic programming algorithm, some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.
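The greedy scheme described above can be illustrated with a toy implementation for tables whose rows carry sets of decisions. The split heuristic used here (minimize the largest branch) is an assumed stand-in for the paper's uncertainty measures, and the representation of rows as `(attribute_tuple, decision_set)` pairs is likewise illustrative.

```python
def common_decision(rows):
    """Rows are (attribute_tuple, decision_set) pairs. A subtable is solved
    when some decision belongs to every row's set of decisions."""
    shared = set(rows[0][1])
    for _, dset in rows[1:]:
        shared &= set(dset)
    return min(shared) if shared else None

def greedy_depth_sum(rows, attrs, depth=0):
    """Greedily build a decision tree and return the sum of leaf depths over
    rows; dividing by the row count gives the average depth."""
    if common_decision(rows) is not None or not attrs:
        return depth * len(rows)
    def largest_branch(a):
        values = {r[0][a] for r in rows}
        return max(sum(1 for r in rows if r[0][a] == v) for v in values)
    a = min(attrs, key=largest_branch)          # greedy attribute choice
    rest = [x for x in attrs if x != a]
    total = 0
    for v in {r[0][a] for r in rows}:
        branch = [r for r in rows if r[0][a] == v]
        total += greedy_depth_sum(branch, rest, depth + 1)
    return total
```

For a table where rows with attribute values (0,0), (0,1), (1,0), (1,1) carry decision sets {1}, {1,2}, {2}, {2}, one split on the first attribute suffices, giving average depth 1.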
Design and development of bio-inspired framework for reservoir operation optimization
Asvini, M. Sakthi; Amudha, T.
2017-12-01
Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.
Directory of Open Access Journals (Sweden)
Flávio da Silva Andrade
2017-10-01
This article concerns a topic that is not new, but it remains current: the participatory construction of the criminal decision in a democratic State ruled by law. Starting from the concepts of Rule of Law, of Guarantism and of Democracy, it seeks to renew the importance of the equal and dialectical participation of the parties, through the adversarial system, for the composition of a fair and legitimate criminal judicial decision. It is argued, from this perspective, that the parties should take the role of protagonists in the procedural scenario, since the decision should be built in a participatory way, i.e., based on the arguments and evidence presented, thus reducing the gaps that favor judicial discretion and decisionism. It is proposed, therefore, that the solution to the concrete case (acceptance or dismissal of the information or indictment, grant or rejection of a criminal precautionary measure, conviction or acquittal) should be elaborated with support on the contribution of the litigants, from the contrast of their arguments and of the evidence produced, in adversarial proceedings, in the regular course of the process.
Decision-making under surprise and uncertainty: Arsenic contamination of water supplies
Randhir, Timothy O.; Mozumder, Pallab; Halim, Nafisa
2018-05-01
With ignorance and potential surprise dominating decision making in water resources, a framework for dealing with such uncertainty is a critical need in hydrology. We operationalize the 'potential surprise' criterion proposed by Shackle, Vickers, and Katzner (SVK) to derive decision rules to manage water resources under uncertainty and ignorance. We apply this framework to managing water supply systems in Bangladesh that face severe, naturally occurring arsenic contamination. The uncertainty involved with arsenic in water supplies makes the application of conventional decision-making analysis ineffective. Given the uncertainty and surprise involved in such cases, we find that optimal decisions tend to favor actions that avoid irreversible outcomes instead of conventional cost-effective actions. We observe that diversification of the water supply system also emerges as a robust strategy to avert unintended outcomes of water contamination. Shallow wells had a slightly higher optimal allocation level (36%) compared to deep wells and surface treatment, which had allocation levels of roughly 32% each. The approach can be applied in a variety of other cases that involve decision making under uncertainty and surprise, a frequent situation in natural resources management.
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
Electricity Purchase Optimization Decision Based on Data Mining and Bayesian Game
Directory of Open Access Journals (Sweden)
Yajing Gao
2018-04-01
The openness of the electricity retail market results in power retailers facing fierce competition in the market. This article aims to analyze the electricity purchase optimization decision-making of each power retailer against the background of the big data era. First, in order to guide the power retailer to make a purchase of electricity, this paper considers the users' historical electricity consumption data and a comprehensive consideration of multiple factors, then uses a wavelet neural network (WNN) model based on "meteorological similarity day (MSD)" to forecast the user load demand. Second, in order to guide the quotation of the power retailer, this paper considers the multiple factors affecting the electricity price to cluster the sample set, and establishes a genetic algorithm-back propagation (GA-BP) neural network model based on fuzzy clustering (FC) to predict the short-term market clearing price (MCP). Third, based on the Sealed-bid Auction (SA) in game theory, a Bayesian Game Model (BGM) of the power retailer's bidding strategy is constructed, and the optimal bidding strategy is obtained from the Bayesian Nash Equilibrium (BNE) under different probability distributions. Finally, a practical example is proposed to prove that the model and method can provide an effective reference for the decision-making optimization of the sales company.
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
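The reward-rate account above can be made concrete with the standard closed-form DDM expressions for an unbiased diffusion with symmetric thresholds (as in Bogacz et al., 2006). The grid search and all parameter values below are illustrative, not the study's fitted values.

```python
import math

def ddm_stats(a, c, z):
    """Error rate and mean decision time for an unbiased drift-diffusion
    process with drift a, noise c, and thresholds at +/- z."""
    k = a * z / (c * c)
    error_rate = 1.0 / (1.0 + math.exp(2.0 * k))
    decision_time = (z / a) * math.tanh(k)
    return error_rate, decision_time

def reward_rate(a, c, z, rsi):
    """Correct responses per unit time: accuracy divided by the total time
    per trial (decision time plus response-stimulus interval)."""
    er, dt = ddm_stats(a, c, z)
    return (1.0 - er) / (dt + rsi)

def best_threshold(a=1.0, c=1.0, rsi=2.0):
    """Grid search for the reward-rate-maximizing threshold."""
    grid = [i * 0.01 for i in range(1, 301)]
    return max(grid, key=lambda z: reward_rate(a, c, z, rsi))
```

The key qualitative point is that the optimum is interior: a very low threshold yields fast but inaccurate responses, a very high threshold yields accurate but slow ones, and reward rate peaks in between.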
Directory of Open Access Journals (Sweden)
Rajesh Kumar
2016-06-01
Brayton heat engine model is developed in the MATLAB Simulink environment and thermodynamic optimization based on finite time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto optimal frontier between triple and dual objectives is obtained and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision-making methods. The triple-objective evolutionary approach applied to the proposed model gives power output, thermal efficiency and ecological function of (53.89 kW, 0.1611, −142 kW), which are 29.78%, 25.86% and 21.13% lower in comparison with the reversible system. Furthermore, the present study graphically reflects the effect of various heat capacitance rates and component efficiencies on the triple objectives. Finally, with the aim of error investigation, average and maximum errors of the obtained results are computed.
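Pareto optimality, the notion underlying the dual- and triple-objective frontiers in the abstract above, reduces to a simple dominance filter. A generic sketch follows (assuming all objectives are maximized; the sample points are illustrative, not the study's results).

```python
def dominates(a, b):
    """a dominates b if it is at least as good in every objective and
    strictly better in at least one (all objectives maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated points: the Pareto-optimal frontier."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Decision-making methods such as TOPSIS then pick one compromise point from this frontier; the filter itself only removes alternatives that are worse in every respect.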
Directory of Open Access Journals (Sweden)
Alik Abakarov
2013-04-01
The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and more adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
How to Assist Formalization of NL Regulations: Lessons from Business Rules Acquisition Experiments
Nazarenko , Adeline
2013-01-01
Decision systems usually rely on a set of business rules that describe the expected behavior of a system or an organization and that determine the decisions to be taken in different situations. However, rule acquisition is often the bottleneck that hinders the development of decision systems. When these rules are based on regulations written in Natural Language (NL), one solution is to derive formal business rules from the source documents. This approach also allows ch...
Directory of Open Access Journals (Sweden)
Hunziker Roger
2011-05-01
Background Physicians fear missing cases of pneumonia and treat many patients with signs of respiratory infection unnecessarily with antibiotics. This is an avoidable cause of the increasing worldwide problem of antibiotic resistance. We developed a user-friendly decision aid to rule out pneumonia and thus reduce the rate of needless prescriptions of antibiotics. Methods This was a prospective cohort study in which we enrolled patients older than 18 years with a new or worsened cough and fever without serious co-morbidities. Physicians recorded results of a standardized medical history and physical examination. C-reactive protein was measured and chest radiographs were obtained. We used Classification and Regression Trees to derive the decision tool. Results A total of 621 consenting eligible patients were studied; 598 were attending a primary care facility, patients were 48 years old on average, and 50% were male. Radiographic signs of pneumonia were present in 127 (20.5%) of patients. Antibiotics were prescribed to 234 (48.3%) of patients without pneumonia. In patients with C-reactive protein values below 10 μg/ml, or patients presenting with C-reactive protein between 11 and 50 μg/ml but without dyspnoea and daily fever, pneumonia can be ruled out. By applying this rule in clinical practice, antibiotic prescription could be reduced by 9.1% (95% confidence interval (CI): 6.4 to 11.8). Conclusions Following validation and confirmation in new patient samples, this tool could help rule out pneumonia and be used to reduce unnecessary antibiotic prescriptions in patients presenting with cough and fever in primary care. The algorithm might be especially useful in those instances where taking a medical history and physical examination alone are inconclusive for ruling out pneumonia.
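The derived rule, as stated in the abstract, is simple enough to express directly. The handling of the boundary value of exactly 10 μg/ml is an assumption here, since the abstract leaves the gap between "below 10" and "between 11 and 50" unspecified.

```python
def rule_out_pneumonia(crp, dyspnoea, daily_fever):
    """Decision aid sketched from the abstract: pneumonia can be ruled out
    when CRP is below 10 ug/ml, or when CRP is between 11 and 50 ug/ml and
    the patient has neither dyspnoea nor daily fever."""
    if crp < 10:
        return True
    if 11 <= crp <= 50 and not dyspnoea and not daily_fever:
        return True
    return False
```

This is exactly the kind of shallow tree a CART derivation produces: one laboratory threshold, refined by two clinical signs in the intermediate range.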
Do different methods of modeling statin treatment effectiveness influence the optimal decision?
B.J.H. van Kempen (Bob); B.S. Ferket (Bart); A. Hofman (Albert); S. Spronk (Sandra); E.W. Steyerberg (Ewout); M.G.M. Hunink (Myriam)
2012-01-01
Purpose: Modeling studies that evaluate statin treatment for the prevention of cardiovascular disease (CVD) use different methods to model the effect of statins. The aim of this study was to evaluate the impact of using different modeling methods on the optimal decision found in such studies.
Directory of Open Access Journals (Sweden)
S. K. M. Abujayyab
2015-10-01
This paper briefly introduced the theory and framework of geospatial site selection (GSS) and discussed the application and framework of artificial neural networks (ANNs). The related literature on the use of ANNs as decision rules in GSS published between 2000 and 2015 is scarce. As this study found, ANNs are not only adaptable to dynamic changes but also capable of improving the objectivity of acquisition in GSS, reducing time consumption, and providing high validation. ANNs make for a powerful tool for solving geospatial decision-making problems by enabling geospatial decision makers to implement their constraints and imprecise concepts. This tool offers a way to represent and handle uncertainty. Specifically, ANNs are decision rules implemented to enhance conventional GSS frameworks. The main assumption in implementing ANNs in GSS is that the current characteristics of existing sites are indicative of the degree of suitability of new locations with similar characteristics. GSS requires several input criteria that embody specific requirements and the desired site characteristics, which could contribute to geospatial sites. In this study, the proposed framework consists of four stages for implementing ANNs in GSS. A multilayer feed-forward network with a backpropagation algorithm was used to train the networks on prior sites to assess, generalize, and evaluate the outputs on the basis of the inputs for the new sites. Two metrics, namely, the confusion matrix and receiver operating characteristic tests, were utilized to achieve high accuracy and validation. Results proved that ANNs provide reasonable and efficient results as an accurate and inexpensive quantitative technique for GSS.
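The confusion-matrix validation step mentioned above is straightforward to compute. This generic sketch assumes a binary suitable/unsuitable site labeling, which is an illustrative simplification of the paper's setup.

```python
def confusion_matrix(actual, predicted):
    """2x2 confusion matrix for binary labels (truthy = suitable site):
    returns (true_pos, false_pos, false_neg, true_neg)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    return tp, fp, fn, tn

def accuracy(actual, predicted):
    """Fraction of sites classified correctly."""
    tp, fp, fn, tn = confusion_matrix(actual, predicted)
    return (tp + tn) / (tp + fp + fn + tn)
```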
Wattel, P.J.; Richelle, I.; Schön, W.; Traversa, E.
2016-01-01
The Commission State aid decisions on individual tax rulings have created legal uncertainty, which may have been one of their goals. This article comments on their political and policy merits and effects, it wonders whether EU law requires member States to have—and apply in a certain manner—specific
Making optimal investment decisions for energy service companies under uncertainty: A case study
International Nuclear Information System (INIS)
Deng, Qianli; Jiang, Xianglin; Zhang, Limao; Cui, Qingbin
2015-01-01
Varied initial energy efficiency investments would result in different annual energy savings achievements. In order to balance the savings revenue and the potential capital loss through EPC (Energy Performance Contracting), a cost-effective investment decision is needed when selecting energy efficiency technologies. In this research, an approach is developed for the ESCO (Energy Service Company) to evaluate the potential energy savings profit, and thus make the optimal investment decisions. The energy savings revenue under uncertainties, which are derived from energy efficiency performance variation and energy price fluctuation, are first modeled as stochastic processes. Then, the derived energy savings profit is shared by the owner and the ESCO according to the contract specification. A simulation-based model is thus built to maximize the owner's profit, and at the same time, satisfy the ESCO's expected rate of return. In order to demonstrate the applicability of the proposed approach, the University of Maryland campus case is also presented. The proposed method could not only help the ESCO determine the optimal energy efficiency investments, but also assist the owner's decision in the bidding selection. - Highlights: • An optimization model is built for determining energy efficiency investment for ESCO. • Evolution of the energy savings revenue is modeled as a stochastic process. • Simulation is adopted to calculate investment balancing the owner and the ESCO's profit. • A campus case is presented to demonstrate applicability of the proposed approach
Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables
Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail
2013-01-01
In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization
A model of reward- and effort-based optimal decision making and motor control.
Directory of Open Access Journals (Sweden)
Lionel Rigoux
Costs (e.g., energetic expenditure) and benefits (e.g., food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e., the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and for detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.
Dynamic Programming Approach for Construction of Association Rule Systems
Alsolami, Fawaz
2016-11-18
In this paper, an application of the dynamic programming approach to optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages: first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the set of association rules constructed for an information system and a lower bound on the minimum possible cardinality of the rule set, based on information obtained during the algorithm's work, as well as the results obtained for length.
Dynamic Programming Approach for Construction of Association Rule Systems
Alsolami, Fawaz; Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2016-01-01
In this paper, an application of the dynamic programming approach to optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages: first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the set of association rules constructed for an information system and a lower bound on the minimum possible cardinality of the rule set, based on information obtained during the algorithm's work, as well as the results obtained for length.
Optimal decision making and matching are tied through diminishing returns.
Kubanek, Jan
2017-08-08
How individuals make decisions has been a matter of long-standing debate among economists and researchers in the life sciences. In economics, subjects are viewed as optimal decision makers who maximize their overall reward income. This framework has been widely influential, but requires a complete knowledge of the reward contingencies associated with a given choice situation. Psychologists and ecologists have observed that individuals tend to use a simpler "matching" strategy, distributing their behavior in proportion to relative rewards associated with their options. This article demonstrates that the two dominant frameworks of choice behavior are linked through the law of diminishing returns. The relatively simple matching can in fact provide maximal reward when the rewards associated with decision makers' options saturate with the invested effort. Such saturating relationships between reward and effort are hallmarks of the law of diminishing returns. Given the prevalence of diminishing returns in nature and social settings, this finding can explain why humans and animals so commonly behave according to the matching law. The article underscores the importance of the law of diminishing returns in choice behavior.
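The article's central claim, that matching-like behavior can be reward-maximizing when returns saturate, can be illustrated with a toy allocation problem. The hyperbolic return functions and all parameters below are assumptions for demonstration, not the article's derivation.

```python
# Illustrative sketch: allocate a fixed effort budget between two options
# whose rewards saturate with invested effort (diminishing returns), and
# find the reward-maximizing split by grid search.

def reward(effort, scale, half_sat=2.0):
    """Saturating (hyperbolic) return on invested effort (assumed form)."""
    return scale * effort / (effort + half_sat)

def best_split(total=10.0, step=0.01):
    """Grid-search the split of `total` effort maximizing summed reward."""
    best_x, best_val = 0.0, float("-inf")
    k = 0
    while k * step <= total:
        x = k * step
        val = reward(x, scale=8.0) + reward(total - x, scale=4.0)
        if val > best_val:
            best_x, best_val = x, val
        k += 1
    return best_x, best_val

x1, total_reward = best_split()
x2 = 10.0 - x1
# More effort goes to the richer option, echoing the matching tendency
# the article links to the law of diminishing returns.
```

Under saturation, the optimum spreads effort across both options rather than concentrating on the single best one, which is the qualitative behavior the matching law describes.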
Data-Driven Markov Decision Process Approximations for Personalized Hypertension Treatment Planning
Directory of Open Access Journals (Sweden)
Greggory J. Schell PhD
2016-10-01
Full Text Available Background: Markov decision process (MDP) models are powerful tools. They enable the derivation of optimal treatment policies but may incur long computational times and generate decision rules that are challenging to interpret by physicians. Methods: In an effort to improve usability and interpretability, we examined whether Poisson regression can approximate optimal hypertension treatment policies derived by an MDP for maximizing a patient’s expected discounted quality-adjusted life years. Results: We found that our Poisson approximation to the optimal treatment policy matched the optimal policy in 99% of cases. This high accuracy translates to nearly identical health outcomes for patients. Furthermore, the Poisson approximation results in 104 additional quality-adjusted life years per 1000 patients compared to the Seventh Joint National Committee’s treatment guidelines for hypertension. The comparative health performance of the Poisson approximation was robust to the cardiovascular disease risk calculator used and calculator calibration error. Limitations: Our results are based on Markov chain modeling. Conclusions: Poisson model approximation for blood pressure treatment planning has high fidelity to optimal MDP treatment policies, which can improve usability and enhance transparency of more personalized treatment policies.
Decision and Inhibitory Trees for Decision Tables with Many-Valued Decisions
Azad, Mohammad
2018-06-06
Decision trees are among the most commonly used tools in decision analysis, knowledge representation, machine learning, etc., for their simplicity and interpretability. We consider an extension of the dynamic programming approach to process the whole set of decision trees for a given decision table, which was previously only attainable by brute-force algorithms. We study decision tables with many-valued decisions (each row may contain multiple decisions) because they are more reasonable models of data in many cases. To address this problem in a broad sense, we consider not only decision trees but also inhibitory trees, where terminal nodes are labeled with “≠ decision”. Inhibitory trees can sometimes describe more knowledge from datasets than decision trees. As cost functions, we consider depth or average depth to minimize time complexity of trees, and the number of nodes or the number of terminal or nonterminal nodes to minimize the space complexity of trees. We investigate the multi-stage optimization of trees relative to some cost functions, and also the possibility to describe the whole set of strictly optimal trees. Furthermore, we study the bi-criteria optimization cost vs. cost and cost vs. uncertainty for decision trees, and cost vs. cost and cost vs. completeness for inhibitory trees. The most interesting application of the developed technique is the creation of multi-pruning and restricted multi-pruning approaches, which are useful for knowledge representation and prediction. The experimental results show that decision trees constructed by these approaches can often outperform the decision trees constructed by the CART algorithm. Another application includes the comparison of 12 greedy heuristics for single- and bi-criteria optimization (cost vs. cost) of trees. We also study the three approaches (decision tables with many-valued decisions, decision tables with most common decisions, and decision tables with generalized decisions) to handle
Integrated Case Based and Rule Based Reasoning for Decision Support
Eshete, Azeb Bekele
2009-01-01
This project is a continuation of my specialization project, which focused on studying theoretical concepts related to the case based reasoning method, the rule based reasoning method, and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial improvement with regard to performance over the individual methods. Verdande Technology AS wants to try integrating the rule based reasoning method with an existing case based system. This project focu...
Voting systems for environmental decisions.
Burgman, Mark A; Regan, Helen M; Maguire, Lynn A; Colyvan, Mark; Justus, James; Martin, Tara G; Rothley, Kris
2014-04-01
Voting systems aggregate preferences efficiently and are often used for deciding conservation priorities. Desirable characteristics of voting systems include transitivity, completeness, and Pareto optimality, among others. Voting systems that are common and potentially useful for environmental decision making include simple majority, approval, and preferential voting. Unfortunately, no voting system can guarantee an outcome while also satisfying a range of very reasonable performance criteria. Furthermore, voting methods may be manipulated by decision makers and strategic voters if they have knowledge of the voting patterns and alliances of others in the voting populations. The difficult properties of voting systems arise in routine decision making when there are multiple criteria and management alternatives. Because each method has flaws, we do not endorse one method. Instead, we urge organizers to be transparent about the properties of proposed voting systems and to offer participants the opportunity to approve the voting system as part of the ground rules for operation of a group. © 2014 The Authors. Conservation Biology published by Wiley Periodicals, Inc., on behalf of the Society for Conservation Biology.
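The abstract's point that no voting system can guarantee an outcome while satisfying reasonable criteria is classically illustrated by the Condorcet cycle, where pairwise majority preference is intransitive. This minimal example is textbook material, not taken from the article.

```python
# The classic Condorcet cycle with three voters: pairwise majorities form
# a cycle A > B > C > A, so majority preference is intransitive and there
# is no Condorcet winner.

ballots = [           # each ballot ranks candidates best-to-worst
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

print(majority_prefers("A", "B", ballots))  # True
print(majority_prefers("B", "C", ballots))  # True
print(majority_prefers("C", "A", ballots))  # True: the cycle closes
```

Each candidate beats one rival and loses to another, so a group using pairwise majority rule here has no stable winner, which is the kind of failure the abstract alludes to.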
The future of decision-making in critical care after Cuthbertson v. Rasouli.
Hawryluck, Laura; Baker, Andrew J; Faith, Andrew; Singh, Jeffrey M
2014-10-01
The Supreme Court of Canada (SCC) ruling on Cuthbertson v. Rasouli has implications for all acute healthcare providers. This well-publicized case involved a disagreement between healthcare providers and a patient's family regarding the principles surrounding withdrawal of life support, which the physicians involved considered no longer of medical benefit and outside the standard of care, and whether consent was required for such withdrawals. Our objective in writing this article is to clarify the implications of this ruling on the care of critically ill patients. SCC ruling Cuthbertson v. Rasouli. The SCC ruled that consent must be obtained for all treatments that serve a "health-related purpose", including withdrawal of such treatments. The SCC did not fully consider what the standard of care should be. Health-related purpose is not sufficient in and of itself to mandate treatment, and clinicians must still ensure that their patients or decision-makers are aware of the possible medical benefits, risks, and expected outcomes of treatments. The provision of treatments that have no potential to provide medical benefit and carry only risks would still fall outside the standard of care. Nevertheless, due to their health-related purpose, physicians must seek consent for the discontinuation of these treatments. The SCC ruled that due to the legal definition of "health-related purpose", which is distinct from medical benefit, consent is required to withdraw life-support and outlined the steps to be taken should conflict arise. The SCC decision did not directly address the role of medical standard of care in these situations. In order to ensure optimal decision-making and communication with patients and their families, it is critical for healthcare providers to have a clear understanding of the implications of this legal ruling on medical practice.
Directory of Open Access Journals (Sweden)
Yoichi Hayashi
2016-01-01
Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex without good interpretability and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva from four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly-extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.
Directory of Open Access Journals (Sweden)
Daiki Min
2017-11-01
Full Text Available Recently, much research has focused on lowering carbon emissions in logistics. This paper attempts to contribute to the literature on joint shipment size and carbon reduction decisions by developing novel models for distribution systems under direct shipment and peddling distribution strategies. Unlike the literature, which has simply investigated the effects of carbon costs on operational decisions, we address how to reduce carbon emissions and logistics costs by adjusting shipment size and making an optimal decision on carbon reduction investment. An optimal decision is made by analyzing the distribution cost, including not only logistics and carbon trading costs but also the cost of adjusting carbon emission factors. No previous research has explicitly considered the two sources of carbon emissions, whereas we develop a model covering the difference in managing carbon emissions from transportation and storage. Structural analysis shows how to determine an optimal shipment size and emission factors in closed form. Moreover, we analytically prove the possibility of reducing the distribution cost and carbon emissions at the same time. Numerical analysis validates these results and demonstrates some interesting findings on carbon and distribution cost reduction.
Modified Dempster-Shafer approach using an expected utility interval decision rule
Cheaito, Ali; Lecours, Michael; Bosse, Eloi
1999-03-01
The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, which include a number of radar, IFF and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of the redistribution of the confidence levels of the eliminated propositions which would otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.
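For background on the combination operation the abstract refers to, here is the textbook (unmodified) Dempster rule of combination; the paper's contribution modifies this with an expected-utility-interval decision rule and proposition pruning, which is not shown. The sensor masses are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Returns the normalized combined mass function.
    """
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:                      # intersecting evidence reinforces
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:                          # empty intersection is conflict mass
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two sensors reporting on target identity (illustrative masses only).
radar = {frozenset({"friend"}): 0.6, frozenset({"friend", "neutral"}): 0.4}
esm   = {frozenset({"friend"}): 0.5, frozenset({"hostile"}): 0.3,
         frozenset({"friend", "neutral"}): 0.2}
fused = combine(radar, esm)
```

Note how combining two sources over a 3-element frame already produces masses on new subsets; with many sources and large platform databases this growth is exactly the explosion the paper seeks to limit.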
Fuzzy multiobjective models for optimal operation of a hydropower system
Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.
2013-06-01
Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
Decision support models for natural gas dispatch
Energy Technology Data Exchange (ETDEWEB)
Chin, L. (Bentley College, Waltham, MA (United States)); Vollmann, T.E. (International Inst. for Management Development, Lausanne (Switzerland))
A decision support model is presented which gives utilities the tools to manage the purchasing of natural gas supplies in the most cost-effective manner without reducing winter safety stocks below minimum levels. In Business As Usual (BAU) purchasing, quantities vary with the daily forecasts. With Material Requirements Planning (MRP) and Linear Programming (LP), two types of factors are used: seasonal weather and decision rules. Under current practices, the BAU simulation uses the least expensive gas source first, then adds successively more expensive sources. Material Requirements Planning is a production planning technique which uses a parent item master production schedule (MPS) to determine time-phased requirements for components; here the MPS is the aggregate gas demand forecast for the contract year. This satisfies daily demand with the least expensive gas, uses more expensive gas when necessary, and automatically computes available-to-promise (ATP) gas, so a dispatcher knows daily when extra gas supplies may be available to promise. Linear Programming is a mathematical algorithm used to determine optimal allocations of scarce resources to achieve a desired result; the LP model determines optimal daily gas purchase decisions with respect to supply cost minimization. Using these models, it appears possible to raise gross income margins 6 to 10% with minimal additions of customers and no new gas supply.
Decision support models for natural gas dispatch
International Nuclear Information System (INIS)
Chin, L.; Vollmann, T.E.
1992-01-01
A decision support model is presented which gives utilities the tools to manage the purchasing of natural gas supplies in the most cost-effective manner without reducing winter safety stocks below minimum levels. In Business As Usual (BAU) purchasing, quantities vary with the daily forecasts. With Material Requirements Planning (MRP) and Linear Programming (LP), two types of factors are used: seasonal weather and decision rules. Under current practices, the BAU simulation uses the least expensive gas source first, then adds successively more expensive sources. Material Requirements Planning is a production planning technique which uses a parent item master production schedule (MPS) to determine time-phased requirements for components; here the MPS is the aggregate gas demand forecast for the contract year. This satisfies daily demand with the least expensive gas, uses more expensive gas when necessary, and automatically computes available-to-promise (ATP) gas, so a dispatcher knows daily when extra gas supplies may be available to promise. Linear Programming is a mathematical algorithm used to determine optimal allocations of scarce resources to achieve a desired result; the LP model determines optimal daily gas purchase decisions with respect to supply cost minimization. Using these models, it appears possible to raise gross income margins 6 to 10% with minimal additions of customers and no new gas supply.
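The "least expensive gas source first" behavior described in the abstract above is a greedy merit-order allocation. A minimal sketch with invented source names, costs, and capacities (not the paper's data):

```python
def dispatch(demand, sources):
    """Least-cost-first ('Business As Usual') gas purchasing sketch.

    sources: list of (name, unit_cost, daily_capacity) tuples.
    Fills demand from the cheapest source upward; returns the purchase
    plan and its total cost.
    """
    purchases, total_cost, remaining = {}, 0.0, demand
    for name, cost, cap in sorted(sources, key=lambda s: s[1]):
        take = min(cap, remaining)
        if take > 0:
            purchases[name] = take
            total_cost += take * cost
            remaining -= take
        if remaining <= 0:
            break
    return purchases, total_cost

# Illustrative sources: (name, $/unit, daily capacity in units).
sources = [("spot", 4.0, 300), ("contract", 2.5, 500), ("storage", 3.0, 200)]
buy, cost = dispatch(900, sources)
```

The LP model in the abstract generalizes this greedy rule by optimizing over the whole contract year instead of one day at a time.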
Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations
Papadouris, Nicos
2012-01-01
This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…
Directory of Open Access Journals (Sweden)
Sandrine Leroy
Full Text Available Predicting vesico-ureteral reflux (VUR) ≥3 at the time of the first urinary tract infection (UTI) would make it possible to restrict cystography to high-risk children. We previously derived the following clinical decision rule for that purpose: cystography should be performed in cases with ureteral dilatation and a serum procalcitonin level ≥0.17 ng/mL, or without ureteral dilatation when the serum procalcitonin level is ≥0.63 ng/mL. The rule yielded 86% sensitivity with 46% specificity. We aimed to test its reproducibility. A secondary analysis of prospective series of children with a first UTI was performed. The rule was applied, and predictive ability was calculated. The study included 413 patients (157 boys, VUR ≥3 in 11%) from eight centers in five countries. The rule offered a 46% specificity (95% CI, 41-52), not different from the one in the derivation study. However, the sensitivity significantly decreased to 64% (95% CI, 50-76), leading to a difference of 20% (95% CI, 17-36). In all, 16 (34%) patients among the 47 with VUR ≥3 were misdiagnosed by the rule. This lack of reproducibility might result primarily from a difference between derivation and validation populations regarding inflammatory parameters (CRP, PCT); the validation set samples may have been collected earlier than the derivation ones. The rule built to predict VUR ≥3 had a stable specificity (i.e., 46%) but a decreased sensitivity (i.e., 64%) because of the time variability of PCT measurement. Some refinement may be warranted.
Rule Induction-Based Knowledge Discovery for Energy Efficiency
Chen, Qipeng; Fan, Zhong; Kaleshi, Dritan; Armour, Simon M D
2015-01-01
Rule induction is a practical approach to knowledge discovery. Provided that a problem is formulated, rule induction can return the knowledge that addresses the goal of this problem as if-then rules. The primary goals of knowledge discovery are prediction and description. The if-then rule format of knowledge representation is easily understandable, enabling users to make decisions. This paper presents the potential of rule induction for energy efficiency. In particular, three rule induct...
Weather Avoidance Using Route Optimization as a Decision Aid: An AWIN Topical Study. Phase 1
1998-01-01
The aviation community is faced with reducing the fatal aircraft accident rate by 80 percent within 10 years. This must be achieved even with ever-increasing traffic and a changing National Airspace System. This is not just an altruistic goal, but a real necessity if our growing level of commerce is to continue. Honeywell Technology Center's topical study, "Weather Avoidance Using Route Optimization as a Decision Aid", addresses these pressing needs. The goal of this program is to use route optimization and user interface technologies to develop a prototype decision aid for dispatchers and pilots. This decision aid will suggest possible diversions through single or multiple weather hazards and present weather information with a human-centered design. At the conclusion of the program, we will have a laptop prototype decision aid that will be used to demonstrate concepts to industry for integration into commercialized products for dispatchers and/or pilots. With weather a factor in 30% of aircraft accidents, our program will prevent accidents by strategically avoiding weather hazards in flight. By supplying more relevant weather information in a human-centered format, along with the tools to generate flight plans around weather, aircraft exposure to weather hazards can be reduced. Our program directly addresses NASA's five-year investment areas of Strategic Weather Information and Weather Operations (simulation/hazard characterization and crew/dispatch/ATC hazard monitoring, display, and decision support) (NASA Aeronautics Safety Investment Strategy: Weather Investment Recommendations, April 15, 1997). This program consists of two phases; Phase I concluded December 31, 1998. The first phase defined weather data requirements, lateral routing algorithms, and conceptual displays for a user-centered design. Phase II runs from January 1999 through September 1999. The second phase integrates vertical routing into the lateral optimizer and combines the user
Diagnostic tests’ decision-making rules based upon analysis of ROC-curves
Directory of Open Access Journals (Sweden)
Л. В. Батюк
2015-10-01
Full Text Available In this paper we propose a model which substantiates diagnostic decision making based on the analysis of Receiver Operating Characteristic curves (ROC-curves) and predicts optimal values of diagnostic indicators of biomedical information. To assess the quality of the test result prediction, the standard criteria of the sensitivity and specificity of the model were used. Values of these criteria were calculated for the cases when the sensitivity of the test was greater than the specificity by several times, when the number of correct diagnoses was maximal, when the sensitivity of the test was equal to its specificity, and when the sensitivity of the test was several times greater than the specificity of the test. To assess the significance of the factor characteristics and to compare the prognostic characteristics of models, we used mathematical modeling and plotting of ROC-curves. The optimal value of the diagnostic indicator was found to be achieved when the sensitivity of the test is equal to its specificity. The model was adapted to solve the case when the sensitivity of the test is greater than the specificity of the test.
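The abstract's key finding, that the optimal operating point occurs where sensitivity equals specificity, can be located numerically by sweeping a decision threshold. The test scores below are invented for illustration; this is not the paper's model or data.

```python
# Small sketch: find the threshold where sensitivity equals specificity
# (the equal-error operating point) for illustrative diagnostic scores.

diseased = [0.9, 0.8, 0.7, 0.4]   # scores of truly positive cases (assumed)
healthy  = [0.6, 0.3, 0.2, 0.1]   # scores of truly negative cases (assumed)

def sens_spec(threshold):
    """Sensitivity and specificity of 'score >= threshold means positive'."""
    sens = sum(s >= threshold for s in diseased) / len(diseased)
    spec = sum(s < threshold for s in healthy) / len(healthy)
    return sens, spec

# Sweep thresholds and keep the one minimizing |sensitivity - specificity|.
thresholds = [i / 100 for i in range(101)]
best = min(thresholds, key=lambda t: abs(sens_spec(t)[0] - sens_spec(t)[1]))
sens, spec = sens_spec(best)
```

With real data the same sweep traces out the ROC curve; the equal-error point is one of several criteria the paper compares against the maximum-correct-diagnoses point.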
Optimal decisions and comparison of VMI and CPFR under price-sensitive uncertain demand
Directory of Open Access Journals (Sweden)
Yasaman Kazemi
2013-06-01
Full Text Available Purpose: The purpose of this study is to compare the performance of two advanced supply chain coordination mechanisms, Vendor Managed Inventory (VMI) and Collaborative Planning, Forecasting and Replenishment (CPFR), under a price-sensitive uncertain demand environment, and to make the optimal decisions on retail price and order quantity for both mechanisms. Design/methodology/approach: Analytical models are first applied to formulate a profit maximization problem; furthermore, by applying simulation optimization solution procedures, the optimal decisions and performance comparisons are accomplished. Findings: The results of the case study supported the widely held view that more advanced coordination mechanisms yield greater supply chain profit than less advanced ones. Information sharing not only increases the supply chain profit, but is also required for the coordination mechanisms to achieve improved performance. Research limitations/implications: This study considers a single vendor and a single retailer in order to simplify the supply chain structure for modeling. Practical implications: Knowledge obtained from this study about the conditions appropriate for each specific coordination mechanism and the exact functions of coordination programs is critical to managerial decisions for industry practitioners who may apply the coordination mechanisms considered. Originality/value: This study includes the production cost in Economic Order Quantity (EOQ) equations and combines it with price-sensitive demand under stochastic settings while comparing VMI and CPFR supply chain mechanisms and maximizing the total profit. Although many studies have worked on information sharing within the supply chain, determining the performance measures when demand is price-sensitive and stochastic was not reported in the past literature.
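For reference, the classical EOQ formula that the study above extends with production cost and price-sensitive stochastic demand is Q* = sqrt(2DK/h). The numbers below are illustrative, not from the study.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Classical Economic Order Quantity: Q* = sqrt(2*D*K/h).

    demand: annual demand D; order_cost: fixed cost per order K;
    holding_cost: holding cost per unit per year h. Shown as background
    only; the paper's models add production cost and stochastic demand.
    """
    return sqrt(2.0 * demand * order_cost / holding_cost)

q = eoq(demand=1200, order_cost=100.0, holding_cost=6.0)  # 200.0 units
```

At Q* the annual ordering cost (D/Q)K equals the annual holding cost (Q/2)h, which is the balance the deterministic model strikes.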
2010-04-01
20 CFR 418.1355 (2010-04-01): What are the rules for reopening a decision by an administrative law judge of the Office of Medicare Hearings and Appeals (OMHA) or by the Medicare Appeals Council (MAC)? Section 418.1355, Employees' Benefits, SOCIAL SECURITY ADMINISTRATION, MEDICARE SUBSIDIES, Medicare Part B...
Gower, Robert M.
2018-02-12
We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
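For intuition about the family of randomized linear-system solvers the abstract builds on, here is a basic (non-accelerated) randomized Kaczmarz iteration; the paper's contribution is an accelerated variant of sketches like this, which is not reproduced here. The system below is invented for illustration.

```python
import random

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Basic randomized Kaczmarz iteration for a consistent system Ax = b.

    At each step, project the current iterate onto the solution
    hyperplane of one randomly chosen row. Converges linearly in
    expectation for consistent systems.
    """
    rng = random.Random(seed)
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(len(A))
        row = A[i]
        resid = b[i] - sum(r * xi for r, xi in zip(row, x))
        step = resid / sum(r * r for r in row)
        x = [xi + step * r for xi, r in zip(x, row)]
    return x

# Consistent 2x2 system with exact solution (1, 2).
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = randomized_kaczmarz(A, b)
```

Acceleration in the paper's sense adds a momentum-like sequence on top of such projections to improve the convergence rate.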
INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.
Villar, Sofía S
2016-01-01
Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems, in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that this class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and that it is recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for more general instances, the performance advantages of the Whittle index rule over other simple heuristics.
TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.
Mühlbacher, Thomas; Linhardt, Lorenz; Möller, Torsten; Piringer, Harald
2018-01-01
Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.
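The Pareto-optimal filtering at the heart of TreePOD's selection guidance can be sketched independently of the visualization. The candidate trees and their (accuracy, size) values below are invented for illustration, not TreePOD's actual data or API.

```python
def pareto_front(candidates):
    """Keep candidates not dominated on (maximize accuracy, minimize size).

    candidates: list of (name, accuracy, size) tuples. A candidate is
    dominated if another is at least as accurate and no larger, and
    strictly better in at least one of the two objectives.
    """
    front = []
    for name, acc, size in candidates:
        dominated = any(
            (a2 >= acc and s2 <= size) and (a2 > acc or s2 < size)
            for _, a2, s2 in candidates
        )
        if not dominated:
            front.append((name, acc, size))
    return front

# Hypothetical candidate trees: (name, accuracy, node count).
trees = [("t1", 0.90, 40), ("t2", 0.85, 10), ("t3", 0.84, 25), ("t4", 0.92, 80)]
front = pareto_front(trees)
print(front)
```

Here t3 is dropped because t2 is both more accurate and smaller; the remaining trees trace the accuracy-vs-size trade-off curve that a tool like TreePOD presents for selection.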
International Nuclear Information System (INIS)
Avenhaus, R.
1992-01-01
In the beginning of nuclear material safeguards, emphasis was placed on safe detection of diversion of nuclear material. Later, the aspect of timely detection became equally important. Since there is a trade-off between these two objectives, the question of an appropriate compromise was raised. In this paper, a decision theoretical framework is presented in which the objectives of the two players, inspector and inspectee, are expressed in terms of general utility functions. Within this framework, optimal safeguards strategies are defined, and furthermore, conditions are formulated under which the optimization criteria corresponding to the objectives mentioned above can be justified.
A preference aggregation model and application in AHP-group decision making
Yang, Taiyi; Yang, De; Chao, Xiangrui
2018-04-01
Group decision making processes integrate individual preferences into a group preference by applying aggregation rules to preference relations. The two most widely used approaches, the aggregation of individual judgements and the aggregation of individual priorities, are traditionally employed in the Analytic Hierarchy Process to deal with group decision making problems. In both cases, it is assumed that the group preference is approximately the weighted mathematical expectation of the individual judgements or individual priorities. We propose new preference aggregation methods using optimization models in order to obtain a group preference that is close to all individual priorities. Illustrative examples are examined to demonstrate the proposed models in application.
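The weighted aggregation this abstract refers to can be illustrated with a classical AHP group scheme, the element-wise weighted geometric mean of the experts' pairwise comparison matrices, which preserves reciprocity. This is a generic sketch of that standard baseline, not the authors' optimization model:

```python
def aggregate_judgements(matrices, weights):
    """Aggregation of individual judgements: element-wise weighted
    geometric mean of reciprocal pairwise comparison matrices."""
    n = len(matrices[0])
    agg = [[1.0] * n for _ in range(n)]
    for M, w in zip(matrices, weights):
        for i in range(n):
            for j in range(n):
                agg[i][j] *= M[i][j] ** w
    return agg

# Two experts, equal weight; entry (0, 1) becomes sqrt(2 * 8) = 4,
# and reciprocity is preserved: entry (1, 0) is 1/4.
A = [[1, 2], [0.5, 1]]
B = [[1, 8], [0.125, 1]]
G = aggregate_judgements([A, B], [0.5, 0.5])
```

The geometric (rather than arithmetic) mean is the usual choice here precisely because it keeps the aggregated matrix reciprocal.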
Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes
Czech Academy of Sciences Publication Activity Database
Sladký, Karel
2013-01-01
Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR a CONACyT(CZ) 171396 Institutional support: RVO:67985556 Keywords: discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf
Optimal Financing Decisions of Two Cash-Constrained Supply Chains with Complementary Products
Directory of Open Access Journals (Sweden)
Yuting Li
2016-04-01
Full Text Available In recent years, financing difficulties have beset small and medium enterprises (SMEs), especially emerging SMEs. Joint financing among members within a supply chain is one solution for SMEs. What about joint financing among members of different supply chains? In order to answer this question, we first employ the Stackelberg game to propose three kinds of financing decision models for two cash-constrained supply chains with complementary products. Secondly, we analyze these models qualitatively and find that the joint financing decision of the two supply chains is the optimal one. Lastly, we conduct numerical simulations, not only to illustrate the above results but also to find that the larger the cross-price sensitivity coefficients are, the higher the motivation for participants to make joint financing decisions, and the more profit they gain.
Energy Technology Data Exchange (ETDEWEB)
John, Oliver
2012-07-01
The author of the contribution under consideration reports on risk-based economic optimization of the investment decisions of regulated power distribution system operators. The focus is the economically rational decision behavior of operators under given regulatory requirements; investments in power distribution systems are the decisions under study. Starting from a description of theoretical and practical regulatory approaches, their financial implications are quantified first. On this basis, optimization strategies for investment behavior are derived. For this purpose, an optimization algorithm is developed and applied to exemplary companies. Finally, the effects of uncertainties in regulatory systems are investigated; in this context, Monte Carlo simulations are used in conjunction with real options analysis.
Is expected utility theory normative for medical decision making?
Cohen, B J
1996-01-01
Expected utility theory is felt by its proponents to be a normative theory of decision making under uncertainty. The theory starts with some simple axioms that are held to be rules that any rational person would follow. It can be shown that if one adheres to these axioms, a numerical quantity, generally referred to as utility, can be assigned to each possible outcome, with the preferred course of action being that which has the highest expected utility. One of these axioms, the independence principle, is controversial, and is frequently violated in experimental situations. Proponents of the theory hold that these violations are irrational. The independence principle is simply an axiom dictating consistency among preferences, in that it dictates that a rational agent should hold a specified preference given another stated preference. When applied to preferences between lotteries, the independence principle can be demonstrated to be a rule that is followed only when preferences are formed in a particular way. The logic of expected utility theory is that this demonstration proves that preferences should be formed in this way. An alternative interpretation is that this demonstrates that the independence principle is not a valid general rule of consistency, but in particular, is a rule that must be followed if one is to consistently apply the decision rule "choose the lottery that has the highest expected utility." This decision rule must be justified on its own terms as a valid rule of rationality by demonstration that violation would lead to decisions that conflict with the decision maker's goals. This rule does not appear to be suitable for medical decisions because often these are one-time decisions in which expectation, a long-run property of a random variable, would not seem to be applicable. This is particularly true for those decisions involving a non-trivial risk of death.
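The decision rule under discussion, "choose the lottery that has the highest expected utility", is mechanically simple; the sketch below is a generic illustration (the lotteries are invented, and nothing here is specific to medical decisions):

```python
def expected_utility(lottery):
    """Expected utility of a lottery given as (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

def best_option(lotteries):
    """The expected-utility rule: return the index of the lottery with
    the highest expected utility."""
    return max(range(len(lotteries)), key=lambda i: expected_utility(lotteries[i]))

sure_thing = [(1.0, 0.5)]            # utility 0.5 for certain
gamble = [(0.5, 1.0), (0.5, 0.2)]    # expected utility 0.5*1.0 + 0.5*0.2 = 0.6
choice = best_option([sure_thing, gamble])  # the gamble, index 1
```

The article's point is precisely that this expectation is a long-run property of a random variable, which is why its normative force for one-time decisions is contested.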
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for performance-based rehabilitation/maintenance contracts for airport concrete pavement, whereby two types of life cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design optimal rehabilitation/maintenance plans, together with a methodology for revising those plans based upon monitoring data via Bayesian updating rules. The validity of the methodology presented in this paper is examined in case studies carried out for the H airport.
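A non-homogeneous Markov chain of the kind used for the deterioration process differs from the homogeneous case only in that the transition matrix may change from period to period. A minimal sketch (the two condition states and the matrices below are invented for illustration, not taken from the paper):

```python
def propagate(dist, transition_matrices):
    """Propagate a condition-state distribution through a non-homogeneous
    Markov chain: one (possibly different) transition matrix per period."""
    for P in transition_matrices:
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]
    return dist

# Two states (good, poor); deterioration accelerates in the second period.
P1 = [[0.9, 0.1], [0.0, 1.0]]
P2 = [[0.7, 0.3], [0.0, 1.0]]
d = propagate([1.0, 0.0], [P1, P2])
# after two periods: P(good) = 0.9 * 0.7 = 0.63
```

In the paper's setting, the per-period matrices would additionally be conditioned on the ground consolidation process, which is what makes the chain non-homogeneous.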
Optimization of warehouse location through fuzzy multi-criteria decision making methods
Directory of Open Access Journals (Sweden)
C. L. Karmaker
2015-07-01
Full Text Available Strategic warehouse location-allocation is a multi-staged decision-making problem with both numerical and qualitative criteria. In order to survive in the global business scenario by improving supply chain performance, companies must examine the cross-functional drivers in the optimization of logistic systems. Careful observation makes it evident that strategic warehouse location selection becomes challenging as the number of alternatives and conflicting criteria increases. The issue becomes particularly problematic when conventional methods are applied to the imprecise nature of linguistic assessments: the qualitative judgments involved in the selection process are often imprecise for the decision makers, and this imprecision must be handled explicitly. Fuzzy multi-criteria decision making methods have been used in this research as aids in making location-allocation decisions. The proposed method consists of two core steps. In the first step, the criteria of the problem are identified, and the weights of the sectors and subsectors are determined using fuzzy AHP. In the second step, eligible alternatives are ranked using TOPSIS and fuzzy TOPSIS comparatively. An application of these methodologies to a real-life problem is presented.
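Of the two ranking steps, the crisp TOPSIS variant is compact enough to sketch; the fuzzy variant follows the same outline with fuzzy arithmetic and defuzzified distances. This is a generic textbook sketch (matrix, weights, and criteria below are invented), not the paper's implementation:

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: rank alternatives by relative closeness to the ideal
    solution. benefit[j] is True for benefit criteria, False for costs."""
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then criterion weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((v - p) ** 2 for v, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((v - w) ** 2 for v, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two alternatives, one benefit criterion and one cost criterion; the
# second alternative is better on both, so it gets closeness 1.0.
scores = topsis([[7, 9], [9, 7]], [0.5, 0.5], [True, False])
```

The higher the closeness coefficient, the better the alternative; fuzzy AHP would supply the `weights` vector in the paper's two-step pipeline.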
OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.
Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein
2018-01-01
Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may impact the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed methods design was used, including a heuristic evaluation, a thinking-aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire. For the heuristic evaluation, four experts evaluated the interface. In total, twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating a high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended, as these were shown to identify unique usability problems. The evaluation provides input to optimize the usability of a decision-support tool, and may serve as a vantage point for other developers conducting usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.
Directory of Open Access Journals (Sweden)
Tinggui Chen
2013-01-01
Full Text Available Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments, which can help to understand the implications of connectivity on different aspects of system performance and also assist in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks either happen at the same time or are scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach treats this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource-constrained multiproject scheduling problem. Finally, a case from the engineering design of a chemical processing system is used to illustrate the proposed approach.
Extending the horizons advances in computing, optimization, and decision technologies
Joseph, Anito; Mehrotra, Anuj; Trick, Michael
2007-01-01
Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.
Automatic generation of optimal business processes from business rules
Steen, B.; Ferreira Pires, Luis; Iacob, Maria Eugenia
2010-01-01
In recent years, business process models are increasingly being used as a means for business process improvement. Business rules can be seen as requirements for business processes, in that they describe the constraints that must hold for business processes that implement these business rules.
Directory of Open Access Journals (Sweden)
Mohammad Reza Bazargan-Lari
2011-01-01
Full Text Available Developing optimal operating policies for conjunctive use of surface and groundwater resources when different decision makers and stakeholders with conflicting objectives are involved is usually a challenging task. This problem becomes more complex when objectives related to surface and groundwater quality are taken into account. In this paper, a new methodology is developed for real-time conjunctive use of surface and groundwater resources. In the proposed methodology, a well-known multi-objective genetic algorithm, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is employed to develop a Pareto front among the objectives. The Young conflict resolution theory is also used for resolving the conflict of interests among decision makers. To develop the real-time conjunctive use operating rules, Probabilistic Support Vector Machines (PSVMs), which are capable of providing probability distribution functions of decision variables, are utilized. The proposed methodology is applied to the Tehran aquifer in the Tehran metropolitan area, Iran. Stakeholders in the study area have some conflicting interests, including supplying water with acceptable quality, reducing pumping costs, improving groundwater quality, and controlling groundwater table fluctuations. In the proposed methodology, the MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using the Young conflict resolution theory. The selected solution (optimal monthly operating policies) is used to train and verify a PSVM. The results show the significance of applying an integrated conflict resolution approach and the capability of support vector machines for the real-time conjunctive use of surface and groundwater resources in the study area. It is also shown that the validation accuracy of the proposed operating rules is higher than 80
Methodological approaches based on business rules
Directory of Open Access Journals (Sweden)
Anca Ioana ANDREESCU
2008-01-01
Full Text Available Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change rules and shortens the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize the rules and move them outside the application. This paper analyzes and evaluates three well-known business rules approaches. It also outlines some critical factors that have to be taken into account in the decision to introduce business rules facilities in a software system. Based on the concept of explicit manipulation of business rules in a software system, the need for a general approach based on business rules is discussed.
Driver's Behavior and Decision-Making Optimization Model in Mixed Traffic Environment
Directory of Open Access Journals (Sweden)
Xiaoyuan Wang
2015-02-01
Full Text Available Driving is an information-processing procedure that goes on unceasingly. For research on traffic flow theory, it is very important to study drivers' information-processing patterns in a mixed traffic environment. In this paper, the bicycle is regarded as a kind of information source for vehicle drivers, and the "conflict point method" is brought forward to analyze the influence of bicycles on driving behavior. The "conflict" is translated into a special kind of car-following or lane-changing process. Furthermore, the computer's clocked scan step length is reduced to 0.1 s, in order to scan and analyze the dynamic (and static) information that influences driving behavior in a more exact way. The driver's decision-making process is described through information fusion based on duality contrast and fuzzy optimization theory. Model tests and verification show that the simulation results obtained with the "conflict point method" are basically consistent with the field data. It is feasible to imitate driving behavior and the driver's information fusion process with the proposed methods, and the decision-making optimization process can be described more accurately through the computer's precision clocked scan strategy. The study in this paper can provide a foundation for further research on the multiresource information fusion process of driving behavior.
Development of a fuzzy optimization model, supporting global warming decision-making
International Nuclear Information System (INIS)
Leimbach, M.
1996-01-01
An increasing number of models have been developed to support global warming response policies. The model constructors are facing a lot of uncertainties which limit the evidence of these models. The support of climate policy decision-making is only possible in a semi-quantitative way, as presented by a Fuzzy model. The model design is based on an optimization approach, integrated in a bounded risk decision-making framework. Given some regional emission-related and impact-related restrictions, optimal emission paths can be calculated. The focus is not only on carbon dioxide but on other greenhouse gases too. In the paper, the components of the model will be described. Cost coefficients, emission boundaries and impact boundaries are represented as Fuzzy parameters. The Fuzzy model will be transformed into a computational one by using an approach of Rommelfanger. In the second part, some problems of applying the model to computations will be discussed. This includes discussions on the data situation and the presentation, as well as interpretation of results of sensitivity analyses. The advantage of the Fuzzy approach is that the requirements regarding data precision are not so strong. Hence, the effort for data acquisition can be reduced and computations can be started earlier. 9 figs., 3 tabs., 17 refs., 1 appendix
Zhu, H.; Liu, H.W.; Ou, Carol; Davison, R.M.; Yang, Z.R.
2017-01-01
Cross-organizational collaborative decision-making involves a great deal of private information which companies are often reluctant to disclose, even when they need to analyze data collaboratively. The lack of effective privacy-preserving mechanisms for optimizing cross-organizational collaborative
The decision optimization of product development by considering the customer demand saturation
Directory of Open Access Journals (Sweden)
Qing-song Xing
2015-05-01
Full Text Available Purpose: The purpose of this paper is to analyze the impact of over-meeting customer demands on the product development process, on the basis of a quantitative model of customer demands, development cost, and time, and then to propose a corresponding product development optimization decision. Design/methodology/approach: First of all, a survey is carried out to obtain customer demand information, and customer demand weights are quantified using the variation coefficient method. Secondly, the relationship between customer demands and product development time and cost is analyzed based on quality function deployment, and a corresponding mathematical model is established. On this basis, the concept of customer demand saturation and an optimization decision method for product development are put forward and applied to the notebook development process of a company. Finally, when customer demand is saturated, the consistency between strongly satisfying customer demands and prioritizing the customer demands with a high degree of attention is proved, as is the stability of customer demand saturation under different parameters. Findings: Development cost and time rise sharply when customer demands are over-met. By taking customer demand saturation into account, the relationship between customer demand and development time and cost is quantified and balanced, and the resulting sequence of meeting customer demands is basically consistent with the customer demand survey results. Originality/value: The paper proposes a model of customer demand saturation and proves the correctness and effectiveness of the product development decision method.
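The variation coefficient method used to quantify demand weights is a standard objective-weighting scheme: each criterion's weight is proportional to its coefficient of variation across the surveyed responses. A minimal sketch (the data shapes and sample values are assumptions for illustration):

```python
import statistics

def variation_coefficient_weights(columns):
    """Variation coefficient method: each criterion's weight is its
    coefficient of variation (population std / mean), normalized to sum to 1."""
    cvs = [statistics.pstdev(col) / statistics.mean(col) for col in columns]
    total = sum(cvs)
    return [cv / total for cv in cvs]

# A criterion with constant scores carries no discriminating information,
# so it receives zero weight.
w = variation_coefficient_weights([[3.0, 3.0, 3.0], [1.0, 2.0, 3.0]])
```

The intuition is that a criterion on which respondents disagree more (higher relative dispersion) is more informative for distinguishing demands, and so is weighted more heavily.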
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Barton, Michael
2015-10-24
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
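The optimality notion here, exactness on the target space with the fewest possible nodes, can be illustrated with the classical polynomial case. The snippet below is not the paper's spline construction; it merely checks the polynomial-exactness conditions that such rules satisfy, for the 2-point Gauss-Legendre rule (exact for all degrees up to 3 on [-1, 1] with only two nodes):

```python
import math

def apply_rule(nodes, weights, f):
    """Evaluate a quadrature rule: sum_i w_i * f(x_i)."""
    return sum(w * f(x) for x, w in zip(nodes, weights))

# 2-point Gauss-Legendre rule on [-1, 1]. The exactness equations it
# satisfies (rule = exact moment, for each monomial) are exactly the kind
# of polynomial system that homotopy continuation tracks.
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
weights = [1.0, 1.0]
for k in range(4):
    exact = 2 / (k + 1) if k % 2 == 0 else 0.0  # integral of x^k over [-1, 1]
    assert abs(apply_rule(nodes, weights, lambda x: x ** k) - exact) < 1e-12
```

In the paper, the unknown nodes and weights of a rule for a spline space play the role of the higher-dimensional point whose defining polynomial system is deformed from a source space with a known rule.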
Gaussian quadrature for splines via homotopy continuation: Rules for C2 cubic splines
Barton, Michael; Calo, Victor M.
2015-01-01
We introduce a new concept for generating optimal quadrature rules for splines. To generate an optimal quadrature rule in a given (target) spline space, we build an associated source space with known optimal quadrature and transfer the rule from the source space to the target one, while preserving the number of quadrature points and therefore optimality. The quadrature nodes and weights, considered as a higher-dimensional point, form a zero of a particular system of polynomial equations. As the space is continuously deformed by changing the source knot vector, the quadrature rule gets updated using polynomial homotopy continuation. For example, starting with C1 cubic splines with uniform knot sequences, we demonstrate the methodology by deriving the optimal rules for uniform C2 cubic spline spaces, where the rule was only conjectured to date. We validate our algorithm by showing that the resulting quadrature rule is independent of the path chosen between the target and the source knot vectors as well as of the source rule chosen.
R. Venkata Rao
2012-01-01
A paper published by Maniya and Bhatt (2011) (An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems, Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method, named the “Preference Selection Index (PSI) method”, for selection of an optimal facility layout design. The authors claimed that the method was logical and more appropriate and that the method gives directly the o...
High Level Rule Modeling Language for Airline Crew Pairing
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
2011-09-01
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
Barton, Michael
2016-07-21
We introduce Gaussian quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. By definition, these spaces are of even degrees. The optimal quadrature rules we recently derived (Bartoň and Calo, 2016) act on spaces of the smallest odd degrees and are, therefore, still slightly sub-optimal. In this work, we derive optimal rules directly for even-degree spaces and therefore further improve our recent result. We use optimal quadrature rules for spaces over two elements as elementary building blocks, and recursively apply the homotopy continuation concept described in Bartoň and Calo (2016) to derive optimal rules for arbitrary admissible numbers of elements. We demonstrate the proposed methodology on relevant examples, where we derive optimal rules for various even-degree spline spaces. We also discuss convergence of our rules to their asymptotic counterparts, which are the analogues of the midpoint rule of Hughes et al. (2010), exact and optimal for infinite domains.
Directory of Open Access Journals (Sweden)
Xia Lei
2010-12-01
Full Text Available For general multi-objective optimization methods it is hard to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method in a real application: multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision-making based on prior information has better global searching capability when sampling data is deficient.
How Family Status and Social Security Claiming Options Shape Optimal Life Cycle Portfolios.
Hubener, Andreas; Maurer, Raimond; Mitchell, Olivia S
2016-04-01
We show how optimal household decisions regarding work, retirement, saving, portfolio allocations, and life insurance are shaped by the complex financial options embedded in U.S. Social Security rules and uncertain family transitions. Our life cycle model predicts sharp consumption drops on retirement, an age-62 peak in claiming rates, and earlier claiming by wives versus husbands and single women. Moreover, life insurance is mainly purchased on men's lives. Our model, which takes Social Security rules seriously, generates wealth and retirement outcomes that are more consistent with the data, in contrast to earlier and less realistic models.
Zhang, Dezhi; Li, Shuangyan; Qin, Jin
2014-01-01
This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.
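The multinomial logit model underlying the heuristic algorithm splits demand across alternatives in proportion to exponentiated utilities. A generic sketch (the utility values are illustrative; in the paper's setting they would be logistics users' disutilities of competing services):

```python
import math

def logit_shares(utilities):
    """Multinomial logit choice probabilities:
    P_k = exp(V_k) / sum_j exp(V_j)."""
    mx = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - mx) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

shares = logit_shares([1.0, 1.0, 0.0])
# the two equal-utility services split evenly and each dominates the
# lower-utility one
```

Because shares respond smoothly to utility changes, a fixed-point iteration between operators' fare/frequency choices and users' logit shares is a natural heuristic for the three-level model.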
Directory of Open Access Journals (Sweden)
Dezhi Zhang
2014-01-01
Full Text Available This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users’ demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators’ service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.
Zhang, Dezhi; Li, Shuangyan
2014-01-01
This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209
Optimizing perioperative decision making: improved information for clinical workflow planning.
Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph
2012-01-01
Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders, in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our models generated feasible solutions that varied as expected based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.
International Nuclear Information System (INIS)
Apparigliato, R.
2008-06-01
In this PhD thesis, we focus on the problem of weekly risk management in electric production. In the first part of this work, we investigate how to take into account stochastic inflows in the optimal management of a hydraulic valley. Our model is based on robust optimization and linear decision rules. A validation procedure based on simulation over random scenarios shows that we are able to postpone violations of volume constraints at very low cost. The second part deals with the problem of active management of the electrical power margin, defined as the difference between the total offer and the total demand, considering the different random parameters which affect the electrical system. The objective is to determine optimal solutions to be taken in order to satisfy the demand in 99% of the cases. To that end, we propose a new open-loop formulation, based on the stochastic process of the power margin and on the use of probabilistic constraints. To be able to solve this problem, we generate power margin scenarios using more realistic methods than those used in exploitation. At last, a closed-loop approach, based on the 'Stochastic Programming with Step Decision Rules' heuristic introduced by Thenie and Vial, is studied. First results are quite promising in comparison with the open-loop ones. (author)
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-09-01
The enclosed document describes a conceptual decision tool (hereinafter, Tool) for determining applicability of and for optimizing air sparging systems. The Tool was developed by a multi-disciplinary team of internationally recognized experts in air sparging technology, led by a group of project and task managers at Parsons Engineering Science, Inc. (Parsons ES). The team included Mr. Douglas Downey and Dr. Robert Hinchee of Parsons ES, Dr. Paul Johnson of Arizona State University, Dr. Richard Johnson of Oregon Graduate Institute, and Mr. Michael Marley of Envirogen, Inc. User Community Panel Review was coordinated by Dr. Robert Siegrist of Colorado School of Mines (also of Oak Ridge National Laboratory) and Dr. Thomas Brouns of Battelle/Pacific Northwest Laboratory. The Tool is intended to provide guidance to field practitioners and environmental managers for evaluating the applicability and optimization of air sparging as a remedial action technique.
Directory of Open Access Journals (Sweden)
Sacha Bourgeois-Gironde
2012-09-01
The aim of this paper is to assess the relevance of methodological transfers from behavioral ecology to experimental economics with respect to the elicitation of intertemporal preferences. More precisely, our discussion will stem from the analysis of Stephens and Anderson's (2001) seminal article. In their study with blue jays they document that foraging behavior typically implements short-sighted choice rules which are beneficial in the long run. Such long-term profitability of short-sighted behavior cannot be evidenced when using a self-control paradigm (one which contrasts, in a binary way, smaller-sooner and larger-later payoffs) but becomes apparent when ecological patch paradigms (replicating economic situations in which the main trade-off consists in staying on a food patch or leaving for another patch) are implemented. We transfer this methodology in view of contrasting foraging strategies and self-control in human intertemporal choices.
Ant groups optimally amplify the effect of transiently informed individuals
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
Minimization of decision tree depth for multi-label decision tables
Azad, Mohammad
2014-10-01
In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.
Minimization of decision tree depth for multi-label decision tables
Azad, Mohammad; Moshkov, Mikhail
2014-01-01
In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the target of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When comparing with the optimal result obtained from the dynamic programming algorithm, we found that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.
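A minimal brute-force sketch of the depth-minimization idea (illustrative only, not the authors' greedy or dynamic programming algorithms): a subtable can become a leaf as soon as some decision is common to all of its remaining rows; otherwise every attribute split is tried recursively and the cheapest is kept. The toy table and helper names below are invented for the example.

```python
def min_depth(rows):
    """Minimum decision-tree depth for a multi-label decision table.
    rows: tuple of (attribute_values_tuple, admissible_decisions_frozenset)."""
    # Leaf case: some decision is shared by every remaining row.
    if frozenset.intersection(*(d for _, d in rows)):
        return 0
    best = float("inf")
    for i in range(len(rows[0][0])):
        values = {a[i] for a, _ in rows}
        if len(values) == 1:
            continue  # constant attribute: splitting makes no progress
        depth = 1 + max(
            min_depth(tuple(r for r in rows if r[0][i] == v)) for v in values
        )
        best = min(best, depth)
    return best

# Two binary attributes; each row carries a set of admissible decisions.
table = (
    ((0, 0), frozenset({1})),
    ((0, 1), frozenset({1, 2})),
    ((1, 0), frozenset({2})),
    ((1, 1), frozenset({2})),
)
print(min_depth(table))  # splitting on the first attribute alone suffices
```

Here the first attribute separates the rows into two groups that each share a common decision, so a tree of depth 1 suffices, whereas a greedy rule that happened to split on the second attribute first would need depth 2.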
Estimation of power lithium-ion battery SOC based on fuzzy optimal decision
He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin
2018-06-01
In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium battery needs to be estimated accurately. Analyzing the common SOC estimation methods, and drawing on the open-circuit-voltage characteristic and the Kalman filter algorithm, we established a lithium battery SOC estimation method based on fuzzy optimal decision using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
Justification, optimization and decision-aiding in existing exposure situations
International Nuclear Information System (INIS)
Hedemann-Jensen, Per
2004-01-01
The existing ICRP system of radiological protection from 1990 (ICRP Publication 60) can be seen as a binary or dual-line system dealing with protection in exposure situations categorized as either practices or interventions. The distinction between practices and interventions is summarized in the paper with focus on some of the problems experienced in making such a distinction. The protection principles within the existing system of protection are presented with emphasis on the application to de facto or existing exposure situations. Decision on countermeasures to mitigate the consequences of existing exposure situations such as nuclear or radiological accidents and naturally occurring exposure situations include factors or attributes describing benefits from the countermeasure and those describing harm. Some of these attributes are discussed and the general process of justification of intervention and optimization of protection arriving at generic reference levels for implementing protective measures is presented. In addition, the role of radiological protection professionals and other stakeholders in the decision-making process is discussed. Special attention is given to the question whether radiological protection should form only one of many decision-aiding inputs to a broader societal decision-making process or whether societal aspects should be fully integrated into the radiological protection framework. The concepts of practices and interventions, however logical they are, have created some confusion when applied to protection of the public following a nuclear or radiological accident. These problems may be solved in a new set of general ICRP recommendations on radiological protection, which are anticipated to supersede Publication 60 in 2005. The evolution of the basic ICRP principles for radiological protection in existing exposure situations into a new set of ICRP recommendations is briefly discussed based upon the various material that has been presented
Body, Richard; Burrows, Gillian; Carley, Simon; Lewis, Philip S
2015-10-01
The Manchester Acute Coronary Syndromes (MACS) decision rule may enable acute coronary syndromes to be immediately 'ruled in' or 'ruled out' in the emergency department. The rule incorporates heart-type fatty acid binding protein (h-FABP) and high sensitivity troponin T levels. The rule was previously validated using a semiautomated h-FABP assay that was not practical for clinical implementation. We aimed to validate the rule with an automated h-FABP assay that could be used clinically. In this prospective diagnostic cohort study we included patients presenting to the emergency department with suspected cardiac chest pain. Serum drawn on arrival was tested for h-FABP using an automated immunoturbidimetric assay (Randox) and high sensitivity troponin T (Roche). The primary outcome, a diagnosis of acute myocardial infarction (AMI), was adjudicated based on 12 h troponin testing. A secondary outcome, major adverse cardiac events (MACE; death, AMI, revascularisation or new coronary stenosis), was determined at 30 days. Of the 456 patients included, 78 (17.1%) had AMI and 97 (21.3%) developed MACE. Using the automated h-FABP assay, the MACS rule had the same C-statistic for MACE as the original rule (0.91; 95% CI 0.88 to 0.92). 18.9% of patients were identified as 'very low risk' and thus eligible for immediate discharge with no missed AMIs and a 2.3% incidence of MACE (n=2, both coronary stenoses). 11.1% of patients were classed as 'high-risk' and had a 92.0% incidence of MACE. Our findings validate the performance of a refined MACS rule incorporating an automated h-FABP assay, facilitating use in clinical settings. The effectiveness of this refined rule should be verified in an interventional trial prior to implementation. UK CRN 8376.
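The C-statistic of 0.91 reported above is the probability that a randomly chosen patient who develops MACE receives a higher risk score than a randomly chosen patient who does not, with ties counting one half. A minimal pairwise computation, with invented scores and outcomes, looks like this:

```python
def c_statistic(scores, outcomes):
    """AUC / C-statistic by direct pairwise comparison (ties count 0.5)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # illustrative risk scores
outcomes = [1, 1, 0, 1, 0, 0]             # 1 = MACE, 0 = no MACE
print(c_statistic(scores, outcomes))      # 8 of 9 event/non-event pairs ordered correctly
```

A value of 0.5 would mean the score ranks patients no better than chance; 1.0 would mean every event patient outscores every non-event patient.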
Roy, S. G.; Gold, A.; Uchida, E.; McGreavy, B.; Smith, S. M.; Wilson, K.; Blachly, B.; Newcomb, A.; Hart, D.; Gardner, K.
2017-12-01
Dam removal has become a cornerstone of environmental restoration practice in the United States. One outcome of dam removal that has received positive attention is restored access to historic habitat for sea-run fisheries, providing a crucial gain in ecosystem resilience. But dams also provide stakeholders with valuable services, and uncertain socio-ecological outcomes can arise if there is not careful consideration of the basin-scale trade-offs caused by dam removal. In addition to fisheries, dam removals can significantly affect landscape nutrient flux, municipal water storage, recreational use of lakes and rivers, property values, hydroelectricity generation, the cultural meaning of dams, and many other river-based ecosystem services. We use a production possibility frontiers approach to explore dam decision scenarios and opportunities for trading between ecosystem services that are positively or negatively affected by dam removal in New England. Scenarios that provide efficient trade-off potentials are identified using a multiobjective genetic algorithm. Our results suggest that for many river systems, there is a significant potential to increase the value of fisheries and other ecosystem services with minimal dam removals, and further increases are possible by including decisions related to dam operations and physical modifications. Run-of-river dams located near the head of tide are often found to be optimal for removal due to low hydroelectric capacity and high impact on fisheries. Conversely, dams with large impoundments near a river's headwaters can be less optimal for dam removal because their value as nitrogen sinks often outweighs the potential value for fisheries. Hydropower capacity is negatively impacted by dam removal but there are opportunities to meet or exceed lost capacity by upgrading preserved hydropower dams. Improving fish passage facilities for dams that are critical for safety or water storage can also reduce impacts on fisheries. Our
On the Hierarchy of Functioning Rules in Distributed Computing
Bui , Alain; Bui , Marc; Lavault , Christian
1999-01-01
In previous papers, we used a Markovian model to determine the optimal functioning rules of a distributed system in various settings. Searching for optimal functioning rules amounts to solving an optimization problem under constraints. The hierarchy of solutions arising from the above problem is called the "first order hierarchy", and may possibly yield equivalent solutions. The present paper emphasizes a specific technique for deciding between two equivalent solutions, whic...
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
Zsolnai, László
2011-01-01
The self-centeredness of modern organizations leads to environmental destruction and human deprivation. The principle of responsibility developed by Hans Jonas requires caring for the beings affected by our decisions and actions. Ethical decision-making creates a synthesis of reverence for ethical norms, rationality in goal achievement, and respect for the stakeholders. The maximin rule selects the "least worst alternative" in the multidimensional decision space of deontologica...
Improving the anesthetic process by a fuzzy rule based medical decision system.
Mendez, Juan Albino; Leon, Ana; Marrero, Ayoze; Gonzalez-Cava, Jose M; Reboso, Jose Antonio; Estevez, Jose Ignacio; Gomez-Gonzalez, José F
2018-01-01
The main objective of this research is the design and implementation of a new fuzzy logic tool for automatic drug delivery in patients undergoing general anesthesia. The aim is to adjust the drug dose to the real patient needs using heuristic knowledge provided by clinicians. A two-level computer decision system is proposed. The idea is to release the clinician from routine tasks so that he can focus on other variables of the patient. The controller uses the Bispectral Index (BIS) to assess the hypnotic state of the patient. The fuzzy controller was included in a closed-loop system to reach the BIS target and reject disturbances. BIS was measured using a BIS VISTA monitor, a device capable of calculating the hypnosis level of the patient through EEG information. An infusion pump with propofol 1% is used to supply the drug to the patient. The inputs to the fuzzy inference system are the BIS error and the BIS rate. The output is the infusion rate increment. The mapping from the input information to the appropriate output is given by a rule base built on the knowledge of clinicians. To evaluate the performance of the proposed fuzzy closed-loop system, an observational study was carried out. Eighty-one patients scheduled for ambulatory surgery were randomly distributed into 2 groups: one group using a fuzzy logic based closed-loop system (FCL) to automate the administration of propofol (42 cases); the second group using manual delivery of the drug (39 cases). In both groups, the BIS target was 50. The FCL, designed with intuitive logic rules based on clinician experience, performed satisfactorily and outperformed manual administration in terms of accuracy throughout the maintenance stage.
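The clinical rule base itself is not given in the abstract; the sketch below only shows the generic mechanism: a tiny Mamdani-style system mapping BIS error and BIS rate to an infusion-rate increment, with triangular memberships and weighted-average defuzzification. All membership ranges, rule outputs and units are invented for illustration, not clinical values.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_increment(bis_error, bis_rate):
    """Infusion-rate increment from BIS error (target - measured) and BIS rate.
    Three illustrative rules, defuzzified by weighted average."""
    neg = lambda x: tri(x, -40.0, -20.0, 0.0)
    zero = lambda x: tri(x, -10.0, 0.0, 10.0)
    pos = lambda x: tri(x, 0.0, 20.0, 40.0)
    rules = [  # (firing strength, crisp output in ml/h -- invented values)
        (min(pos(bis_error), zero(bis_rate)), -2.0),  # deeper than target: reduce
        (min(zero(bis_error), zero(bis_rate)), 0.0),  # on target: hold
        (min(neg(bis_error), zero(bis_rate)), 2.0),   # lighter than target: increase
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0.0 else 0.0

print(fuzzy_increment(-20.0, 0.0))  # patient lighter than target: raise the rate
```

A real controller would use many more rules covering combinations of error and rate, but the min/weighted-average pattern is the same.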
Fuzzy Sets-based Control Rules for Terminating Algorithms
Directory of Open Access Journals (Sweden)
Jose L. VERDEGAY
2002-01-01
In this paper some problems arising at the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System which involves some fuzziness is considered, and in that context the questions of managing the fuzziness in some optimisation models, and of using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of the determination, management and use of fuzzy rules, a new algorithm, easy to implement in the Model-Base Management System of any oriented Decision Support System, is shown.
DEFF Research Database (Denmark)
Yousefpour, Rasoul; Didion, Markus; Jacobsen, Jette Bredahl
2015-01-01
We apply Bayesian updating theory to model how decision-makers may gradually learn about climate change and make use of this information in making adaptive forest management decisions. We develop modelling steps to i) simulate observation of a multi-dimensional climate system, ii) apply updating rules for beliefs about climate trends, iii) evaluate the performance of adaptive strategies, and iv) apply (i)–(iii) at the local and forest landscape scale to find and compare individual versus joint adaptive decisions. We search for optimal forest management decisions maximizing total biomass production as a measure of management performance. The results illustrate the benefits of updating beliefs to eventually utilize the positive effects and limit negative impacts of climate change on forest biomass production. We find that adaptive decision-making results in switching decisions over time…
76 FR 24802 - Eliminating the Decision Review Board
2011-05-03
... 0960-AG80 Eliminating the Decision Review Board AGENCY: Social Security Administration. ACTION: Final rules. SUMMARY: We are eliminating the Decision Review Board (DRB) portions of part 405 of our rules...-level process. DSI also eliminated review by the Appeals Council, the final step in our administrative...
A complex systems approach to planning, optimization and decision making for energy networks
International Nuclear Information System (INIS)
Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim
2008-01-01
This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock
Wheeler, D.C.; Burstyn, I.; Vermeulen, R.; Yu, K.; Shortreed, S.M.; Pronk, A.; Stewart, P.A.; Colt, J.S.; Baris, D.; Karagas, M.R.; Schwenn, M.; Johnson, A.; Silverman, D.T.; Friesen, M.C.
2013-01-01
Objectives Evaluating occupational exposures in population-based case-control studies often requires exposure assessors to review each study participant's reported occupational information job-by-job to derive exposure estimates. Although such assessments likely have underlying decision rules, they
Directory of Open Access Journals (Sweden)
Yaolin Liu
Optimizing land-use allocation is important to regional sustainable development, as it promotes the social equality of public services, increases the economic benefits of land-use activities, and reduces the ecological risk of land-use planning. Most land-use optimization models allocate land-use using cell-level operations that fragment land-use patches. These models do not cooperate well with land-use planning knowledge, leading to irrational land-use patterns. This study focuses on building a heuristic land-use allocation model (PSOLA) using particle swarm optimization. The model allocates land-use with patch-level operations to avoid fragmentation. The patch-level operations include a patch-edge operator, a patch-size operator, and a patch-compactness operator that constrain the size and shape of land-use patches. The model is also integrated with knowledge-informed rules to provide auxiliary knowledge of land-use planning during optimization. The knowledge-informed rules consist of suitability, accessibility, land-use policy, and stakeholders' preference. To validate the PSOLA model, a case study was performed in Gaoqiao Town in Zhejiang Province, China. The results demonstrate that the PSOLA model outperforms a basic PSO (particle swarm optimization) in terms of the social, economic, ecological, and overall benefits by 3.60%, 7.10%, 1.53% and 4.06%, respectively, which confirms the effectiveness of our improvements. Furthermore, the model has an open architecture, enabling its extension as a generic tool to support decision making in land-use planning.
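The patch-level operators and knowledge-informed rules are specific to PSOLA, but the particle-swarm core they are built around can be sketched generically. Below is a minimal PSO minimizing a toy sphere objective; the inertia and acceleration coefficients are common textbook defaults, not the paper's settings.

```python
import random

def pso(objective, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer (minimization): standard velocity
    update with inertia plus cognitive (pbest) and social (gbest) pulls."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, optimum at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

In PSOLA a particle would encode a candidate land-use allocation and the objective would aggregate the social, economic, and ecological benefits; the sphere function here is only a stand-in.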
Monahan, Mark; Barton, Pelham; Taylor, Clare J; Roalfe, Andrea K; Hobbs, F D Richard; Cowie, Martin; Davis, Russell; Deeks, Jon; Mant, Jonathan; McCahon, Deborah; McDonagh, Theresa; Sutton, George; Tait, Lynda
2017-08-15
Detection and treatment of heart failure (HF) can improve quality of life and reduce premature mortality. However, symptoms such as breathlessness are common in primary care, have a variety of causes and not all patients require cardiac imaging. In systems where healthcare resources are limited, ensuring those patients who are likely to have HF undergo appropriate and timely investigation is vital. A decision tree was developed to assess the cost-effectiveness of using the MICE (Male, Infarction, Crepitations, Edema) decision rule compared to other diagnostic strategies to identify HF patients presenting to primary care. Data from REFER (REFer for EchocaRdiogram), a HF diagnostic accuracy study, was used to determine which patients received the correct diagnosis decision. The model adopted a UK National Health Service (NHS) perspective. The current recommended National Institute for Health and Care Excellence (NICE) guidelines for identifying patients with HF were the most cost-effective option, with a cost of £4400 per quality adjusted life year (QALY) gained compared to a "do nothing" strategy. That is, patients presenting with symptoms suggestive of HF should be referred straight for echocardiography if they had a history of myocardial infarction or if their NT-proBNP level was ≥400 pg/ml. The MICE rule was more expensive and less effective than the other comparators. Base-case results were robust to sensitivity analyses. This represents the first cost-utility analysis comparing HF diagnostic strategies for symptomatic patients. Current guidelines in England were the most cost-effective option for identifying patients for confirmatory HF diagnosis. The low number of HF with Reduced Ejection Fraction patients (12%) in the REFER patient population limited the benefits of early detection.
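The £4400-per-QALY figure is an incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs gained relative to a comparator. A minimal sketch, with invented per-patient cost and QALY totals chosen only to produce a ratio of that size:

```python
def icer(strategy, comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    delta_cost = strategy["cost"] - comparator["cost"]
    delta_qaly = strategy["qaly"] - comparator["qaly"]
    return delta_cost / delta_qaly

do_nothing = {"cost": 0.0, "qaly": 10.00}   # invented totals per patient
guideline = {"cost": 880.0, "qaly": 10.20}  # invented totals per patient
print(icer(guideline, do_nothing))          # cost in GBP per QALY gained
```

A strategy that is both more expensive and less effective than an alternative, as the MICE rule was here, is said to be dominated and needs no ICER at all.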
Decision criteria in PSA applications
International Nuclear Information System (INIS)
Holmberg, J.E.; Pulkkinen, U.; Rosqvist, T.; Simola, K.
2001-11-01
Along with the adoption of risk-informed decision making principles, the need for formal probabilistic decision rules or criteria has arisen. However, there are many practical and theoretical problems in the application of probabilistic criteria. One has to consider what is the proper way to apply probabilistic rules together with deterministic ones, and how the criteria should be weighted with respect to each other. In this report, we approach the above questions from the decision-theoretic point of view. We give a short review of the most well-known probabilistic criteria and discuss examples of their use. We present a decision analytic framework for evaluating the criteria, and we analyse how the different criteria behave under incompleteness or uncertainty of the PSA model. As the conclusion of our analysis we give recommendations on the application of the criteria in different decision situations. (au)
Business rules formalisation for information systems
Directory of Open Access Journals (Sweden)
Ivana Rábová
2010-01-01
The article deals with the relation between business rules and business applications, and describes a number of structures to support the implementation and customization of information systems. The particular structure formats differ according to the type of business rule. We start from the model of enterprise architecture, which is a significant document of everything that happens in the business, serves as a blueprint, and facilitates managers' decisions. The most complicated part of enterprise architecture is the business rule. Once we obtain its accurate formulation, and once we manage to formalize the business rule and store it in a special repository, we can manage it, update it, and use it for many purposes. The article emphasizes formats of business rule formalization and their relation to the implementation of business applications.
Directory of Open Access Journals (Sweden)
Rui Zhang
2012-01-01
Most existing research on the job shop scheduling problem has been focused on the minimization of makespan (i.e., the completion time of the last job). However, in the fiercely competitive market nowadays, delivery punctuality is more important for maintaining a high service reputation. So in this paper, we aim at solving job shop scheduling problems with the total weighted tardiness objective. Several dispatching rules are adopted in the Giffler-Thompson algorithm for constructing active schedules. It is noticeable that the rule selections for scheduling consecutive operations are not mutually independent but actually interrelated. Under such circumstances, a probabilistic model-building genetic algorithm (PMBGA) is proposed to optimize the sequence of selected rules. First, we use Bayesian networks to model the distribution characteristics of high-quality solutions in the population. Then, the new generation of individuals is produced by sampling the established Bayesian network. Finally, some elitist individuals are further improved by a special local search module based on parameter perturbation. The superiority of the proposed approach is verified by extensive computational experiments and comparisons.
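The Giffler-Thompson construction and the Bayesian-network PMBGA are beyond a few lines, but the effect of the dispatching-rule choice on weighted tardiness can be shown on a single machine. The job data below are invented; SPT (shortest processing time) and EDD (earliest due date) are two of the classic rules such schedulers select among.

```python
def weighted_tardiness(sequence, jobs):
    """Total weighted tardiness of a job sequence on a single machine.
    jobs: name -> (processing_time, due_date, weight)."""
    t = total = 0
    for name in sequence:
        p, due, w = jobs[name]
        t += p                        # completion time of this job
        total += w * max(0, t - due)  # weighted lateness beyond the due date
    return total

jobs = {  # invented instance: name -> (processing time, due date, weight)
    "A": (5, 9, 1),
    "B": (2, 7, 1),
    "C": (3, 3, 2),
}
spt = sorted(jobs, key=lambda j: jobs[j][0])  # shortest processing time first
edd = sorted(jobs, key=lambda j: jobs[j][1])  # earliest due date first
print(weighted_tardiness(spt, jobs), weighted_tardiness(edd, jobs))
```

On this instance EDD (total weighted tardiness 1) beats SPT (5); on other instances the ranking reverses, which is why the interrelated sequence of rule selections is worth optimizing rather than fixing a single rule in advance.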
Designing business rules for mediation : a process towards agent-mediated business coordination
Zhao, Z.; Dignum, M.V.; Dignum, F.P.M.
2008-01-01
Business process integration is a very active research area, in which mediation is one of the fundamental architectural choices. Mediators have difficulty designing mediation services that meet the requirements of the different stakeholders. Business rules play an important role in the decision process of mediation. In this paper, we analyze the role of business rules in the decision process, and use some examples to illustrate how business rules should be designed in order to help the deci...
DEFF Research Database (Denmark)
Andersen, Steffen; Harrison, Glenn W.; Lau, Morten Igel
2014-01-01
The most popular models of decision making use a single criterion to evaluate projects or lotteries. However, decision makers may actually consider multiple criteria when evaluating projects. We consider a dual criteria model from psychology. This model integrates the familiar tradeoffs between...... to the clear role that income thresholds play in such decision making, but does not rule out a role for tradeoffs between risk and utility or probability weighting....
Directory of Open Access Journals (Sweden)
Yanping Huang
A key problem in neuroscience is understanding how the brain makes decisions under uncertainty. Important insights have been gained using tasks such as the random dots motion discrimination task, in which the subject makes decisions based on noisy stimuli. A descriptive model known as the drift diffusion model has previously been used to explain psychometric and reaction time data from such tasks, but to fully explain the data one is forced to make ad-hoc assumptions such as a time-dependent collapsing decision boundary. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We propose an alternative model for decision making based on POMDPs. We show that the motion discrimination task reduces to the problems of (1) computing beliefs (posterior distributions) over the unknown direction and motion strength from noisy observations in a Bayesian manner, and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) is shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to a discrimination decision. We show that the model accounts for both accuracy and reaction time as a function of stimulus strength, as well as different speed-accuracy conditions in the random dots task.
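Step (1), the Bayesian belief computation, can be sketched for a two-direction task: each noisy sample updates the posterior probability of leftward motion, and pairing the belief with a fixed confidence bound gives a simple (non-optimal) decision policy. The POMDP-optimal policy of the paper corresponds to a collapsing, not fixed, bound. Drift, noise, and threshold values below are invented.

```python
import math
import random

def run_trial(drift=0.5, sigma=1.0, threshold=0.95, seed=7):
    """Sequentially update P(direction = "left") by Bayes' rule from noisy
    Gaussian samples, and decide once the belief leaves the open interval
    (1 - threshold, threshold). The true direction is "left" in this sketch."""
    rng = random.Random(seed)
    p_left = 0.5
    steps = 0
    while 1 - threshold < p_left < threshold:
        obs = rng.gauss(drift, sigma)  # sample favors "left" on average
        like_l = math.exp(-(obs - drift) ** 2 / (2 * sigma ** 2))
        like_r = math.exp(-(obs + drift) ** 2 / (2 * sigma ** 2))
        num = p_left * like_l
        p_left = num / (num + (1 - p_left) * like_r)
        steps += 1
    return ("left" if p_left >= threshold else "right"), steps

wins = sum(run_trial(seed=s)[0] == "left" for s in range(50))
print(wins, "of 50 trials decided correctly")
```

Lower drift (weaker stimulus) yields slower, less accurate decisions, reproducing the qualitative accuracy and reaction-time patterns the abstract describes.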
How Politics Shapes the Growth of Rules
DEFF Research Database (Denmark)
Jakobsen, Mads Leth Felsager; Mortensen, Peter Bjerre
2015-01-01
This article examines the impact of politics on governmental rule production. Traditionally, explanations of rule dynamics have focused on nonpolitical factors such as the self-evolvement of rules, environmental factors, and decision maker attributes. This article develops a set of hypotheses about when, why, and how political factors shape changes in the stock of rules. Furthermore, we test these hypotheses on a unique, new data set based on all Danish primary legislation and administrative rules from 1989 to 2011 categorized into 20 different policy domains. The analysis shows that the traditional Weberian “rules breed rules” explanations must be supplemented with political explanations that take party ideology and changes in the political agenda into account. Moreover, the effect of political factors is indistinguishable across changes in primary laws and changes in administrative rules.
78 FR 36434 - Revisions to Rules of Practice
2013-06-18
... federal holidays, make grammatical corrections, and remove the reference to part-day holidays. Rule 3001... section, the following categories of persons are designated ``decision-making personnel'': (i) The.... The following categories of person are designated ``non-decision-making personnel'': (i) All...
The Bayesian statistical decision theory applied to the optimization of generating set maintenance
International Nuclear Information System (INIS)
Procaccia, H.; Cordier, R.; Muller, S.
1994-11-01
The difficulty in RCM methodology is the allocation of a new periodicity of preventive maintenance for a piece of equipment when a critical failure has been identified: until now, this new allocation has been based on the engineer's judgment, and one must wait for a full cycle of feedback experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs
Using fuzzy rule-based knowledge model for optimum plating conditions search
Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.
2018-03-01
The paper discusses existing approaches to plating process modeling aimed at reducing the unevenness of the plated coating thickness distribution. However, these approaches do not take into account the experience, knowledge, and intuition of the decision-makers when searching for the optimal conditions of the electroplating technological process. An original approach to the search for optimal conditions for applying electroplated coatings, which uses a rule-based model of knowledge and allows one to reduce the uneven product thickness distribution, is proposed. The block diagrams of a conventional control system of a galvanic process as well as a system based on the production model of knowledge are considered. It is shown that the fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.
Study on optimized decision-making model of offshore wind power projects investment
Zhao, Tian; Yang, Shangdong; Gao, Guowei; Ma, Li
2018-02-01
China’s offshore wind energy has great potential and plays an important role in promoting the adjustment of China’s energy structure. However, the current development of offshore wind power in China is inadequate and lags far behind that of onshore wind power. On the basis of considering all kinds of risks faced by offshore wind power development, an optimized offshore wind power investment decision model is established in this paper by proposing a risk-benefit assessment method. To prove the practicability of this method in improving the selection of wind power projects, Python programming is used to simulate the investment analysis of a large number of projects. The paper thereby aims to provide decision-making support for the sound development of the offshore wind power industry.
Schiebener, Johannes; Brand, Matthias
2017-06-01
Previous literature has explained older individuals' disadvantageous decision-making under ambiguity in the Iowa Gambling Task (IGT) by reduced emotional warning signals preceding decisions. We argue that age-related reductions in IGT performance may also be explained by reductions in certain cognitive abilities (reasoning, executive functions). In 210 participants (18-86 years), we found that the age-related variance in IGT performance occurred only in the last 60 trials. The effect was mediated by cognitive abilities and their relation to decision-making performance under risk with explicit rules (Game of Dice Task). Thus, reductions in cognitive functions in older age may be associated both with a reduced ability to gain explicit insight into the rules of the ambiguous decision situation and with a failure to consistently choose the less risky options after the rules have been understood explicitly. Previous literature may have underestimated the relevance of cognitive functions for age-related decline in decision-making performance under ambiguity.
Simulation of Optimal Decision-Making Under the Impacts of Climate Change.
Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl
2017-07-01
Climate change transforms the conditions of existing agricultural practices, requiring farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns, and the updating of beliefs among farmers in a developing country when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data are based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5), representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) holds a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows for simulating an adjustment of such beliefs when improved information becomes available. The framework can therefore help facilitate the optimal choice between agricultural systems considering the influence of climate change.
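The updating scheme the abstract describes amounts to repeated Bayesian revision of beliefs over the candidate systems as each season's climate is realised. The likelihood table and season sequence below are invented for illustration, not estimated from the Ghana case study:

```python
# Assumed likelihoods p(observed season | system is the most profitable);
# the numbers are hypothetical, chosen only to make the mechanics visible.
likelihood = {
    "dryland":   {"dry": 0.6, "wet": 0.4},
    "irrigated": {"dry": 0.3, "wet": 0.7},
    "livestock": {"dry": 0.5, "wet": 0.5},
}

belief = {system: 1 / 3 for system in likelihood}   # uninformative prior

def update(belief, season):
    """Revise the farmer's beliefs after observing one season's climate."""
    post = {s: belief[s] * likelihood[s][season] for s in belief}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# A realised trajectory of mostly wet seasons shifts belief toward the
# system assumed to profit most under wet conditions.
for season in ["wet", "wet", "dry", "wet"]:
    belief = update(belief, season)

preferred = max(belief, key=belief.get)
```

In the paper the "observations" are climate-model trajectories and the payoffs come from net revenue functions; this sketch keeps only the belief-revision skeleton.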
Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica
2012-05-30
The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of Elman networks has been demonstrated. The developed methods allow a simple yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
Detection of Stator Winding Fault in Induction Motor Using Fuzzy Logic with Optimal Rules
Directory of Open Access Journals (Sweden)
Hamid Fekri Azgomi
2013-04-01
Induction motors are critical components in many industrial processes. Therefore, swift, precise and reliable monitoring and fault detection systems are required to prevent any further damage. The online monitoring of induction motors has become increasingly important. The main difficulty in this task is the lack of an accurate analytical model to describe a faulty motor. A fuzzy logic approach may help to diagnose traction motor faults. This paper presents a simple method for the detection of stator winding faults (which make up 38% of induction motor failures) based on monitoring the line/terminal current amplitudes. In this method, fuzzy logic is used to make decisions about the stator motor condition. In fact, fuzzy logic is reminiscent of human thinking processes and natural language, enabling decisions to be made based on vague information. The motor condition is described using linguistic variables. Fuzzy subsets and the corresponding membership functions describe the stator current amplitudes. A knowledge base, comprising rule and data bases, is built to support the fuzzy inference. Simulation results are presented to verify the accuracy of the motor fault detection and the feasibility of knowledge extraction. The preliminary results show that the proposed fuzzy approach can be used for accurate stator fault diagnosis.
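A minimal sketch of the fuzzy classification idea: triangular membership functions over a current-unbalance measure feed a tiny rule base, and the condition with the strongest firing rule wins. The breakpoints, the unbalance feature, and the linguistic labels are assumptions for illustration, not the paper's tuned system:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def stator_condition(unbalance):
    """Classify motor state from per-unit stator current unbalance.

    Membership breakpoints are illustrative, not taken from the paper.
    """
    low  = tri(unbalance, -0.1, 0.0, 0.1)     # balanced currents
    med  = tri(unbalance,  0.05, 0.15, 0.25)  # moderate unbalance
    high = tri(unbalance,  0.2,  0.4,  1.0)   # strong unbalance
    # Rule base: each antecedent maps directly to one linguistic conclusion;
    # the max-scoring rule decides (a crude stand-in for full defuzzification).
    scores = {"healthy": low, "incipient fault": med, "winding fault": high}
    return max(scores, key=scores.get)
```

A production system would aggregate several inputs (all three line currents) and defuzzify properly; this only shows the membership-plus-rules shape of the approach.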
International Nuclear Information System (INIS)
Knox, E.G.; Stewart, A.M.; Kneale, G.W.; Gilman, E.A.
1987-01-01
The authors argue against R.H. Mole's paper (Lancet, Dec. 12 1987), supporting the relaxation of ICRP recommendations and the DHSS decision to withdraw the 10 day rule in relation to diagnostic radiography for menstruating women, and draw attention to the recent refinement of estimates of the enhanced risk of childhood cancers, following diagnostic radiography during pregnancy. (U.K.)
Investment Decisions and Depreciation Choices under a Discretionary Tax Depreciation Rule
Wielhouwer, Jacco L.; Wiersma, E.
2017-01-01
Prior studies have shown limited impact of the US bonus depreciation rules on firm investments during economic downturns. In this article we study the effects of a set of more flexible rules – discretionary tax depreciation (DTD) – introduced in the Netherlands during the 2009–2011 economic crisis.
International Nuclear Information System (INIS)
Kim, Han Gon
1993-02-01
In pressurized water reactors, the fuel reloading problem has significant meaning in terms of both safety and economics. The general problem of in-core fuel management for a PWR therefore consists of determining the fuel reloading policy for each cycle that minimizes unit energy cost under the constraints imposed on various core parameters, e.g., the local power peaking factor and assembly burnup. Equivalently, the cycle length is maximized for a given energy cost under the various constraints. Existing optimization methods do not ensure the globally optimal solution because of the essential limitations of their search algorithms; they only find near-optimal solutions. To overcome this limitation, a hybrid artificial neural network system is developed for optimal fuel loading pattern design using a fuzzy rule-based system and artificial neural networks. This system finds patterns for which Pmax is lower than the predetermined value and Keff is larger than the reference value. Back-propagation networks are developed to predict PWR core parameters. The reference PWR is a typical 121-assembly PWR. The local power peaking factor and the effective multiplication factor at BOC condition are predicted. To obtain target values of these two parameters, the QCC code is used. Using this code, 1000 training patterns are obtained randomly. Two networks are constructed, one for Pmax and another for Keff. Both networks have 21 input layer neurons and 18 output layer neurons, with 120 and 393 hidden layer neurons, respectively. A new learning algorithm, called the advanced adaptive learning algorithm, is proposed. The weight change step size of this algorithm is optimally varied in inverse proportion to the average difference between the actual output value and the ideal target value. This algorithm greatly enhances the convergence speed of a BPN. In the case of Pmax prediction, 98% of the untrained patterns are predicted within 6% error, and in case
International Nuclear Information System (INIS)
Gvillo, D.; Ragheb, M.; Parker, M.; Swartz, S.
1987-01-01
A Production-Rule Analysis System is developed for Nuclear Plant Monitoring. The signals generated by the Zion-1 Plant are considered. A Situation-Assessment and Decision-Aid capability is provided for monitoring the integrity of the Plant Radiation, the Reactor Coolant, the Fuel Clad, and the Containment Systems. A total of 41 signals are currently fed as facts to an Inference Engine functioning in the backward-chaining mode and built along the same structure as the E-Mycin system. The Goal-Tree constituting the Knowledge Base was generated using a representation in the form of Fault Trees deduced from plant procedures information. The system is constructed in support of the Data Analysis and Emergency Preparedness tasks at the Illinois Radiological Emergency Assessment Center (REAC)
Gvillo, D.; Ragheb, M.; Parker, M.; Swartz, S.
1987-05-01
A Production-Rule Analysis System is developed for Nuclear Plant Monitoring. The signals generated by the Zion-1 Plant are considered. A Situation-Assessment and Decision-Aid capability is provided for monitoring the integrity of the Plant Radiation, the Reactor Coolant, the Fuel Clad, and the Containment Systems. A total of 41 signals are currently fed as facts to an Inference Engine functioning in the backward-chaining mode and built along the same structure as the E-Mycin system. The Goal-Tree constituting the Knowledge Base was generated using a representation in the form of Fault Trees deduced from plant procedures information. The system is constructed in support of the Data Analysis and Emergency Preparedness tasks at the Illinois Radiological Emergency Assessment Center (REAC).
A Decision Support Framework for Automated Screening of Diabetic Retinopathy
Directory of Open Access Journals (Sweden)
2006-01-01
The early signs of diabetic retinopathy (DR) are depicted by microaneurysms among other signs. A prompt diagnosis when the disease is at an early stage can help prevent irreversible damage to the diabetic eye. In this paper, we propose a decision support system (DSS) for automated screening of early signs of diabetic retinopathy. Classification schemes for deducing the presence or absence of DR are developed and tested. The detection rule is based on a binary hypothesis-testing problem, which simplifies the problem to yes/no decisions. An analysis of the performance of the Bayes optimality criteria applied to DR is also presented. The proposed DSS is evaluated on real-world data. The results suggest that by biasing the classifier towards DR detection, it is possible to make the classifier achieve good sensitivity.
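The binary-hypothesis Bayes rule reduces to a likelihood-ratio threshold, and "biasing the classifier towards DR detection" corresponds to raising the cost of a missed case. The prior and cost values below are illustrative assumptions, not the paper's estimates:

```python
def bayes_decide(likelihood_ratio, prior_dr=0.1, cost_miss=10.0, cost_fa=1.0):
    """Binary-hypothesis Bayes rule: decide DR when the likelihood ratio
    p(x | DR) / p(x | no DR) exceeds the Bayes threshold.

    Raising cost_miss lowers the threshold, biasing the decision toward
    DR detection (higher sensitivity at the price of more false alarms).
    """
    threshold = (cost_fa * (1 - prior_dr)) / (cost_miss * prior_dr)
    return "DR" if likelihood_ratio > threshold else "no DR"
```

For example, with the defaults the threshold is 0.9, so even evidence that only weakly favors DR triggers a referral; with equal costs the threshold rises to 9 and the same evidence is dismissed.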
Weijermars, R.; Taylor, P.; Bahn, O.; Das, S.R.; Wei, Y.M.
2011-01-01
Organizational behavior and stakeholder processes continually influence energy strategy choices and decisions. Although theoretical optimizations can provide guidance for energy mix decisions from a pure physical systems engineering point of view, these solutions might not be optimal from a
Directory of Open Access Journals (Sweden)
Yuttapong Pleumpirom
2012-01-01
The purpose of this paper is to develop a multiobjective optimization model to evaluate suppliers for aircraft maintenance tasks, using goal programming. The authors have developed a two-step process. The model is firstly used as a decision-support tool for managing demand, using aircraft and flight schedules to evaluate and generate aircraft-maintenance requirements, including spare-part lists. Secondly, they develop a multiobjective optimization model by minimizing cost, minimizing lead time, and maximizing quality under various constraints. Finally, the model is implemented in an actual airline case.
Optimal Decision Making Framework of an Electric Vehicle Aggregator in Future and Pool markets
DEFF Research Database (Denmark)
Rashidizadeh-Kermani, Homa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad
2018-01-01
An electric vehicle (EV) aggregator, as an agent between power producers and EV owners, participates in the future and pool markets to supply the EVs' requirements. Because of the uncertain nature of pool prices and EVs' behavior, this paper proposes a two-stage scenario-based model to obtain optimal decision making of an EV aggregator. To deal with the mentioned uncertainties, the aggregator's risk aversion is modeled using the conditional value at risk (CVaR) method. The proposed two-stage risk-constrained decision-making problem is applied to maximize the EV aggregator's expected profit in an uncertain environment. The aggregator can participate in the future and pool markets to buy the required energy of EVs and offer optimal charge/discharge prices to the EV owners. In this model, in order to assess the effects of EV owners' reaction to the aggregator's offered prices on the purchases from electricity markets, a sensitivity analysis over the risk factor is performed. The numerical results demonstrate that with the application of the proposed model, the aggregator can supply EVs with lower purchases from markets.
International Nuclear Information System (INIS)
Lu, Zhijian; Shao, Shuai
2016-01-01
Highlights: • An ESCO optimal decision model considering governmental subsidies is proposed. • Optimal price and performance level are deduced via a two-stage model. • Demand, profit, and performance level increase with increasing subsidies. • ESCO’s market strategy should firstly focus on high energy consumption industries. • Governmental subsidies standard in different industries should be differentiated. - Abstract: Government subsidies generally play a crucial role in pricing and the choice of performance levels in Energy Performance Contracting (EPC). However, the existing studies pay little attention to how the Energy Service Company (ESCO) prices and chooses performance levels for EPC with government subsidies. To fill such a gap, we propose a joint optimal decision model of pricing and performance level in EPC considering government subsidies. The optimization of the model is achieved via a two-stage process. At the first stage, given a performance level, ESCOs choose the best price; and at the second stage, ESCOs choose the optimal performance level for the optimal price. Furthermore, we carry out a numerical analysis to illuminate such an optimal decision mechanism. The results show that both price sensitivity and performance level sensitivity have significant effects on the choice of performance levels with government subsidies. Government subsidies can induce higher performance levels of EPC, the demand for EPC, and the profit of ESCO. We suggest that ESCO’s market strategy should firstly focus on high energy consumption industries with government subsidies and that government subsidies standard adopted in different industries should be differentiated according to the market characteristics and energy efficiency levels of various industries.
CSIR Research Space (South Africa)
Greeff, M
2010-09-01
Decision making - with the goal of finding the optimal solution - is an important part of modern life. For example: In the control room of an airport, the goals or objectives are to minimise the risk of airplanes colliding, minimise the time that a...
Optimal Modeling of Wireless LANs: A Decision-Making Multiobjective Approach
Directory of Open Access Journals (Sweden)
Tomás de Jesús Mateo Sanguino
2018-01-01
Communication infrastructure planning is a critical design task that typically requires handling complex networking concepts aimed at optimizing performance and resources, thus demanding high analytical and problem-solving skills from engineers. To reduce this gap, this paper describes an optimization algorithm, based on an evolutionary strategy, created as an aid for decision-making prior to the real deployment of wireless LANs. The developed algorithm automates the design process, traditionally done by hand by network technicians, in order to save time and cost by improving the WLAN arrangement. To this end, we implemented a multiobjective genetic algorithm (MOGA) with the purpose of meeting two simultaneous design objectives, namely, to minimize the number of APs while maximizing the coverage signal over the whole planning area. Such an approach provides efficient and scalable solutions close to the best network design, so we integrated the developed algorithm into an engineering tool for modelling the behavior of WLANs in ICT infrastructures. Called WiFiSim, it allows the investigation of various complex issues concerning the design of IEEE 802.11-based WLANs, thereby facilitating the study, design, and optimal deployment of wireless LANs through complete modelling software. As a result, we comparatively evaluated three target applications on small, medium, and large scenarios against a previously developed mono-objective genetic algorithm.
Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica
2016-09-01
Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision-making process. This paper presents a procedure for choosing the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on MCDM methods. Two methods of multiple attribute decision making, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies, and the contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management strategies, which were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. The proposed strategies were then ranked in the form of tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
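The two ranking methods the paper applies can be sketched compactly. The decision matrix and weights below are hypothetical (not the IWM2 outputs), and for brevity all criteria are treated as benefit criteria:

```python
import math

def saw(matrix, weights):
    """Simple additive weighting: normalize each (benefit) criterion by its
    column maximum, then take the weighted sum per alternative."""
    cols = list(zip(*matrix))
    norm_cols = [[v / max(c) for v in c] for c in cols]
    return [sum(w * v for w, v in zip(weights, row))
            for row in zip(*norm_cols)]

def topsis(matrix, weights):
    """Relative closeness to the ideal solution (benefit criteria only)."""
    cols = list(zip(*matrix))
    norm_cols = [[v / math.sqrt(sum(x * x for x in c)) for v in c]
                 for c in cols]
    wcols = [[w * v for v in c] for w, c in zip(weights, norm_cols)]
    ideal = [max(c) for c in wcols]   # positive ideal per criterion
    nadir = [min(c) for c in wcols]   # negative ideal per criterion
    closeness = []
    for row in zip(*wcols):
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, nadir)
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness

# Three hypothetical strategies scored on two benefit criteria.
m = [[0.7, 0.9], [0.8, 0.6], [0.5, 0.5]]
w = [0.5, 0.5]
saw_scores, topsis_scores = saw(m, w), topsis(m, w)
```

A real application would mix cost and benefit criteria (inverting the normalization for costs), but the agreement between the two rankings, as reported in the paper, can already be checked on toy data like this.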
International Nuclear Information System (INIS)
Chamseddine, Abbas; Theilliol, Didier; Sadeghzadeh, Iman; Zhang, Youmin; Weber, Philippe
2014-01-01
This paper addresses the problem of optimal reliability in over-actuated systems. Overloading an actuator decreases its overall lifetime and reduces its average performance over a long time. Therefore, performance and reliability are two conflicting requirements. While appropriate reliability is related to average loads, good performance is related to fast response and sufficient loads generated by actuators. Actuator redundancy allows us to address both performance and reliability at the same time by properly allocating desired loads among redundant actuators. The main contribution of this paper is the on-line optimization of the overall plant reliability according to performance objective using an MIT (Massachusetts Institute of Technology) rule-based method. The effectiveness of the proposed method is illustrated through an experimental application to an octocopter helicopter testbed
Minimizing size of decision trees for multi-label decision tables
Azad, Mohammad
2014-09-29
We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study different greedy and dynamic programming algorithms to minimize the size of the decision trees. When we compare against the optimal result from the dynamic programming algorithm, we find that some greedy algorithms produce results close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).
Minimizing size of decision trees for multi-label decision tables
Azad, Mohammad; Moshkov, Mikhail
2014-01-01
We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set attached to a row. The size of the decision tree can be small as well as very large. We study different greedy and dynamic programming algorithms to minimize the size of the decision trees. When we compare against the optimal result from the dynamic programming algorithm, we find that some greedy algorithms produce results close to optimal for the minimization of the number of nodes (at most 18.92% difference), the number of nonterminal nodes (at most 20.76% difference), and the number of terminal nodes (at most 18.71% difference).
Hard and soft sub-time-optimal controllers for a mechanical system with uncertain mass
DEFF Research Database (Denmark)
Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.
2004-01-01
An essential limitation in using the classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: hard and soft ones. The hard structure is defined by parameters selected in accordance with the rules of statistical decision theory; the soft structure additionally allows the elimination of rapid changes in control values. The object is a basic mechanical system with uncertain (also non-stationary) mass treated as a stochastic process. The methodology proposed here is of a universal nature and may easily be applied to other elements of uncertainty in time-optimal controlled mechanical systems.
Hard and soft Sub-Time-Optimal Controllers for a Mechanical System with Uncertain Mass
DEFF Research Database (Denmark)
Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.
2005-01-01
An essential limitation in using the classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: hard and soft ones. The hard structure is defined by parameters selected in accordance with the rules of statistical decision theory; the soft structure additionally allows the elimination of rapid changes in control values. The object is a basic mechanical system with uncertain (also non-stationary) mass treated as a stochastic process. The methodology proposed here is of a universal nature and may easily be applied to other elements of uncertainty in time-optimal controlled mechanical systems.
Decision-making methodology of optimal shielding materials by using fuzzy linear programming
International Nuclear Information System (INIS)
Kanai, Y.; Miura, T.; Hirao, Y.
2000-01-01
The main purpose of our studies is to select materials and determine the ratio of constituent materials as the first stage of optimum shielding design to suit the individual requirements of nuclear reactors, reprocessing facilities, casks for shipping spent fuel, etc. The parameters of the shield optimization are cost, space, weight, and shielding properties such as activation rates for individual irradiation and cooling times, and the total dose rate for neutrons (including secondary gamma rays) and for primary gamma rays. Using conventional two-valued (i.e., crisp) logic approaches, huge combinatorial calculations are needed to identify suitable materials for an optimum shielding design; moreover, re-computation is required for even minor changes, as the approach does not react sensitively to the computation result. In the present approach, using a fuzzy linear programming method, much of the decision-making toward a satisfying solution can take place in a fuzzy environment, and it can quickly and easily provide a guiding principle for the optimal selection of shielding materials under the above-mentioned conditions. The possibility of reducing radiation effects by optimizing the ratio of constituent materials is investigated. (author)
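One common way to cast such a problem as a fuzzy linear program is Zimmermann's max-min formulation: each goal (cost, dose rate, ...) gets a linear satisfaction degree, and the mix that maximizes the worst degree is chosen. The two-material setup, all property values, and the membership bounds below are invented for illustration; they are not from the paper:

```python
def satisfaction(value, good, bad):
    """Linear fuzzy membership: 1 if value <= good, 0 if value >= bad
    (smaller-is-better goals such as cost or dose rate)."""
    if value <= good:
        return 1.0
    if value >= bad:
        return 0.0
    return (bad - value) / (bad - good)

def properties(x):
    """Hypothetical two-material shield: properties vary linearly with the
    fraction x of material A (cheap) vs. material B (better shielding)."""
    cost = 40 * x + 80 * (1 - x)
    dose_rate = 20 * x + 5 * (1 - x)
    return cost, dose_rate

def best_mix(steps=1000):
    """Zimmermann-style max-min: maximize the worst satisfaction degree
    over all goals by a simple grid search on the mix ratio."""
    best_x, best_mu = 0.0, -1.0
    for i in range(steps + 1):
        x = i / steps
        cost, dose = properties(x)
        mu = min(satisfaction(cost, 50, 80), satisfaction(dose, 8, 20))
        if mu > best_mu:
            best_x, best_mu = x, mu
    return best_x, best_mu
```

With linear goals the same optimum can be found exactly by an LP solver; the grid search just keeps the sketch dependency-free.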
Optimizing human-system interface automation design based on a skill-rule-knowledge framework
International Nuclear Information System (INIS)
Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.
2010-01-01
This study considers the technological change that has occurred in complex systems within the past 30 years, and the role of human operators in controlling and interacting with complex systems following that change. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. Human-automation interaction can differ in its types and levels, and a system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve an appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study presented in this paper proposes a systematic framework to help make appropriate decisions on types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. The evaluation results show that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. For preventing the occurrence of human errors and ensuring safety in the ACR, the proposed framework can be valuable for making decisions on human-automation allocation.
The theory of optimal taxation
DEFF Research Database (Denmark)
Sørensen, Peter Birch
2007-01-01
The paper discusses the implications of optimal tax theory for the debates on uniform commodity taxation and neutral capital income taxation. While strong administrative and political economy arguments in favor of uniform and neutral taxation remain, recent advances in optimal tax theory suggest...... that the information needed to implement the differentiated taxation prescribed by optimal tax theory may be easier to obtain than previously believed. The paper also points to the strong similarity between optimal commodity tax rules and the rules for optimal source-based capital income taxation...
International Nuclear Information System (INIS)
Feng, Yongqiang; Hung, TzuChen; Zhang, Yaning; Li, Bingxi; Yang, Jinfu; Shi, Yang
2015-01-01
Based on thermoeconomic multi-objective optimization and decision making, considering both exergy efficiency and LEC (levelized energy cost), the performance of low-grade ORCs (organic Rankine cycles) using R245fa, pentane, and their mixtures has been compared. The effects of the mass fraction of R245fa and four key parameters on the exergy efficiency and LEC are examined. The Pareto-optimal solutions are selected from the Pareto frontier obtained by the NSGA-II algorithm using three decision-making methods: Shannon entropy, LINMAP, and TOPSIS. A deviation index is introduced to evaluate the different decision-making methods. The research demonstrates that as the mass fraction of R245fa increases, the exergy efficiency first decreases and then increases, while LEC shows the reverse trend. The optimum values from the TOPSIS decision making are selected as the preferred Pareto-optimal solution because of its lowest deviation index. The Pareto-optimal solutions for pentane, R245fa, and 0.5pentane/0.5R245fa in pairs of (exergy efficiency, LEC) are (0.5425, 0.104), (0.5502, 0.111), and (0.5212, 0.108), respectively. Under the Pareto optimization, the mixture working fluids show lower thermodynamic performance and moderate economic performance compared with the pure working fluids. - Highlights: • The thermoeconomic comparison between pure and mixture working fluids is investigated. • The Pareto-optimal solutions of the bi-objective problem are obtained using three decision-making methods. • The optimum values from the TOPSIS decision making are selected as the preferred Pareto-optimal solution. • The mixture yields lower thermodynamic performance and moderate economic performance.
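The TOPSIS step named in this abstract can be sketched as follows: vector-normalize the criteria, weight them, and rank alternatives by relative closeness to the ideal point. The (exergy efficiency, LEC) pairs are the values quoted above; the equal weights are an assumption for illustration, so this toy ranking need not reproduce the paper's full analysis:

```python
import math

def topsis(alternatives, weights, benefit):
    """alternatives: {name: [criterion values]}; benefit[j] is True if
    criterion j is to be maximized. Returns closeness scores in [0, 1]."""
    names = list(alternatives)
    m = [alternatives[n] for n in names]
    ncrit = len(weights)
    # Vector-normalize each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in m)) for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in m]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = {}
    for name, row in zip(names, v):
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores[name] = d_worst / (d_best + d_worst)
    return scores

# (exergy efficiency, LEC) pairs quoted in the abstract; efficiency is
# a benefit criterion, LEC a cost criterion; equal weights assumed.
pareto = {"pentane": [0.5425, 0.104],
          "R245fa": [0.5502, 0.111],
          "0.5pentane/0.5R245fa": [0.5212, 0.108]}
scores = topsis(pareto, weights=[0.5, 0.5], benefit=[True, False])
print(max(scores, key=scores.get))
```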
Code-specific learning rules improve action selection by populations of spiking neurons.
Friedrich, Johannes; Urbanczik, Robert; Senn, Walter
2014-08-01
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. Unlike standard reinforcement learning rules, the suggested learning rules also become faster with increasing population size. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to the classical weight- or node-perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.
Sleep-dependent modulation of affectively guided decision-making.
Pace-Schott, Edward F; Nave, Genevieve; Morgan, Alexandra; Spencer, Rebecca M C
2012-02-01
A question of great interest in current sleep research is whether and how sleep might facilitate complex cognitive skills such as decision-making. The Iowa Gambling Task (IGT) was used to investigate effects of sleep on affect-guided decision-making. After a brief standardized preview of the IGT that was insufficient to learn its underlying rule, participants underwent a 12-h delay containing either a normal night's sleep (Sleep group; N = 28) or continuous daytime wake (Wake group; N = 26). Following the delay, both groups performed the full IGT. To control for circadian effects, two additional groups performed both the preview and the full task either in the morning (N = 17) or the evening (N = 21). In the IGT, four decks of cards were presented. Draws from two 'advantageous decks' yielded low play-money rewards, occasional low losses and, over multiple draws, a net gain. Draws from 'disadvantageous' decks yielded high rewards, occasional high losses and, over multiple draws, a net loss. Participants were instructed to win and avoid losing as much as possible, and better performance was defined as more advantageous draws. Relative to the wake group, the sleep group showed both superior behavioral outcome (more advantageous draws) and superior rule understanding (blindly judged from statements written at task completion). Neither measure differentiated the two control groups. These results illustrate a role of sleep in optimizing decision-making, a benefit that may be brought about by changes in underlying emotional or cognitive processes. © 2011 European Sleep Research Society.
Modeling reproductive decisions with simple heuristics
Directory of Open Access Journals (Sweden)
Peter Todd
2013-10-01
Full Text Available BACKGROUND Many of the reproductive decisions that humans make happen without much planning or forethought, arising instead through the use of simple choice rules or heuristics that involve relatively little information and processing. Nonetheless, these heuristic-guided decisions are typically beneficial, owing to humans' ecological rationality - the evolved fit between our constrained decision mechanisms and the adaptive problems we face. OBJECTIVE This paper reviews research on the ecological rationality of human decision making in the domain of reproduction, showing how fertility-related decisions are commonly made using various simple heuristics matched to the structure of the environment in which they are applied, rather than being made with information-hungry mechanisms based on optimization or rational economic choice. METHODS First, heuristics for sequential mate search are covered; these heuristics determine when to stop the process of mate search by deciding that a good-enough mate who is also mutually interested has been found, using a process of aspiration-level setting and assessing. These models are tested via computer simulation and comparison to demographic age-at-first-marriage data. Next, a heuristic process of feature-based mate comparison and choice is discussed, in which mate choices are determined by a simple process of feature-matching with relaxing standards over time. Parental investment heuristics used to divide resources among offspring are summarized. Finally, methods for testing the use of such mate choice heuristics in a specific population over time are then described.
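The aspiration-level setting-and-assessing process described above can be sketched as a satisficing search: assess an initial sample of candidates to set the aspiration, then accept the first later candidate who meets it. A minimal sketch with a hypothetical uniform quality distribution; the paper's models are richer (mutual choice, relaxing standards over time):

```python
import random

def aspiration_search(qualities, learning_phase):
    """Assess the first `learning_phase` candidates to set an
    aspiration level, then accept the first later candidate whose
    quality meets it (falling back to the last candidate seen)."""
    aspiration = max(qualities[:learning_phase])
    for q in qualities[learning_phase:]:
        if q >= aspiration:
            return q
    return qualities[-1]

random.seed(1)
# 2000 simulated searches, 100 candidates each, uniform quality.
trials = [[random.random() for _ in range(100)] for _ in range(2000)]
avg = sum(aspiration_search(t, 12) for t in trials) / len(trials)
print(round(avg, 2))  # mean quality of the accepted candidate
```

Despite using almost no information or processing, the heuristic's average accepted quality lands far above the population mean of 0.5, which is the "ecological rationality" point the abstract makes.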
International Nuclear Information System (INIS)
Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei
2016-01-01
Abstract: A robust optimization model is applied to analyze an enterprise's investment-portfolio decisions for collaborative innovation under risk constraints. Mathematical derivation and simulation analysis show that an enterprise's investment in collaborative innovation has a fairly pronounced robustness effect. In collaborative innovation, investment return coexists with investment risk. Under risk constraints, the robust optimization method can solve for the minimum risk, as well as the proportion of each investment scheme in the portfolio, for different target returns on investment. On this basis, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.
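The "minimum risk for a given target return" computation described here can be sketched as a small mean-variance search: enumerate portfolio weights on a grid and keep the lowest-variance mix that meets the return floor. The expected returns and covariances below are hypothetical placeholders, not the paper's data:

```python
# Grid-search sketch of min-risk portfolio selection subject to a
# target-return constraint, over three hypothetical innovation projects.

returns = [0.08, 0.12, 0.20]            # expected returns (hypothetical)
cov = [[0.010, 0.002, 0.001],           # covariance of returns (hypothetical)
       [0.002, 0.030, 0.004],
       [0.001, 0.004, 0.090]]

def variance(w):
    return sum(w[i] * cov[i][j] * w[j]
               for i in range(3) for j in range(3))

def min_risk(target, step=0.01):
    """Lowest portfolio variance whose expected return meets `target`."""
    best = None
    steps = int(round(1 / step))
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            w = (i * step, j * step, 1 - (i + j) * step)
            if sum(w[k] * returns[k] for k in range(3)) >= target:
                v = variance(w)
                if best is None or v < best[0]:
                    best = (v, w)
    return best

risk, weights = min_risk(0.12)
print(weights, round(risk, 4))
```

Sweeping `target` over a range of values traces out the risk/return frontier the enterprise can use to balance the two.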
Decision-Making Approach to Selecting Optimal Platform of Service Variants
Directory of Open Access Journals (Sweden)
Vladimir Modrak
2016-01-01
Full Text Available Nowadays, it is anticipated that service-sector companies will be inspired to follow the mass customization trends of the industrial sector. However, services are more abstract than products, and therefore concepts for mass customization in the manufacturing domain cannot be transferred without methodical changes. This paper focuses on the development of a methodological framework to support decisions in selecting an optimal platform of service variants when compatibility problems between service options occur. The approach is based on mutual relations between waste and constrained design space entropy. For this purpose, software for quantifying constrained and waste design space is developed. The practicability of the methodology is demonstrated on a realistic case.
Cao, Qi
2016-01-01
Treatment selection based on average effects observed in an entire target population masks variation among patients (heterogeneity) and may result in less-than-optimal decision making. Personalized medicine is a new and complex concept, which aims to improve health by offering more tailored and
Floresco, Stan B; Montes, David R; Tse, Maric M T; van Holstein, Mieke
2018-02-21
The nucleus accumbens (NAc) is a key node within corticolimbic circuitry for guiding action selection and cost/benefit decision making in situations involving reward uncertainty. Preclinical studies have typically assessed risk/reward decision making using assays where decisions are guided by internally generated representations of choice-outcome contingencies. Yet, real-life decisions are often influenced by external stimuli that inform about likelihoods of obtaining rewards. How different subregions of the NAc mediate decision making in such situations is unclear. Here, we used a novel assay colloquially termed the "Blackjack" task that models these types of situations. Male Long-Evans rats were trained to choose between one lever that always delivered a one-pellet reward and another that delivered four pellets with different probabilities [either 50% (good-odds) or 12.5% (poor-odds)], which were signaled by one of two auditory cues. Under control conditions, rats selected the large/risky option more often on good-odds versus poor-odds trials. Inactivation of the NAc core caused indiscriminate choice patterns. In contrast, NAc shell inactivation increased risky choice, more prominently on poor-odds trials. Additional experiments revealed that both subregions contribute to auditory conditional discrimination. NAc core or shell inactivation reduced Pavlovian approach elicited by an auditory CS+, yet shell inactivation also increased responding during presentation of a CS-. These data highlight distinct contributions for NAc subregions in decision making and reward seeking guided by discriminative stimuli. The core is crucial for implementation of conditional rules, whereas the shell refines reward seeking by mitigating the allure of larger, unlikely rewards and reducing expression of inappropriate or non-rewarded actions. SIGNIFICANCE STATEMENT Using external cues to guide decision making is crucial for adaptive behavior. Deficits in cue-guided behavior have been
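The payoff structure of the "Blackjack" task makes the normative choice easy to verify: the large/risky lever has higher expected value only on good-odds trials. A quick check using the probabilities quoted in the abstract:

```python
# Expected pellets per press for the two levers in the task:
# a certain 1-pellet lever vs. a 4-pellet lever that pays off with
# 50% (good-odds) or 12.5% (poor-odds) probability.

def expected_pellets(p_large, large=4, small=1):
    return {"large/risky": p_large * large, "small/certain": small}

good = expected_pellets(0.50)    # good-odds cue
poor = expected_pellets(0.125)   # poor-odds cue
print(good["large/risky"] > good["small/certain"])  # True: risky pays
print(poor["large/risky"] < poor["small/certain"])  # True: certain pays
```

So a rat tracking the cues should shift toward the risky lever on good-odds trials and away from it on poor-odds trials, which is the control-condition pattern the abstract reports.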
Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje
2017-01-18
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple
Rules of Origin as Commercial Policy Instruments.
Falvey, Rod; Reed, Geoff
1997-01-01
This article examines the role of rules of origin as a commercial policy instrument that targets the input composition of imports. Using a three-country, partial equilibrium structure, we demonstrate conditions under which the imposition of a binding rule will be welfare improving for an importer facing competitive export suppliers. We further show that employing rules of origin in this way would be complementary to, rather than a substitute for, conventional optimal tariffs. Copyright Econom...
Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K
2011-02-15
This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed, and discussed with respect to the implications and management of evacuation decisions in chemical companies. It has been shown that in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. The empirical results also show that the implications of interventions, as well as the time and workforce required to complete particular shutdown activities, may differ greatly from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions be tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.
Pereira, Robert
1999-01-01
This paper evaluates the performance of several popular technical trading rules applied to the Australian share market. The optimal trading rule parameter values over the in-sample period of 4/1/82 to 31/12/89 are found using a genetic algorithm. These optimal rules are then evaluated in terms of their forecasting ability and economic profitability during the out-of-sample period from 2/1/90 to the 31/12/97. The results indicate that the optimal rules outperform the benchmark given by a risk-...
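A rule of the kind typically optimized in such studies, e.g. a moving-average crossover, can be sketched and evaluated in a few lines. The prices and the (short, long) window lengths below are hypothetical; in the genetic-algorithm setting, those window lengths would be the parameters being tuned over the in-sample period:

```python
# Moving-average crossover sketch: hold the asset only while the
# short moving average is above the long one. Prices are hypothetical.

def sma(xs, n, i):
    """Simple moving average of xs over the n values ending at index i."""
    return sum(xs[i - n + 1:i + 1]) / n

def crossover_returns(prices, short=3, long=5):
    """Cumulative return of holding only when short MA > long MA,
    using yesterday's signal to avoid look-ahead."""
    wealth = 1.0
    for i in range(long, len(prices)):
        if sma(prices, short, i - 1) > sma(prices, long, i - 1):
            wealth *= prices[i] / prices[i - 1]
    return wealth

prices = [100, 102, 104, 103, 101, 99, 98, 100, 103, 105]
print(round(crossover_returns(prices), 4))  # → 0.9992
```

The out-of-sample evaluation in the paper then compares such a rule's cumulative return against a buy-and-hold benchmark.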
Directory of Open Access Journals (Sweden)
Toly Chen
2012-01-01
Full Text Available A nonlinear programming and artificial neural network approach is presented in this study to optimize the performance of a job dispatching rule in a wafer fabrication factory. The proposed methodology fuses two existing rules and constructs a nonlinear programming model to choose the best values of parameters in the two rules by dynamically maximizing the standard deviation of the slack, which has been shown to benefit scheduling performance by several studies. In addition, a more effective approach is also applied to estimate the remaining cycle time of a job, which is empirically shown to be conducive to the scheduling performance. The efficacy of the proposed methodology was validated with a simulated case; evidence was found to support its effectiveness. We also suggested several directions in which it can be exploited in the future.
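The slack quantity at the heart of such dispatching rules can be sketched directly: slack is the due date minus the current time minus the estimated remaining cycle time, and a slack-based dispatcher releases the most urgent (least-slack) job first. Job data below are hypothetical, and this omits the rule fusion and parameter tuning the paper performs:

```python
# Least-slack dispatching sketch for a wafer fab work center.
# slack = due date - now - estimated remaining cycle time.

def slack(job, now):
    return job["due"] - now - job["remaining_cycle_time"]

def dispatch(jobs, now):
    """Return the id of the job with minimum slack (most urgent)."""
    return min(jobs, key=lambda j: slack(j, now))["id"]

jobs = [
    {"id": "lot-A", "due": 50.0, "remaining_cycle_time": 30.0},  # slack 10
    {"id": "lot-B", "due": 45.0, "remaining_cycle_time": 38.0},  # slack -3
    {"id": "lot-C", "due": 60.0, "remaining_cycle_time": 35.0},  # slack 15
]
print(dispatch(jobs, now=10.0))  # → lot-B
```

The quality of the remaining-cycle-time estimate directly changes the slack values, which is why the paper pairs the rule with a better cycle-time estimator.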
29 CFR 1905.41 - Summary decision.
2010-07-01
29 Labor 5 (2010-07-01 edition) — Regulations Relating to Labor (Continued); Occupational Safety and Health Administration, Department of Labor; Rules of... under the Occupational Safety and Health Act of 1970; Summary Decisions. § 1905.41 Summary decision. (a) No genuine issue...
Wang, S. Y.; Ho, C. C.; Chang, L. C.
2017-12-01
Public water use in Hsinchu is supplied mainly by the Baoshan Reservoir, Second Baoshan Reservoir, Yongheshan Reservoir, and Longen Weir. However, increasing water demand, driven by the development of the Hsinchu Science and Industrial Park, makes a stable water supply increasingly difficult to maintain. To stabilize the water supply in Hsinchu, the study applies long-term and short-term plans to cover the water shortage. Developing an efficient methodology to define a cost-effective action portfolio is therefore an important task. Hence, the study develops a novel decision model, the Stochastic Programming with Recourse Decision Model (SPRDM), to estimate a cost-effective action portfolio. The first stage of SPRDM determines the long-term action portfolio and the recourse information accompanying that portfolio (the probability of a water-shortage event). The second stage of SPRDM optimizes the cost-effective action portfolio in response to the recourse information. To account for the uncertainty of reservoir sedimentation and demand growth, the study sets up nine scenarios comprising optimistic, most-likely, and pessimistic reservoir sedimentation and demand growth. The results show that the optimal action portfolio consists of the FengTain Lake and Panlon Weir, Hsinchu Desalination Plant, and Domestic and Industrial Water long-term plans, and the Emergency Backup Well, Irrigation Water Transference, Preliminary Water Rationing, Advanced Water Rationing, and Water Transport from Other Districts short-term plans. The minimum expected cost of the optimal action portfolio is NT$1.1002 billion. The results can be used as a reference for decision making because they account for the uncertainty of varied hydrology, reservoir sedimentation, and water demand growth.
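The two-stage structure of SPRDM can be sketched by enumeration: choose the first-stage (long-term) portfolio minimizing build cost plus the probability-weighted second-stage (recourse) cost over discrete shortage scenarios. All plan names, costs, capacities, and probabilities below are hypothetical illustrations, not the paper's data:

```python
# Two-stage stochastic program with recourse, solved by brute-force
# enumeration over first-stage plan subsets and discrete scenarios.

from itertools import combinations

long_term = {"desal_plant": 4.0, "new_weir": 2.5, "reuse_plant": 1.5}  # build cost
capacity = {"desal_plant": 15.0, "new_weir": 8.0, "reuse_plant": 5.0}
scenarios = [(0.3, 0.0), (0.5, 10.0), (0.2, 25.0)]  # (probability, shortage)
RECOURSE_COST = 1.0  # emergency cost per unit of unmet shortage

def expected_cost(plan):
    build = sum(long_term[p] for p in plan)
    cap = sum(capacity[p] for p in plan)
    recourse = sum(prob * RECOURSE_COST * max(0.0, short - cap)
                   for prob, short in scenarios)
    return build + recourse

plans = [set(c) for r in range(len(long_term) + 1)
         for c in combinations(long_term, r)]
best = min(plans, key=expected_cost)
print(sorted(best), round(expected_cost(best), 3))  # → ['desal_plant'] 6.0
```

In the real model the second stage would itself be an optimization over short-term actions per scenario, rather than a flat per-unit penalty.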
Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.
2015-12-01
The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies require a strategic combination of supply sources for added reliability, reduced costs, and improved operational flexibility. Designing and operating such a portfolio of water supply sources entails decisions on what and when to expand, and how much of each available source to use, accounting for interest rates, economies of scale, and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining short-term and long-term optimization of water use with simulation of water allocation. A case study in Bahia Do Rio Dos Sinos in Southern Brazil is presented. The framework couples a quadratic programming optimization model in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions, and (b) evaluation of the potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of further system expansion. Results also highlight the potential of various water supply alternatives, including conservation, groundwater, and infrastructural enhancements over time. The framework proves useful for planning and is transferable to similarly urbanized systems.
Christoforou, Paraskevi S; Ashforth, Blake E
2015-01-01
We argue that the strength with which the organization communicates expectations regarding the appropriate emotional expression toward customers (i.e., explicitness of display rules) has an inverted U-shaped relationship with service delivery behaviors, customer satisfaction, and sales performance. Further, we argue that service organizations need a particular blend of explicitness of display rules and role discretion for the purpose of optimizing sales performance. As hypothesized, findings from 2 samples of salespeople suggest that either high or low explicitness of display rules impedes service delivery behaviors and sales performance, which peaks at moderate explicitness of display rules and high role discretion. The findings also suggest that the explicitness of display rules has a positive relationship with customer satisfaction. (c) 2015 APA, all rights reserved.
Scalable software architectures for decision support.
Musen, M A
1999-12-01
Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models
Magana-Mora, Arturo; Bajic, Vladimir B.
2017-06-14
Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.
Optimal decisions of countries with carbon tax and carbon tariff
Directory of Open Access Journals (Sweden)
Yumei Hou
2015-05-01
Full Text Available Purpose: Reducing carbon emissions has recently become the core problem in controlling global warming and climate deterioration. This paper focuses on the optimal carbon taxation policy levied by countries and its impact on firms' optimal production decisions. Design/methodology/approach: This paper uses a two-stage game-theoretic model to analyze the impact of carbon tariffs and taxes. Numerical simulation is used to supplement the theoretical analysis. Findings: The results indicate that demand in an unstable market is significantly affected by the level of environmental damage. A carbon tariff is a policy-oriented tax, while a carbon tax is a market-oriented one. A comprehensive carbon taxation policy benefits developed countries, while a basic policy is more suitable for developing countries. Research limitations/implications: This research does not consider random demand or asymmetric information, which may limit how well the model reflects reality. Originality/value: This work provides a different perspective on the impact of carbon taxes and tariffs. It is the first study to consider two consuming markets and the strategic game between two countries; the different international status of the countries considered is also a unique point.
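The two-stage game structure described here can be sketched by backward induction: first solve the firm's production choice for any given tax, then search the country's tax grid for the welfare maximum. The linear-demand monopoly and the crude welfare function (tax revenue minus environmental damage) are illustrative assumptions, not the paper's model:

```python
# Backward-induction sketch of a two-stage carbon-tax game:
# stage 2, the firm picks output given the tax; stage 1, the country
# picks the tax to maximize a simple welfare measure.

def firm_output(tax, a=10.0, b=1.0, c=2.0, e=1.0):
    """Stage 2: monopolist with inverse demand p = a - b*q, unit cost c,
    emissions e per unit; the FOC gives q = (a - c - e*tax) / (2b)."""
    return max(0.0, (a - c - e * tax) / (2 * b))

def welfare(tax, damage=1.5, e=1.0):
    """Stage 1 objective: tax revenue minus environmental damage
    (consumer surplus and profit omitted for brevity)."""
    q = firm_output(tax)
    return tax * e * q - damage * e * q

taxes = [t / 100 for t in range(0, 801)]  # grid over [0, 8]
t_star = max(taxes, key=welfare)
print(t_star, round(firm_output(t_star), 2))
```

Raising the damage parameter raises the optimal tax and lowers output, which mirrors the abstract's point that demand is sensitive to the environmental damage level.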