WorldWideScience

Sample records for optimal decision rules

  1. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is length: the number of terms in a decision rule, which should be minimized. Another is coverage, which represents the breadth of a rule’s applicability and generality; as such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee, for each tuple of values of the function, that totally optimal rules exist for each row of the table (as in the case of total Boolean functions, where the cardinalities are equal to 2) or, for each row, find a tuple of values of the function for which totally optimal rules do not exist for this row.
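
    The definition above can be checked by brute force on a small table. The sketch below uses a hypothetical two-attribute Boolean decision table (the OR function) and enumerates every exact conjunctive rule for each row; the table and names are illustrative, not taken from the paper.

```python
from itertools import combinations

# Hypothetical decision table for a total Boolean function (OR):
# each key is a tuple of attribute values, each value is the decision.
table = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 1,
}

def rule_stats(row, attrs):
    """For the rule whose left-hand side fixes the attributes in `attrs`
    to the values they take in `row`, return (is_exact, length, coverage)."""
    matching = [r for r in table if all(r[i] == row[i] for i in attrs)]
    exact = all(table[r] == table[row] for r in matching)
    return exact, len(attrs), len(matching)

def totally_optimal_exists(row):
    """A rule for `row` is totally optimal if it simultaneously attains the
    minimum length and the maximum coverage among exact rules for `row`."""
    exact_rules = []
    n = len(row)
    for k in range(n + 1):
        for attrs in combinations(range(n), k):
            ok, length, cover = rule_stats(row, attrs)
            if ok:
                exact_rules.append((length, cover))
    min_len = min(length for length, _ in exact_rules)
    max_cov = max(cover for _, cover in exact_rules)
    return (min_len, max_cov) in exact_rules

print(all(totally_optimal_exists(r) for r in table))  # True
```

    Here the check succeeds for every row, consistent with the Boolean case (all domain cardinalities equal to 2) mentioned in the abstract.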

  2. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is length: the number of terms in a decision rule, which should be minimized. Another is coverage, which represents the breadth of a rule’s applicability and generality; as such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee, for each tuple of values of the function, that totally optimal rules exist for each row of the table (as in the case of total Boolean functions, where the cardinalities are equal to 2) or, for each row, find a tuple of values of the function for which totally optimal rules do not exist for this row.

  3. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules, exact or approximate, are better from the point of view of classification; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  4. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules, exact or approximate, are better from the point of view of classification; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  5. Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions

    KAUST Repository

    Alsolami, Fawaz

    2016-04-25

    ‘If-then’ rule sets are one of the most expressive and human-readable knowledge representations. This thesis deals with optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important areas of application are knowledge extraction and representation. The benefit of considering inhibitory rules is connected with the fact that in some situations they can describe more knowledge than decision rules. Decision tables with many-valued decisions arise in combinatorial optimization, computational geometry, fault diagnosis, and especially in the processing of data sets. In this thesis, various examples of real-life problems are considered which help to understand the motivation of the investigation. We extend relatively simple results obtained earlier for decision rules over decision tables with many-valued decisions to the case of inhibitory rules. The behavior of Shannon functions (which characterize the complexity of rule systems) is studied for finite and infinite information systems, for global and local approaches, and for decision and inhibitory rules. The extensions of dynamic programming for the study of decision rules over decision tables with single-valued decisions are generalized to the case of decision tables with many-valued decisions. These results are also extended to the case of inhibitory rules. As a result, we have algorithms (i) for multi-stage optimization of rules relative to such criteria as length or coverage, (ii) for counting the number of optimal rules, (iii) for construction of Pareto optimal points for bi-criteria optimization problems, (iv) for construction of graphs describing relationships between two cost functions, and (v) for construction of graphs describing relationships between cost and accuracy of rules. The applications of the created tools include comparison (based on information about Pareto optimal points) of greedy heuristics for bi-criteria optimization of rules.

  6. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  7. Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions

    KAUST Repository

    Alsolami, Fawaz

    2016-01-01

    ‘If-then’ rule sets are one of the most expressive and human-readable knowledge representations. This thesis deals with optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important areas of application are knowledge extraction and representation.

  8. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T.

  9. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    In the paper, we study an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.
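
    The uncertainty measure J(T) described in this abstract can be sketched in a few lines; the toy decision table below is hypothetical, not from the paper.

```python
from collections import Counter

def uncertainty_J(rows):
    """J(T): the number of rows in the table minus the number of rows
    labeled with the most common decision for the table."""
    decisions = Counter(row[-1] for row in rows)
    return len(rows) - max(decisions.values())

# Hypothetical decision table: each row is (attribute values..., decision).
T = [(0, 0, 'a'), (0, 1, 'a'), (1, 0, 'b'), (1, 1, 'a')]
print(uncertainty_J(T))  # 4 rows, 3 share decision 'a' -> 1

# A gamma-decision rule localizes rows in a subtable with uncertainty at
# most gamma; the subtable given by "attribute 0 = 0" is already certain:
sub = [row for row in T if row[0] == 0]
print(uncertainty_J(sub))  # 0
```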

  10. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha

    2012-12-01

    In the paper, we study an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.

  11. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length and, after that, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
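
    The measure R(T) used here differs from the measure J(T) of the previous record in counting conflicting row pairs rather than minority rows; a minimal sketch (the toy table is hypothetical, not from the paper):

```python
from itertools import combinations

def uncertainty_R(rows):
    """R(T): the number of unordered pairs of rows with different decisions."""
    return sum(1 for r1, r2 in combinations(rows, 2) if r1[-1] != r2[-1])

# Hypothetical decision table: each row is (attribute values..., decision).
T = [(0, 0, 'a'), (0, 1, 'a'), (1, 0, 'b'), (1, 1, 'a')]
print(uncertainty_R(T))  # the 'b' row conflicts with each of the three 'a' rows -> 3
```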

  12. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.

  13. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  14. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  15. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  16. Optimization of decision rule complexity for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad

    2013-10-01

    We describe new heuristics that construct decision rules for decision tables with many-valued decisions which are sufficiently good from the point of view of length and coverage. We use a statistical test to find leaders among the heuristics. After that, we compare our results with optimal results obtained by dynamic programming algorithms. The average percentage of relative difference between the length (coverage) of constructed and optimal rules is at most 6.89% (15.89%, respectively) for the leaders, which seems to be a promising result. © 2013 IEEE.

  17. Optimization of inhibitory decision rules relative to length and coverage

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to length and coverage. In contrast with usual rules, which have a relation "attribute = value" on the right-hand side, inhibitory rules have a relation "attribute ≠ value" on the right-hand side.

  18. Optimization of decision rule complexity for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    We describe new heuristics that construct decision rules for decision tables with many-valued decisions which are sufficiently good from the point of view of length and coverage. We use a statistical test to find leaders among the heuristics. After that, we compare our results with optimal results obtained by dynamic programming algorithms. The average percentage of relative difference between the length (coverage) of constructed and optimal rules is at most 6.89% (15.89%, respectively) for the leaders, which seems to be a promising result.

  19. Optimization of inhibitory decision rules relative to length and coverage

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to length and coverage. In contrast with usual rules, which have a relation "attribute = value" on the right-hand side, inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. © 2012 Springer-Verlag.

  20. Optimization of β-decision rules relative to number of misclassifications

    KAUST Repository

    Zielosko, Beata

    2012-01-01

    In the paper, we present an algorithm for optimization of approximate decision rules relative to the number of misclassifications. The considered algorithm is based on extensions of dynamic programming and constructs a directed acyclic graph Δβ(T). Based on this graph we can describe the whole set of so-called irredundant β-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 Springer-Verlag.

  1. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-08-01

    This paper is devoted to the consideration of the software system Dagger, created at KAUST. This system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between the number of misclassifications and the depth of decision trees), and between cost and uncertainty of decision trees. We describe the features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  2. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz; Amin, Talha M.; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the consideration of the software system Dagger, created at KAUST. This system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between the number of misclassifications and the depth of decision trees), and between cost and uncertainty of decision trees. We describe the features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  3. Decision tables and rule engines in organ allocation systems for optimal transparency and flexibility.

    Science.gov (United States)

    Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J

    2011-05-01

    Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
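
    The idea of using a decision table directly as rule-engine input can be sketched as follows; the condition fields and point values below are invented for illustration and bear no relation to the real ETKAS rules.

```python
# A minimal decision-table rule engine sketch. Each rule row pairs a
# condition over a candidate record with an action (here, a score bonus).
rules = [
    {"if": lambda c: c["blood_group_match"],   "then": ("add_points", 400)},
    {"if": lambda c: c["waiting_years"] >= 5,  "then": ("add_points", 100)},
    {"if": lambda c: c["hla_mismatches"] == 0, "then": ("add_points", 300)},
]

def score(candidate):
    """Run the decision table top to bottom, accumulating points."""
    points = 0
    for rule in rules:
        if rule["if"](candidate):
            action, value = rule["then"]
            if action == "add_points":
                points += value
    return points

print(score({"blood_group_match": True, "waiting_years": 6, "hla_mismatches": 2}))  # 500
```

    Because the rules live in data rather than code, they can be verified for completeness and consistency, and swapped out without touching the engine, which is the flexibility the abstract describes.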

  4. Decision tables and rule engines in organ allocation systems for optimal transparency and flexibility

    NARCIS (Netherlands)

    Schaafsma, M.; Deijl, W. van der; Smits, J.M.M.; Rahmel, A.O.; Vries Robbé, P.F. de; Hoitsma, A.J.

    2011-01-01

    Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system.

  5. Optimization of approximate decision rules relative to number of misclassifications: Comparison of greedy and dynamic programming approaches

    KAUST Repository

    Amin, Talha

    2013-01-01

    In the paper, we present a comparison of dynamic programming and greedy approaches for construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.

  6. Optimal offering and operating strategies for wind-storage systems with linear decision rules

    DEFF Research Database (Denmark)

    Ding, Huajie; Pinson, Pierre; Hu, Zechun

    2016-01-01

    The participation of wind farm-energy storage systems (WF-ESS) in electricity markets calls for an integrated view of day-ahead offering strategies and real-time operation policies. Such an integrated strategy is proposed here by co-optimizing offering at the day-ahead stage and the operation policy to be used at the balancing stage. Linear decision rules are seen as a natural approach to model and optimize the real-time operation policy. These allow enhancing profits from balancing markets based on updated information on prices and wind power generation. Our integrated strategies for WF-ESS…
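
    A linear decision rule for the balancing stage can be sketched as follows; the policy form, parameter names, and numbers are illustrative assumptions, not the paper's model.

```python
def storage_policy(offer_mw, wind_actual_mw, alpha=0.6, p_max=10.0):
    """Linear decision rule sketch: charge/discharge a fixed fraction `alpha`
    of the deviation between actual wind output and the day-ahead offer,
    clipped to the storage power limit `p_max` (all values in MW)."""
    deviation = wind_actual_mw - offer_mw
    charge = max(-p_max, min(p_max, alpha * deviation))  # MW into storage
    delivered = wind_actual_mw - charge                  # MW sent to the grid
    return charge, delivered

# Wind overproduces by 10 MW; the rule stores 60% of the surplus.
print(storage_policy(offer_mw=50.0, wind_actual_mw=60.0))  # (6.0, 54.0)
```

    The appeal of linear rules is that the coefficients (here `alpha`) can be chosen inside the day-ahead optimization, so the real-time behavior is co-optimized with the offer rather than decided ad hoc.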

  7. Optimal Decision Rules in Repeated Games Where Players Infer an Opponent’s Mind via Simplified Belief Calculation

    Directory of Open Access Journals (Sweden)

    Mitsuhiro Nakamura

    2016-07-01

    In strategic situations, humans infer the state of mind of others, e.g., emotions or intentions, and adapt their behavior appropriately. Nonetheless, evolutionary studies of cooperation typically focus only on reaction norms, e.g., tit for tat, whereby individuals make their next decisions by considering only the observed outcome rather than their opponent’s state of mind. In this paper, we analyze repeated two-player games in which players explicitly infer their opponent’s unobservable state of mind. Using Markov decision processes, we investigate optimal decision rules and their performance in cooperation. The state-of-mind inference requires Bayesian belief calculations, which are computationally intensive. We therefore study two models in which players simplify these belief calculations. In Model 1, players adopt a heuristic to approximately infer their opponent’s state of mind, whereas in Model 2, players use information regarding their opponent’s previous state of mind, obtained from external evidence, e.g., emotional signals. We show that players in both models reach almost optimal behavior through commitment-like decision rules by which players are committed to selecting the same action regardless of their opponent’s behavior. These commitment-like decision rules can enhance or reduce cooperation depending on the opponent’s strategy.

  8. On the complexity of decision trees, the quasi-optimizer, and the power of heuristic rules

    NARCIS (Netherlands)

    Findler, N.V.; Leeuwen, J. van

    The power of certain heuristic rules is indicated by the relative reduction in the complexity of computations carried out, due to the use of the heuristics. A concept of complexity is needed to evaluate the performance of programs as they operate with a varying set of heuristic rules in use. We

  9. Decision rules for decision tables with many-valued decisions

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In the paper, the authors present a greedy algorithm for construction of exact and partial decision rules for decision tables with many-valued decisions. Exact decision rules can be 'over-fitted', so instead of exact decision rules with many attributes, it is more appropriate to work with partial decision rules with a smaller number of attributes. Based on results for the set cover problem, the authors study bounds on the accuracy of the greedy algorithm for exact and partial decision rule construction, and the complexity of the problem of minimizing decision rule length. © 2011 Springer-Verlag.
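
    The set-cover connection can be illustrated with the classic greedy algorithm the analysis builds on; the reduction from rule construction to set cover is omitted here, and the universe and subsets below are toy data.

```python
def greedy_set_cover(universe, subsets):
    """Classic greedy set cover: repeatedly pick the subset that covers the
    most still-uncovered elements. The accuracy bounds cited in the abstract
    come from the well-known guarantees for this scheme."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        chosen.append(best)
        uncovered -= best
    return chosen

cover = greedy_set_cover({1, 2, 3, 4, 5},
                         [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
print(cover)  # [{1, 2, 3}, {4, 5}]
```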

  10. Decision rules for decision tables with many-valued decisions

    KAUST Repository

    Chikalov, Igor; Zielosko, Beata

    2011-01-01

    In the paper, the authors present a greedy algorithm for construction of exact and partial decision rules for decision tables with many-valued decisions. Exact decision rules can be 'over-fitted', so instead of exact decision rules with many attributes, it is more appropriate to work with partial decision rules with a smaller number of attributes.

  11. Design and Analysis of Decision Rules via Dynamic Programming

    KAUST Repository

    Amin, Talha M.

    2017-01-01

    Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with regards to multiple criteria).

  12. Decision Mining Revisited – Discovering Overlapping Rules

    NARCIS (Netherlands)

    Mannhardt, F.; de Leoni, M.; Reijers, H.A.; van der Aalst, W.M.P.

    2016-01-01

    Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,

  13. Decision Mining Revisited - Discovering Overlapping Rules

    NARCIS (Netherlands)

    Mannhardt, F.; De Leoni, M.; Reijers, H.A.; van der Aalst, W.M.P.; Nurcan, S.; Soffer, P.; Bajec, M.; Eder, J.

    2016-01-01

    Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,

  14. Decision mining revisited - Discovering overlapping rules

    NARCIS (Netherlands)

    Mannhardt, Felix; De Leoni, Massimiliano; Reijers, Hajo A.; Van Der Aalst, Wil M P

    2016-01-01

    Decision mining enriches process models with rules underlying decisions in processes using historical process execution data. Choices between multiple activities are specified through rules defined over process data. Existing decision mining methods focus on discovering mutually-exclusive rules,

  15. Conformance Testing: Measurement Decision Rules

    Science.gov (United States)

    Mimbs, Scott M.

    2010-01-01

    The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement

  16. Rules of Thumb in Life-Cycle Saving Decisions

    OpenAIRE

    Winter, Joachim; Schlafmann, Kathrin; Rodepeter, Ralf

    2011-01-01

    We analyse life-cycle saving decisions when households use simple heuristics, or rules of thumb, rather than solve the underlying intertemporal optimization problem. We simulate life-cycle saving decisions using three simple rules and compute utility losses relative to the solution of the optimization problem. Our simulations suggest that utility losses induced by following simple decision rules are relatively low. Moreover, the two main saving motives reflected by the canonical life-cycle...

  17. Testing Decision Rules for Multiattribute Decision Making

    NARCIS (Netherlands)

    Seidl, C.; Traub, S.

    1996-01-01

    This paper investigates the existence of an editing phase and studies the compliance of subjects' behaviour with the most popular multiattribute decision rules. We observed that our data comply well with the existence of an editing phase, at least if we allow for a natural error rate of some 25%.

  18. Design and Analysis of Decision Rules via Dynamic Programming

    KAUST Repository

    Amin, Talha M.

    2017-04-24

    The areas of machine learning, data mining, and knowledge representation use many different formats to represent information. Among these formats, decision rules are the most expressive and the most easily understood by humans. In this thesis, we use dynamic programming to design and analyze decision rules. The use of dynamic programming allows us to work with decision rules in ways that were previously possible only for brute-force methods. Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set to contain only rules that are optimal with respect to selected criteria. One way that we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) with regard to Boolean functions. Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with regard to multiple criteria). We also use Pareto optimal points to compare and rate greedy heuristics with regard to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty, which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.
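
    Constructing Pareto optimal points for the bi-criteria (length, coverage) problem amounts to a non-domination filter, where length is minimized and coverage maximized. A minimal sketch over hypothetical (length, coverage) pairs:

```python
def pareto_points(rules):
    """Keep the (length, coverage) pairs not dominated by any other pair.
    A pair q dominates p if q is no longer than p and covers at least as
    much; since distinct pairs differ somewhere, q is then strictly better
    in at least one criterion."""
    pts = set(rules)
    return sorted(p for p in pts
                  if not any(q != p and q[0] <= p[0] and q[1] >= p[1]
                             for q in pts))

# Hypothetical (length, coverage) pairs for the rules of one table row.
print(pareto_points([(1, 3), (2, 7), (2, 5), (3, 7), (1, 2)]))  # [(1, 3), (2, 7)]
```

    A "totally optimal" rule exists exactly when the Pareto front collapses to a single point that attains both the minimum length and the maximum coverage.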

  19. Do Group Decision Rules Affect Trust? A Laboratory Experiment on Group Decision Rules and Trust

    DEFF Research Database (Denmark)

    Nielsen, Julie Hassing

    2016-01-01

    Enhanced participation has been prescribed as the way forward for improving democratic decision making while generating positive attributes like trust. Yet we do not know the extent to which rules affect the outcome of decision making. This article investigates how different group decision rules affect group trust by testing three ideal types of decision rules (i.e., a Unilateral rule, a Representative rule and a 'Non-rule') in a laboratory experiment. The article shows significant differences between the three decision rules on trust after deliberation. Interestingly, however, it finds that non-hierarchical decision-making procedures enhance trust vis-à-vis other more hierarchical decision-making procedures.

  20. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to give input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is received. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
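    The Bayesian update of a component availability figure mentioned above can be sketched in a few lines (the prior and likelihoods here are invented for illustration):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) from the prior P(H) and the likelihoods of
    the evidence E under H and under not-H (Bayes' theorem)."""
    num = likelihood_if_true * prior
    den = num + likelihood_if_false * (1.0 - prior)
    return num / den

# Hypothetical figures: a component is available with prior 0.9; a
# diagnostic signal fires with probability 0.95 if it is available
# and 0.2 if it is not.
posterior = bayes_update(0.9, 0.95, 0.2)
```

    The posterior rises above the prior because the observed signal is far more likely under availability than under failure.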

  1. WINE ADVISOR EXPERT SYSTEM USING DECISION RULES

    Directory of Open Access Journals (Sweden)

    Dinuca Elena Claudia

    2013-07-01

    In this article I focus on developing an expert system for advising the choice of wine that best matches a specific occasion. An expert system is a computer application that performs a task that would otherwise be performed by a human expert. The implementation is done in the Delphi programming language. The knowledge base is represented as a set of rules. The rules are IF-THEN-ELSE decision rules based on different important wine features.
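    The first-matching-rule structure of such an advisor can be sketched as follows (a Python analogue rather than the paper's Delphi code; the rules, features, and wine names are invented):

```python
# Each rule pairs a condition over the occasion with a recommendation.
RULES = [
    (lambda o: o["dish"] == "fish", "dry white"),
    (lambda o: o["dish"] == "red meat", "full-bodied red"),
    (lambda o: o["occasion"] == "dessert", "sweet wine"),
]

def advise(occasion, default="house wine"):
    """IF-THEN-ELSE chain: fire the first rule whose condition holds,
    otherwise fall through to the default recommendation."""
    for cond, wine in RULES:
        if cond(occasion):
            return wine
    return default

choice = advise({"dish": "fish", "occasion": "dinner"})
```

    Rule order matters in such a chain: earlier rules act as the IF branches and the default plays the final ELSE.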

  2. Optimization of Approximate Inhibitory Rules Relative to Number of Misclassifications

    KAUST Repository

    Alsolami, Fawaz

    2013-10-04

    In this work, we consider so-called nonredundant inhibitory rules, containing an expression “attribute ≠ value” on the right-hand side, for which the number of misclassifications is at most a threshold γ. We study a dynamic programming approach for the description of the considered set of rules. This approach also allows the optimization of nonredundant inhibitory rules relative to length and coverage. The aim of this paper is to investigate an additional possibility of optimization relative to the number of misclassifications. The results of experiments with decision tables from the UCI Machine Learning Repository show that this additional optimization achieves fewer misclassifications. Thus, the proposed optimization procedure is promising.

  3. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    In this paper we start from the calculation of the product cost, applying the hour-machine cost calculation method (THM) to each of three cutting machines, namely: the plasma cutting machine, the combined cutting machine (plasma and water jet) and the water-jet cutting machine. Following the calculation of cost, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version needs to be determined regarding the product manufacturing. To determine the optimal decision version, we first calculate the optimal version on each criterion, and then overall, using multiattribute decision methods.

  4. Length and coverage of inhibitory decision rules

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The authors present algorithms for the optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.
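    The difference between an ordinary rule and an inhibitory rule ("attribute ≠ value" on the right-hand side) can be shown with a small brute-force sketch (table and names invented; the paper itself uses dynamic programming, not enumeration):

```python
from itertools import combinations

# Hypothetical decision table: attribute tuples with a decision label.
TABLE = [
    ((0, 0), "a"),
    ((0, 1), "b"),
    ((1, 0), "c"),
    ((1, 1), "c"),
]

def inhibitory_rules(table, row_idx, excluded):
    """True inhibitory rules for one row with right-hand side
    'decision != excluded': every row matching the left-hand side
    must have a decision other than `excluded`."""
    attrs, _ = table[row_idx]
    for k in range(len(attrs) + 1):
        for subset in combinations(range(len(attrs)), k):
            matched = [d for a, d in table
                       if all(a[i] == attrs[i] for i in subset)]
            if all(d != excluded for d in matched):
                # (conditions, length, coverage)
                yield subset, len(subset), len(matched)

rules = list(inhibitory_rules(TABLE, 0, "c"))
best_len = min(length for _, length, _ in rules)
best_cov = max(cov for _, _, cov in rules)
```

    Length and coverage are then optimized exactly as for ordinary rules, only the truth condition of a rule changes.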

  5. Decision rule classifiers for multi-label decision tables

    KAUST Repository

    Alsolami, Fawaz; Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail

    2014-01-01

    The results of experiments for decision tables from UCI Machine Learning Repository and KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than the strategies taking into account a single criterion. © 2014 Springer International Publishing.

  6. Decision Analysis of Dynamic Spectrum Access Rules

    Energy Technology Data Exchange (ETDEWEB)

    Juan D. Deaton; Luiz A. DaSilva; Christian Wernz

    2011-12-01

    A current trend in spectrum regulation is to incorporate spectrum sharing through the design of spectrum access rules that support Dynamic Spectrum Access (DSA). This paper develops a decision-theoretic framework for regulators to assess the impacts of different decision rules on both primary and secondary operators. We analyze access rules based on sensing and exclusion areas, which in practice can be enforced through geolocation databases. Our results show that receiver-only sensing provides insufficient protection for primary and co-existing secondary users and overall low social welfare. On the other hand, using sensing information from both the transmitter and receiver of a communication link provides dramatic increases in system performance. The performance of using these link end points is relatively close to that of using many cooperative sensing nodes associated with the same access point and large link exclusion areas. These results are useful to regulators and network developers in understanding and developing rules for future DSA regulation.

  7. An overview of bipolar qualitative decision rules

    Science.gov (United States)

    Bonnefon, Jean-Francois; Dubois, Didier; Fargier, Hélène

    Making a good decision is often a matter of listing and comparing positive and negative arguments, as studies in cognitive psychology have shown. In such cases, the evaluation scale should be considered bipolar, that is, negative and positive values are explicitly distinguished. Generally, positive and negative features are evaluated separately, as done in Cumulative Prospect Theory. However, contrary to the latter framework that presupposes genuine numerical assessments, decisions are often made on the basis of an ordinal ranking of the pros and the cons, and focusing on the most salient features, i.e., the decision process is qualitative. In this paper, we report on a project aiming at characterizing several decision rules, based on possibilistic order of magnitude reasoning, and tailored for the joint handling of positive and negative affects, and at testing their empirical validity. The simplest rules can be viewed as extensions of the maximin and maximax criteria to the bipolar case and, like them, suffer from a lack of discrimination power. More decisive rules that refine them are also proposed. They account for both the principle of Pareto-efficiency and the notion of order of magnitude reasoning. The most decisive one uses a lexicographic ranking of the pros and cons. It comes down to a special case of Cumulative Prospect Theory, and subsumes the “Take the best” heuristic.
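    The lexicographic pros-and-cons rule described above can be sketched as follows (the importance levels and options are invented; this is a minimal illustration of the idea, not the authors' formal possibilistic framework):

```python
def lexi_bipolar(pros_a, cons_a, pros_b, cons_b, levels):
    """Compare two options by their pros and cons, each argument
    carrying an order-of-magnitude importance level: decide at the
    highest level where the net support (pros minus cons) differs."""
    for lvl in sorted(levels, reverse=True):
        net_a = pros_a.count(lvl) - cons_a.count(lvl)
        net_b = pros_b.count(lvl) - cons_b.count(lvl)
        if net_a != net_b:
            return "a" if net_a > net_b else "b"
    return "tie"

# Option a carries one top-level (level 3) con; option b has only
# lower-level arguments: the single most salient argument decides,
# in the spirit of the "Take the best" heuristic.
winner = lexi_bipolar(pros_a=[2], cons_a=[3],
                      pros_b=[2, 2], cons_b=[1],
                      levels=[1, 2, 3])
```

    Lower-level arguments only matter when all higher levels tie, which is what gives the rule its discrimination power over plain maximin/maximax.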

  8. Decision rule classifiers for multi-label decision tables

    KAUST Repository

    Alsolami, Fawaz

    2014-01-01

    Recently, the multi-label classification problem has received significant attention in the research community. This paper is devoted to studying the effect of the considered rule heuristic parameters on the generalization error. The results of experiments for decision tables from UCI Machine Learning Repository and KEEL Repository show that rule heuristics taking into account both coverage and uncertainty perform better than the strategies taking into account a single criterion. © 2014 Springer International Publishing.

  9. Simultaneous Optimization of Decisions Using a Linear Utility Function.

    Science.gov (United States)

    Vos, Hans J.

    1990-01-01

    An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)

  10. Comparison of Heuristics for Inhibitory Rule Optimization

    KAUST Repository

    Alsolami, Fawaz

    2014-09-13

    Knowledge representation and extraction are very important tasks in data mining. In this work, we propose a variety of rule-based greedy algorithms that are able to express the knowledge contained in a given dataset as a series of inhibitory rules, containing an expression “attribute ≠ value” on the right-hand side. The main goal of this paper is to determine, based on the rule characteristics of length and coverage, whether the proposed rule heuristics are statistically significantly different; if so, we aim to identify the best performing heuristics for the minimization of rule length and the maximization of rule coverage. The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained by a dynamic programming approach. The results seem to be promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three best-performing heuristics from the point of view of rule length minimization.
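    The Friedman test used above ranks the heuristics within each dataset and aggregates the ranks; a minimal self-contained version is sketched below (the rule-length figures are invented, and scipy.stats.friedmanchisquare would normally be used instead of hand-rolling the statistic):

```python
def friedman_statistic(*samples):
    """Friedman chi-square statistic for k related samples measured
    on the same n blocks (here: datasets), with average ranks for ties."""
    k = len(samples)
    n = len(samples[0])
    rank_sums = [0.0] * k
    for block in zip(*samples):
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average rank for a tied run
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3.0 * n * (k + 1))

# Hypothetical average rule lengths of three heuristics on six datasets.
stat = friedman_statistic(
    [2.1, 3.0, 1.8, 2.5, 2.2, 2.9],
    [2.4, 3.2, 2.0, 2.8, 2.5, 3.1],
    [2.0, 2.9, 1.7, 2.4, 2.1, 2.8],
)
# stat is compared against a chi-square distribution with k - 1 degrees
# of freedom; if significant, a Nemenyi post-hoc test locates the
# pairwise differences between heuristics.
```

    Here the third heuristic ranks first on every dataset, so the statistic is large and the null hypothesis of equal performance would be rejected.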

  11. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  12. Optimal short-sighted rules

    Directory of Open Access Journals (Sweden)

    Sacha Bourgeois-Gironde

    2012-09-01

    The aim of this paper is to assess the relevance of methodological transfers from behavioral ecology to experimental economics with respect to the elicitation of intertemporal preferences. More precisely, our discussion stems from the analysis of Stephens and Anderson's (2001) seminal article. In their study with blue jays, they document that foraging behavior typically implements short-sighted choice rules which are beneficial in the long run. Such long-term profitability of short-sighted behavior cannot be evidenced when using a self-control paradigm (one which contrasts, in a binary way, sooner smaller and later larger payoffs), but becomes apparent when ecological patch paradigms (replicating economic situations in which the main trade-off consists in staying on a food patch or leaving for another patch) are implemented. We transfer this methodology in view of contrasting foraging strategies and self-control in human intertemporal choices.

  13. Comparison of Heuristics for Inhibitory Rule Optimization

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail

    2014-01-01

    The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained by a dynamic programming approach. The results seem to be promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three best-performing heuristics from the point of view of rule length minimization.

  14. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach.

  15. Unrealistic optimism and decision making

    Directory of Open Access Journals (Sweden)

    Božović Bojana

    2009-01-01

    One of the leading descriptive theories of decision-making under risk, Tversky and Kahneman's Prospect theory, reveals that the normative explanation of decision-making, based only on the principle of maximizing the expected utility of outcomes, is unsustainable. It also underlines the effect of alternative factors on decision-making. The framing effect relates to the influence that the verbal formulation of outcomes has on choosing between certain and risky outcomes; in a negative frame people tend to be risk seeking, whereas in a positive frame people express risk-averse tendencies. Individual decisions are not based on objective probabilities of outcomes, but on subjective probabilities that depend on outcome desirability. Unrealistically pessimistic subjects assign lower probabilities (than the group average) to desired outcomes, while unrealistically optimistic subjects assign higher probabilities (than the group average) to desired outcomes. An experiment was conducted in order to test the presumption that there is a relation between unrealistic optimism and decision-making under risk. We expected optimists to be risk seeking, and pessimists to be risk averse. We also expected such cognitive tendencies, if they should become manifest, to be resistant to the framing effect. An unrealistic optimism scale was applied, followed by a questionnaire composed of tasks of decision-making under risk. Results within the whole sample, and results of the afterwards extracted groups of pessimists and optimists, both revealed a dominant risk-seeking tendency that is resistant to the influence of subjective probabilities as well as to the influence of the frame in which the outcome is presented.

  16. Optimization of the monthly operation rule of the Salvajina reservoir

    International Nuclear Information System (INIS)

    Sandoval Garcia, Maria Clemencia; Santacruz Salazar, Santiago; Ramirez Callejas, Carlos A

    2007-01-01

    In the present study a model was designed for the optimization of the rule for monthly operation of the Salvajina dam (Colombia), based on the technique of dynamic programming. The model maximizes the benefits of electric power generation, ensuring at the same time flood regulation in winter and pollution relief during the summer. For the optimization of the operation rule, it was necessary to define the levels and volumes of reserve and holding required for the control of flood zones along the Cauca river, and to provide a minimal outflow that assures a daily flow at the Juanchito station (located 141 km downstream from the dam) of the Cauca river 90% of the time during the most critical summer periods.
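    The dynamic programming formulation behind such a monthly operation rule can be sketched on a toy problem (all numbers, bounds, and the benefit function below are invented; the real model adds flood-control and minimum-flow constraints):

```python
from functools import lru_cache

INFLOW = (3, 2, 1, 4)   # hypothetical monthly inflows
S_MIN, S_MAX = 0, 5     # storage bounds after each month

def benefit(release):
    return release  # invented: benefit proportional to water released

@lru_cache(maxsize=None)
def best_value(storage, month):
    """Maximum total benefit from `month` onward, starting with the
    given storage, choosing an integer release each month so that the
    leftover storage stays within bounds."""
    if month == len(INFLOW):
        return 0
    water = storage + INFLOW[month]
    return max(benefit(r) + best_value(water - r, month + 1)
               for r in range(water + 1)
               if S_MIN <= water - r <= S_MAX)

total = best_value(2, 0)
```

    Backward recursion over (storage, month) states is the standard way to turn such a model into an operating rule: the maximizing release at each state is the rule's prescription.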

  17. Relationships between length and coverage of decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2014-01-01

    The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on the dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.

  18. Relationships between length and coverage of decision rules

    KAUST Repository

    Amin, Talha

    2014-02-14

    The paper describes a new tool for studying relationships between the length and coverage of exact decision rules. This tool is based on the dynamic programming approach. We also present results of experiments with decision tables from the UCI Machine Learning Repository.

  19. Amsterdam wrist rules: A clinical decision aid

    Directory of Open Access Journals (Sweden)

    Bentohami Abdelali

    2011-10-01

    Abstract Background Acute trauma of the wrist is one of the most frequent reasons for visiting the Emergency Department. These patients are routinely referred for radiological examination. Most X-rays, however, do not reveal any fractures. A clinical decision rule determining the need for X-rays in patients with acute wrist trauma may help to select patients with fractures. Methods/Design This study will be a multi-center observational diagnostic study in which the data will be collected cross-sectionally. The study population will consist of all consecutive adult patients (≥18 years) presenting with acute wrist trauma at the Emergency Department of the participating hospitals. This research comprises two components: one study will be conducted to determine which clinical parameters are predictive of the presence of a distal radius fracture in adult patients presenting to the Emergency Department following acute wrist trauma. These clinical parameters are defined by trauma mechanism, physical examination, and functional testing. This data will be collected in two of the three participating hospitals and will be assessed using logistic regression modelling to estimate the regression coefficients, after which a reduced model will be created by means of a log-likelihood ratio test. The accuracy of the model will be estimated by a goodness-of-fit test and an ROC curve. The final model will be validated internally through bootstrapping and, by shrinking it, an adjusted model will be generated. In the second component of this study, the developed prediction model will be validated in a new dataset consisting of a population of patients from the third hospital. If necessary, the model will be calibrated using the data from the validation study. Discussion Wrist trauma is frequently encountered at the Emergency Department. However, to date, no decision rule regarding this type of trauma has been created. Ideally, radiographs are …
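    The logistic-regression-plus-ROC pipeline described in the protocol can be sketched with a toy scoring step (the coefficients, predictors, and patient data below are entirely invented, not the Amsterdam rule's actual parameters):

```python
import math

# Hypothetical fitted coefficients for a wrist-fracture rule:
# intercept, swelling (0/1), painful palpation (0/1), age over 50 (0/1).
COEF = (-2.0, 1.5, 1.2, 0.8)

def fracture_probability(swelling, palpation_pain, age_over_50):
    """Predicted probability from a fitted logistic model."""
    z = COEF[0] + COEF[1] * swelling + COEF[2] * palpation_pain + COEF[3] * age_over_50
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    random positive case scores higher than a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

patients = [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 0)]  # invented cases
labels = [1, 1, 0, 0]                                    # 1 = fracture
scores = [fracture_probability(*p) for p in patients]
area = auc(scores, labels)
```

    A decision rule is then obtained by choosing a probability threshold on this score that trades sensitivity against the number of avoidable X-rays.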

  20. Portfolio theory and the alternative decision rule of cost-effectiveness analysis: theoretical and practical considerations.

    Science.gov (United States)

    Sendi, Pedram; Al, Maiwenn J; Gafni, Amiram; Birch, Stephen

    2004-05-01

    Bridges and Terris (Soc. Sci. Med. (2004)) critique our paper on the alternative decision rule of economic evaluation in the presence of uncertainty and constrained resources within the context of a portfolio of health care programs (Sendi et al. Soc. Sci. Med. 57 (2003) 2207). They argue that by not adopting a formal portfolio theory approach we overlook the optimal solution. We show that these arguments stem from a fundamental misunderstanding of the alternative decision rule of economic evaluation. In particular, the portfolio theory approach advocated by Bridges and Terris is based on the same theoretical assumptions that the alternative decision rule set out to relax. Moreover, Bridges and Terris acknowledge that the proposed portfolio theory approach may not identify the optimal solution to resource allocation problems. Hence, it provides neither theoretical nor practical improvements to the proposed alternative decision rule.

  1. Decision rules and group rationality: cognitive gain or standstill?

    Science.gov (United States)

    Curşeu, Petru Lucian; Jansen, Rob J G; Chappin, Maryse M H

    2013-01-01

    Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average, groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential avenues for further research on the emergence of collective rationality in human decision-making groups.

  2. Decision rules and group rationality: cognitive gain or standstill?

    Directory of Open Access Journals (Sweden)

    Petru Lucian Curşeu

    Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members' cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of individual cognition during social interactions, this study tests the extent to which collaborative and consultative decision rules impact the emergence of group rationality. Using a set of decision tasks adapted from the heuristics and biases literature, we evaluate rationality as the extent to which individual choices are aligned with a normative ideal. We further operationalize group rationality as cognitive synergy (the extent to which collective rationality exceeds average or best individual rationality in the group), and we test the effect of collaborative and consultative decision rules in a sample of 176 groups. Our results show that the collaborative decision rule has superior synergic effects compared to the consultative decision rule. The ninety-one groups working in a collaborative fashion made more rational choices (above and beyond the average rationality of their members) than the eighty-five groups working in a consultative fashion. Moreover, the groups using a collaborative decision rule were closer to the rationality of their best member than groups using consultative decision rules. Nevertheless, on average, groups did not outperform their best member. Therefore, our results reveal how decision rules prescribing interpersonal interactions impact the emergence of collective cognitive competencies. They also open potential avenues for further research on the emergence of collective rationality in human decision-making groups.

  3. The optimum decision rules for the oddity task

    NARCIS (Netherlands)

    Versfeld, N.J.; Dai, H.; Green, D.M.

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m−1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur …

  4. Assessing predation risk: optimal behaviour and rules of thumb.

    Science.gov (United States)

    Welton, Nicky J; McNamara, John M; Houston, Alasdair I

    2003-12-01

    We look at a simple model in which an animal makes behavioural decisions over time in an environment in which all parameters are known to the animal except predation risk. In the model there is a trade-off between gaining information about predation risk and anti-predator behaviour. All predator attacks lead to death for the prey, so that the prey learns about predation risk by virtue of the fact that it is still alive. We show that it is not usually optimal to behave as if the current unbiased estimate of the predation risk is its true value. We consider two different ways to model reproduction; in the first scenario the animal reproduces throughout its life until it dies, and in the second scenario expected reproductive success depends on the level of energy reserves the animal has gained by some point in time. For both of these scenarios we find results on the form of the optimal strategy and give numerical examples which compare optimal behaviour with behaviour under simple rules of thumb. The numerical examples suggest that the value of the optimal strategy over the rules of thumb is greatest when there is little current information about predation risk, learning is not too costly in terms of predation, and it is energetically advantageous to learn about predation. We find that for the model and parameters investigated, a very simple rule of thumb such as 'use the best constant control' performs well.
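    The "learning by staying alive" idea above can be illustrated with a simple grid-based Bayesian sketch (the grid, prior, and numbers are invented; the paper's model additionally couples this learning with behavioural decisions):

```python
# A grid prior over the unknown per-step predation risk p is updated
# after each survived step, since survival has likelihood (1 - p).
GRID = [i / 100 for i in range(101)]   # candidate risks 0.00 .. 1.00
weights = [1.0 for _ in GRID]          # uniform prior

def survive_one_step(weights):
    """Posterior weights after observing one more step of survival."""
    new = [w * (1 - p) for w, p in zip(weights, GRID)]
    total = sum(new)
    return [w / total for w in new]

def mean_risk(weights):
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, GRID)) / total

before = mean_risk(weights)
for _ in range(10):
    weights = survive_one_step(weights)
after = mean_risk(weights)
# The estimated risk declines the longer the animal survives, which is
# exactly why the current unbiased estimate changes over time and why
# acting on it naively need not be optimal.
```

    The trade-off in the paper arises because gathering this survival evidence is itself risky: the prey pays in predation exposure for the information it gains.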

  5. Rough set and rule-based multicriteria decision aiding

    Directory of Open Access Journals (Sweden)

    Roman Slowinski

    2012-08-01

    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view, called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  6. Optimal decisions principles of programming

    CERN Document Server

    Lange, Oskar

    1971-01-01

    Optimal Decisions: Principles of Programming deals with all important problems related to programming. This book provides a general interpretation of the theory of programming based on the application of Lagrange multipliers, followed by a presentation of marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed. This text covers Koopmans' model of transportation, the geometric interpretation of the programming problem, and the nature of activity analysis. The solution of t…

  7. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes.

  8. Consultation system with knowledge representation by decision rules

    Energy Technology Data Exchange (ETDEWEB)

    Senne, E L.F.; Simoni, P O

    1982-04-01

    The use of decision rules in the representation of empirical knowledge supplied by application domain experts is discussed. Based on this representation, a system is described which employs artificial intelligence techniques to yield inferences within a specific domain. The three modules composing the system are described: the acquisition module, which allows the insertion of new rules; the diagnostic module, which uses the rules in the inference process; and the explanation module, which exhibits the reasons for each system action.

  9. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  10. Business Rules Definition for Decision Support System Using Matrix Grammar

    Directory of Open Access Journals (Sweden)

    Eva Zámečníková

    2016-06-01

    This paper deals with the formalization of business rules by formal grammars. In our work we focus on methods for high-frequency data processing. We process data using complex event processing (CEP) platforms, which allow processing a high volume of data in nearly real time. The decision-making process forms one level of processing in CEP, and business rules are used to describe it. For the formalization of business rules we chose a matrix grammar. The use of formal grammars is quite natural, as the structure of rules and their rewriting are very similar for business rules and for formal grammars. In addition, a matrix grammar allows simulating dependencies and correlations between the rules. The result of this work is a model for the data processing of a knowledge-based decision support system described by the rules of a formal grammar. This system will support decision making in CEP. This solution may contribute to the speedup of the decision-making process in complex event processing and also to the formal verification of these systems.
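    The defining feature of a matrix grammar, that productions are grouped into matrices which must fire together, can be sketched on a textbook example (the grammar below is a standard illustration, not the paper's business-rule grammar):

```python
# Productions grouped into matrices; every production of a matrix must
# be applied, in order, in a single derivation step. The coupled "grow"
# productions keep the three symbol counts equal, yielding a^n b^n c^n,
# a language no context-free grammar can generate.
MATRICES = {
    "start":  [("S", "ABC")],
    "grow":   [("A", "aA"), ("B", "bB"), ("C", "cC")],
    "finish": [("A", "a"), ("B", "b"), ("C", "c")],
}

def apply_matrix(word, matrix):
    """Apply every production of the matrix once, in order; fail if any
    left-hand side is absent (the whole matrix must fire or none of it)."""
    for lhs, rhs in matrix:
        if lhs not in word:
            raise ValueError(f"matrix not applicable: {lhs} missing")
        word = word.replace(lhs, rhs, 1)
    return word

word = "S"
for name in ["start", "grow", "finish"]:
    word = apply_matrix(word, MATRICES[name])
```

    This coupling is exactly what the paper exploits: firing one business rule can force a correlated rewrite elsewhere in the rule base.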

  11. Choosing the rules: distinct and overlapping frontoparietal representations of task rules for perceptual decisions.

    Science.gov (United States)

    Zhang, Jiaxiang; Kriegeskorte, Nikolaus; Carlin, Johan D; Rowe, James B

    2013-07-17

    Behavior is governed by rules that associate stimuli with responses and outcomes. Human and monkey studies have shown that rule-specific information is widely represented in the frontoparietal cortex. However, it is not known how establishing a rule under different contexts affects its neural representation. Here, we use event-related functional MRI (fMRI) and multivoxel pattern classification methods to investigate the human brain's mechanisms of establishing and maintaining rules for multiple perceptual decision tasks. Rules were either chosen by participants or specifically instructed to them, and the fMRI activation patterns representing rule-specific information were compared between these contexts. We show that frontoparietal regions differ in the properties of their rule representations during active maintenance before execution. First, rule-specific information maintained in the dorsolateral and medial frontal cortex depends on the context in which it was established (chosen vs specified). Second, rule representations maintained in the ventrolateral frontal and parietal cortex are independent of the context in which they were established. Furthermore, we found that the rule-specific coding maintained in anticipation of stimuli may change with execution of the rule: representations in context-independent regions remain invariant from maintenance to execution stages, whereas rule representations in context-dependent regions do not generalize to execution stage. The identification of distinct frontoparietal systems with context-independent and context-dependent task rule representations, and the distinction between anticipatory and executive rule representations, provide new insights into the functional architecture of goal-directed behavior.

  12. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider parameters characterizing both time (in the worst and average case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools, based on extensions of dynamic programming, to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to perform multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved them): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.
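The depth/nodes trade-off described above can be checked by brute force for small functions. The sketch below (Python; illustrative only, not the authors' Dagger-based tools) memoizes over subfunctions: it computes the minimum depth, the minimum number of nodes, and the minimum number of nodes among trees of minimum depth, so a function has a totally optimal tree exactly when the last two quantities coincide.

```python
from functools import lru_cache

def tree_costs(table):
    """table: dict mapping full input tuples -> 0/1 (a total Boolean function).
    Returns (min_depth, min_nodes, totally_optimal), where totally_optimal is
    True iff a single tree attains both minima simultaneously."""
    n = len(next(iter(table)))
    all_rows = frozenset(table.items())

    def split(rows, i, b):
        return frozenset((x, v) for x, v in rows if x[i] == b)

    def is_const(rows):
        return len({v for _, v in rows}) == 1

    def splits(rows):
        # only variables that actually partition the current set of rows
        for i in range(n):
            r0 = split(rows, i, 0)
            if 0 < len(r0) < len(rows):
                yield r0, split(rows, i, 1)

    @lru_cache(maxsize=None)
    def min_depth(rows):
        if is_const(rows):
            return 0
        return 1 + min(max(min_depth(r0), min_depth(r1)) for r0, r1 in splits(rows))

    @lru_cache(maxsize=None)
    def min_nodes(rows):
        if is_const(rows):
            return 1
        return 1 + min(min_nodes(r0) + min_nodes(r1) for r0, r1 in splits(rows))

    @lru_cache(maxsize=None)
    def min_nodes_at(rows, d):
        # fewest nodes over trees of depth at most d (inf if none exists)
        if is_const(rows):
            return 1
        if d == 0:
            return float("inf")
        return 1 + min(min_nodes_at(r0, d - 1) + min_nodes_at(r1, d - 1)
                       for r0, r1 in splits(rows))

    d, m = min_depth(all_rows), min_nodes(all_rows)
    return d, m, min_nodes_at(all_rows, d) == m
```

For XOR on two variables every tree must query both variables on every path, so the minima (depth 2, 7 nodes) are attained by one tree and the function is totally optimal.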

  13. Unanimity rule and organizational decision-making : a simulation model

    NARCIS (Netherlands)

    Romme, A.G.L.

    2004-01-01

    Unanimity rule is an important benchmark for evaluating outcomes of decisions in the social sciences. However, organizational researchers tend to ignore unanimous decision making, for example, because unanimity may be difficult to realize in large groups and may suffer from individual participants

  14. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.

  15. Transformative decision rules, permutability, and non-sequential framing of decision problems

    NARCIS (Netherlands)

    Peterson, M.B.

    2004-01-01

The concept of transformative decision rules provides a useful tool for analyzing what is often referred to as the `framing', or `problem specification', or `editing' phase of decision making. In the present study we analyze a fundamental aspect of transformative decision rules, viz. permutability. A

  16. The optimum decision rules for the oddity task.

    Science.gov (United States)

    Versfeld, N J; Dai, H; Green, D M

    1996-01-01

    This paper presents the optimum decision rule for an m-interval oddity task in which m-1 intervals contain the same signal and one is different or odd. The optimum decision rule depends on the degree of correlation among observations. The present approach unifies the different strategies that occur with "roved" or "fixed" experiments (Macmillan & Creelman, 1991, p. 147). It is shown that the commonly used decision rule for an m-interval oddity task corresponds to the special case of highly correlated observations. However, as is also true for the same-different paradigm, there exists a different optimum decision rule when the observations are independent. The relation between the probability of a correct response and d' is derived for the three-interval oddity task. Tables are presented of this relation for the three-, four-, and five-interval oddity task. Finally, an experimental method is proposed that allows one to determine the decision rule used by the observer in an oddity experiment.
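A quick way to see how the probability of a correct response grows with d' is a Monte Carlo simulation of the commonly used rule (pick the interval whose observation deviates most from the mean of all observations), assuming independent unit-variance Gaussian observations. This is a hedged sketch, not the paper's analytical derivation or its tables.

```python
import random

def oddity_pc(dprime, m=3, trials=20000, seed=1):
    """Monte Carlo estimate of P(correct) for an m-interval oddity task
    under the 'most deviant from the mean' decision rule."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        odd = rng.randrange(m)  # position of the odd interval
        xs = [rng.gauss(dprime if i == odd else 0.0, 1.0) for i in range(m)]
        mean = sum(xs) / m
        choice = max(range(m), key=lambda i: abs(xs[i] - mean))
        correct += (choice == odd)
    return correct / trials
```

At d' = 0 the estimate sits near chance (1/3 for three intervals) and rises monotonically with d', mirroring the shape of the tabulated relation.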

  17. Understanding Optimal Decision-making in Wargaming

    OpenAIRE

    Nesbitt, P; Kennedy, Q; Alt, JK; Fricker, RD; Whitaker, L; Yang, J; Appleget, JA; Huston, J; Patton, S

    2013-01-01

    Approved for public release; distribution is unlimited. This research aims to gain insight into optimal wargaming decision-making mechanisms using neurophysiological measures by investigating whether brain activation and visual scan patterns predict attention, perception, and/or decision-making errors through human-in-the-loop wargaming simulation experiments. We investigate whether brain activity and visual scan patterns can explain optimal wargaming decision making and its devel...

  18. Concurrent approach for evolving compact decision rule sets

    Science.gov (United States)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data that were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  19. Robust Management of Combined Heat and Power Systems via Linear Decision Rules

    DEFF Research Database (Denmark)

    Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik

    2014-01-01

    The heat and power outputs of Combined Heat and Power (CHP) units are jointly constrained. Hence, the optimal management of systems including CHP units is a multicommodity optimization problem. Problems of this type are stochastic, owing to the uncertainty inherent both in the demand for heat and...... linear decision rules to guarantee both tractability and a correct representation of the dynamic aspects of the problem. Numerical results from an illustrative example confirm the value of the proposed approach....

  20. Greedy Algorithm for the Construction of Approximate Decision Rules for Decision Tables with Many-Valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail; Zielosko, Beata

    2016-01-01

The paper is devoted to the study of a greedy algorithm for construction of approximate decision rules. This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider bounds on the precision of this algorithm relative to the length of rules. To illustrate the proposed approach, we study a problem of recognition of labels of points in the plane. The paper also contains results of experiments with modified decision tables from the UCI Machine Learning Repository.
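The greedy scheme can be sketched in a few lines of Python. For a chosen row, conditions of the form "attribute = value in that row" are added one at a time, each time picking the condition that removes the most "negative" rows (rows whose decision set does not contain the chosen decision); the loop stops once at most a fraction alpha of the initial negatives still match. Picking the smallest decision from the row's set and the tie-breaking order are illustrative assumptions, and a consistent table is assumed.

```python
def greedy_rule(rows, r, alpha=0.0):
    """rows: list of (attribute_tuple, set_of_decisions); r: row index.
    Returns (conditions, decision) for an alpha-approximate decision rule."""
    x, decisions = rows[r]
    d = min(decisions)                       # choose one admissible decision
    neg = {i for i, (_, s) in enumerate(rows) if d not in s}
    allowed = alpha * len(neg)               # negatives the rule may still cover
    used, conds = set(), []
    while len(neg) > allowed:
        # condition (attr == value in row r) separating the most negatives
        a = max((j for j in range(len(x)) if j not in used),
                key=lambda j: sum(1 for i in neg if rows[i][0][j] != x[j]))
        used.add(a)
        conds.append((a, x[a]))
        neg = {i for i in neg if rows[i][0][a] == x[a]}
    return conds, d
```

With alpha = 0 the rule is exact; raising alpha shortens rules at the cost of precision, which is the trade-off the bounds in the paper quantify.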

  1. Greedy Algorithm for the Construction of Approximate Decision Rules for Decision Tables with Many-Valued Decisions

    KAUST Repository

    Azad, Mohammad

    2016-10-20

The paper is devoted to the study of a greedy algorithm for construction of approximate decision rules. This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider bounds on the precision of this algorithm relative to the length of rules. To illustrate the proposed approach, we study a problem of recognition of labels of points in the plane. The paper also contains results of experiments with modified decision tables from the UCI Machine Learning Repository.

  2. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented here have already been implemented and are now a part of Dagger, a software system for construction/optimization of decision trees and decision rules. The main results deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the tree. Second, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of experiments, presented in the chapter, provide further insight. © 2014 Springer International Publishing Switzerland.

  3. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-01

In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented here have already been implemented and are now a part of Dagger, a software system for construction/optimization of decision trees and decision rules. The main results deal with two types of algorithms for computing relationships: first, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as the depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the tree. Second, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the tree. The results of experiments, presented in the chapter, provide further insight. © 2014 Springer International Publishing Switzerland.

  4. Reservoir Operating Rule Optimization for California's Sacramento Valley

    Directory of Open Access Journals (Sweden)

    Timothy Nelson

    2016-03-01

Full Text Available doi: http://dx.doi.org/10.15447/sfews.2016v14iss1art6 Reservoir operating rules for water resource systems are typically developed by combining intuition, professional discussion, and simulation modeling. This paper describes a joint optimization–simulation approach to develop preliminary economically-based operating rules for major reservoirs in California’s Sacramento Valley, based on optimized results from CALVIN, a hydro-economic optimization model. We infer strategic operating rules from the optimization model results, including storage allocation rules to balance storage among multiple reservoirs, and reservoir release rules to determine monthly release for individual reservoirs. Results show the potential utility of considering previous year type on water availability and various system and sub-system storage conditions, in addition to normal consideration of local reservoir storage, season, and current inflows. We create a simple simulation to further refine and test the derived operating rules. Optimization model results show particular insights for balancing the allocation of water storage among Shasta, Trinity, and Oroville reservoirs over drawdown and refill seasons, as well as some insights for release rules at major reservoirs in the Sacramento Valley. We also discuss the applicability and limitations of developing reservoir operation rules from optimization model results.

  5. Sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications

    KAUST Repository

    Alsolami, Fawaz

    2013-01-01

This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage, and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach. The results of experiments for decision tables from the UCI Machine Learning Repository are discussed. © 2013 Springer-Verlag.

  6. Portable Rule Extraction Method for Neural Network Decisions Reasoning

    Directory of Open Access Journals (Sweden)

    Darius PLIKYNAS

    2005-08-01

Full Text Available Neural network (NN) methods are sometimes useless in practical applications because they are not properly tailored to the particular market's needs. We focus hereinafter specifically on financial market applications. NNs have not gained full acceptance here yet. One of the main reasons is the "Black Box" problem (the lack of explanatory power behind NN decisions). There are some NN decision rule extraction methods, such as decompositional, pedagogical, or eclectic approaches, but they suffer from low portability of the rule extraction technique across various neural net architectures, a high level of granularity, algorithmic sophistication of the rule extraction technique, etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is demonstrated with a widespread MLP neural net (a common tool in the financial problem domain) and a SOM (for input data space clusterization). The performance of both nets is related and targeted through the iteration cycle by achieving the best match between the decision space fragments and the input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation on common financial benchmark problems is conducted with an appropriately prepared software solution.

  7. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  8. Sensitivity of a Clinical Decision Rule and Early Computed Tomography in Aneurysmal Subarachnoid Hemorrhage

    Directory of Open Access Journals (Sweden)

    Dustin G. Mark

    2015-10-01

Full Text Available Introduction: Application of a clinical decision rule for subarachnoid hemorrhage, in combination with cranial computed tomography (CT) performed within six hours of ictus (early cranial CT), may be able to reasonably exclude a diagnosis of aneurysmal subarachnoid hemorrhage (aSAH). This study’s objective was to examine the sensitivity of both early cranial CT and a previously validated clinical decision rule among emergency department (ED) patients with aSAH and a normal mental status. Methods: Patients were evaluated in the 21 EDs of an integrated health delivery system between January 2007 and June 2013. We identified by chart review a retrospective cohort of patients diagnosed with aSAH in the setting of a normal mental status and performance of early cranial CT. Variables comprising the SAH clinical decision rule (age >40, presence of neck pain or stiffness, headache onset with exertion, loss of consciousness at headache onset) were abstracted from the chart and assessed for inter-rater reliability. Results: One hundred fifty-five patients with aSAH met study inclusion criteria. The sensitivity of early cranial CT was 95.5% (95% CI [90.9-98.2]). The sensitivity of the SAH clinical decision rule was also 95.5% (95% CI [90.9-98.2]). Since all false negative cases for each diagnostic modality were mutually independent, the combined use of both early cranial CT and the clinical decision rule improved sensitivity to 100% (95% CI [97.6-100.0]). Conclusion: Neither early cranial CT nor the SAH clinical decision rule demonstrated ideal sensitivity for aSAH in this retrospective cohort. However, the combination of both strategies might optimize sensitivity for this life-threatening disease.
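The headline numbers can be reproduced with elementary binomial arithmetic. This is a hedged sketch: the study's exact interval method is not stated in the abstract, so the Wilson score interval and the Clopper-Pearson lower bound used below are assumptions. A sensitivity of 95.5% out of 155 cases corresponds to 148 true positives per modality, and with mutually independent (non-overlapping) false negatives the combined test detects all 155 cases, for which the lower confidence bound has a closed form.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def exact_lower_bound(n, alpha=0.05):
    """Clopper-Pearson lower 95% bound when all n of n cases are detected."""
    return (alpha / 2) ** (1 / n)

sens_each = 148 / 155                 # 95.5% sensitivity per modality
combined_lower = exact_lower_bound(155)  # lower bound for 155/155 detected
```

The computed values line up with the abstract: 148/155 rounds to 95.5%, the Wilson lower bound is near 90.9%, and the all-successes lower bound is near 97.6%.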

  9. Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study

    KAUST Repository

    Azad, Mohammad

    2013-10-04

    In this paper, we present three approaches for construction of decision rules for decision tables with many-valued decisions. We construct decision rules directly for rows of decision table, based on paths in decision tree, and based on attributes contained in a test (super-reduct). Experimental results for the data sets taken from UCI Machine Learning Repository, contain comparison of the maximum and the average length of rules for the mentioned approaches.

  10. Decision Rules, Trees and Tests for Tables with Many-valued Decisions–comparative Study

    KAUST Repository

    Azad, Mohammad; Zielosko, Beata; Moshkov, Mikhail; Chikalov, Igor

    2013-01-01

    In this paper, we present three approaches for construction of decision rules for decision tables with many-valued decisions. We construct decision rules directly for rows of decision table, based on paths in decision tree, and based on attributes contained in a test (super-reduct). Experimental results for the data sets taken from UCI Machine Learning Repository, contain comparison of the maximum and the average length of rules for the mentioned approaches.

  11. Learning Dispatching Rules for Scheduling: A Synergistic View Comprising Decision Trees, Tabu Search and Simulation

    Directory of Open Access Journals (Sweden)

    Atif Shahzad

    2016-02-01

Full Text Available A promising approach to effective shop scheduling that synergizes the benefits of combinatorial optimization, supervised learning, and discrete-event simulation is presented. Though dispatching rules are widely used by shop scheduling practitioners, only rules of ordinary performance are known; hence, dynamic generation of dispatching rules is desired to make them more effective under changing shop conditions. Meta-heuristics are able to perform quite well and carry more knowledge of the problem domain, however at the cost of prohibitive computational effort in real time. The primary purpose of this research lies in an offline extraction of this domain knowledge using decision trees to generate simple if-then rules that subsequently act as dispatching rules for scheduling in an online manner. We use a similarity index to identify parametric and structural similarity in problem instances in order to implicitly support the learning algorithm for effective rule generation, and a quality index for relative ranking of the dispatching decisions. Maximum lateness is used as the scheduling objective in a job shop scheduling environment.
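For intuition about dispatching rules as priority keys, here is a minimal single-machine sketch (Python; illustrative only, not the paper's job-shop setup) comparing shortest-processing-time (SPT) and earliest-due-date (EDD) dispatching on the maximum-lateness objective. On a single machine, EDD is known to minimize maximum lateness.

```python
def max_lateness(jobs, order):
    """jobs: list of (processing_time, due_date); order: sequence of job indices."""
    t, lmax = 0, float("-inf")
    for i in order:
        p, d = jobs[i]
        t += p                     # completion time of job i
        lmax = max(lmax, t - d)    # lateness of job i
    return lmax

def dispatch(jobs, rule):
    """Sequence jobs by a dispatching rule given as a priority key function."""
    return sorted(range(len(jobs)), key=lambda i: rule(*jobs[i]))

SPT = lambda p, d: p   # shortest processing time first
EDD = lambda p, d: d   # earliest due date first
```

Learned if-then rules in the paper play the same role as `SPT`/`EDD` here: they are priority functions evaluated online, with the learning step deciding which rule fits the current shop state.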

  12. Understanding Optimal Decision-Making in Wargaming

    Science.gov (United States)

    2013-10-01

beneficial outcomes from wargaming, one of which is a better understanding of the impact of decisions as a part of combat processes. However, using...under instrument flight rules (IFR) (Bellenkes et al., 1997; Katoh, 1997). Of note, eye-tracking technology also has been applied to investigate...Neuroscience, 7. Skinner, A., Berka, C., Ohara-Long, L., & Sebrechts, M. (2010). Impact of Virtual Environment Fidelity on Behavioral and

  13. Optimized reaction mechanism rate rules for ignition of normal alkanes

    KAUST Repository

    Cai, Liming

    2016-08-11

    The increasing demand for cleaner combustion and reduced greenhouse gas emissions motivates research on the combustion of hydrocarbon fuels and their surrogates. Accurate detailed chemical kinetic models are an important prerequisite for high fidelity reacting flow simulations capable of improving combustor design and operation. The development of such models for many new fuel components and/or surrogate molecules is greatly facilitated by the application of reaction classes and rate rules. Accurate and versatile rate rules are desirable to improve the predictive accuracy of kinetic models. A major contribution in the literature is the recent work by Bugler et al. (2015), which has significantly improved rate rules and thermochemical parameters used in kinetic modeling of alkanes. In the present study, it is demonstrated that rate rules can be used and consistently optimized for a set of normal alkanes including n-heptane, n-octane, n-nonane, n-decane, and n-undecane, thereby improving the predictive accuracy for all the considered fuels. A Bayesian framework is applied in the calibration of the rate rules. The optimized rate rules are subsequently applied to generate a mechanism for n-dodecane, which was not part of the training set for the optimized rate rules. The developed mechanism shows accurate predictions compared with published well-validated mechanisms for a wide range of conditions.
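The idea of a rate rule and its calibration can be illustrated with the modified Arrhenius form that underlies such rules. The toy calibration below fits only the pre-exponential factor by least squares with the temperature exponent and activation energy held fixed, a drastically simplified stand-in for the Bayesian framework used in the paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(T, A, n, Ea):
    """Modified Arrhenius rate expression k(T) = A * T^n * exp(-Ea / (R T))."""
    return A * T**n * math.exp(-Ea / (R * T))

def calibrate_prefactor(Ts, k_obs, n, Ea):
    """One-parameter rate-rule calibration: least-squares fit of ln A
    given fixed n and Ea (the fit is linear in ln A, so it is the mean residual)."""
    resid = [math.log(k) - (n * math.log(T) - Ea / (R * T)) for T, k in zip(Ts, k_obs)]
    return math.exp(sum(resid) / len(resid))
```

A full rate-rule optimization would adjust (A, n, Ea) for whole reaction classes against many ignition targets at once; this sketch only shows why the problem is well posed for a single class.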

  14. Optimization In Searching Daily Rule Curve At Mosul Regulating Reservoir, North Iraq Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Thair M. Al-Taiee

    2013-05-01

Full Text Available To obtain optimal operating rules for storage reservoirs, large numbers of simulation and optimization models have been developed over the past several decades, which vary significantly in their mechanisms and applications. Rule curves are guidelines for long-term reservoir operation. An efficient technique is required to find the optimal rule curves that can mitigate water shortage in long-term operation. The Genetic Algorithm (GA) technique, an optimization approach based on the mechanics of natural selection derived from the theory of natural evolution, was applied to predict the daily rule curve of the Mosul regulating reservoir in Iraq. Recorded daily inflows, outflows, and reservoir water levels for 19 years (1986-1990 and 1994-2007) were used in the developed model for assessing the optimal reservoir operation. The objective function is set to minimize the annual sum of squared deviations from the desired downstream release and the desired storage volume in the reservoir. The decision variables are releases, storage volume, water level, and outlet (demand) from the reservoir. The results of the GA model show good agreement with the actual rule curve and the designed rating curve of the reservoir. The simulated results show that GA-derived policies are promising and competitive and can be effectively used for daily reservoir operation, in addition to rational monthly operation, and for predicting the rating curve of reservoirs.
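A toy version of this GA setup (Python; illustrative only, not the Mosul model) encodes a vector of releases as a chromosome and minimizes the sum of squared deviations from a demand target, with a large penalty whenever the implied storage leaves its bounds:

```python
import random

def fitness(releases, inflows, demand, s0, smax):
    """Squared deviation from demand plus a penalty for storage violations."""
    s, cost = s0, 0.0
    for q, r, d in zip(inflows, releases, demand):
        s += q - r                      # mass balance: storage update
        cost += (r - d) ** 2
        if s < 0 or s > smax:
            cost += 1e6                 # infeasible storage trajectory
    return cost

def ga_rule_curve(inflows, demand, s0, smax, pop=40, gens=300, seed=7):
    """Elitist GA with one-point crossover and Gaussian mutation."""
    rng = random.Random(seed)
    n = len(inflows)
    lo, hi = 0.0, max(demand) * 2
    popl = [[rng.uniform(lo, hi) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda ind: fitness(ind, inflows, demand, s0, smax))
        elite = popl[: pop // 4]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.5:              # Gaussian mutation
                j = rng.randrange(n)
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.5)))
            children.append(child)
        popl = elite + children
    return min(popl, key=lambda ind: fitness(ind, inflows, demand, s0, smax))
```

The real model works at daily resolution over 19 years and also tracks levels and outlets; the point of the sketch is only the encoding (releases as genes) and the penalized squared-deviation objective.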

  15. The Use of a Modification of the Hurwicz’s Decision Rule in Multicriteria Decision Making under Complete Uncertainty

    Directory of Open Access Journals (Sweden)

    Helena Gaspars-Wieloch

    2014-12-01

Full Text Available The paper concerns multicriteria decision making under uncertainty with scenario planning. This topic is explored by many researchers because almost all real-world decision problems have multiple conflicting criteria and a deterministic criteria evaluation is often impossible (e.g. mergers and acquisitions, new product development). We propose two procedures for uncertain multi-objective optimization (for dependent and independent criteria matrices), which are based on the SAPO method, a modification of Hurwicz’s rule for one-criterion problems recently presented in another paper. The new approaches take into account the decision maker’s preference structure and attitude towards risk. They consider the frequency and the level of extreme evaluations and generate logical rankings for symmetric and asymmetric distributions. The application of the suggested tool is illustrated with an example of marketing strategy selection.
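For reference, the classical one-criterion Hurwicz rule that SAPO modifies scores each alternative as a convex combination of its best and worst payoffs across scenarios, with the optimism coefficient alpha encoding the decision maker's attitude towards risk. The payoff matrix below is an invented example, not taken from the paper.

```python
def hurwicz(payoff_rows, alpha):
    """Classical Hurwicz rule: for each alternative (row of payoffs over
    scenarios), score = alpha * best payoff + (1 - alpha) * worst payoff.
    Returns (index of best alternative, list of scores)."""
    scores = [alpha * max(row) + (1 - alpha) * min(row) for row in payoff_rows]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores
```

Varying alpha flips the ranking between a risky high-spread alternative and a safe low-spread one, which is exactly the sensitivity to extreme evaluations that the SAPO modification refines.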

  16. International Conference on Optimization and Decision Science

    CERN Document Server

    Sterle, Claudio

    2017-01-01

    This proceedings volume highlights the state-of-the-art knowledge related to optimization, decisions science and problem solving methods, as well as their application in industrial and territorial systems. It includes contributions tackling these themes using models and methods based on continuous and discrete optimization, network optimization, simulation and system dynamics, heuristics, metaheuristics, artificial intelligence, analytics, and also multiple-criteria decision making. The number and the increasing size of the problems arising in real life require mathematical models and solution methods adequate to their complexity. There has also been increasing research interest in Big Data and related challenges. These challenges can be recognized in many fields and systems which have a significant impact on our way of living: design, management and control of industrial production of goods and services; transportation planning and traffic management in urban and regional areas; energy production and exploit...

  17. Biometric image enhancement using decision rule based image fusion techniques

    Science.gov (United States)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

Introducing biometrics into information systems may result in considerable benefits. Most researchers have confirmed that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a critical issue. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The resulting images, after applying the decision rule based image fusion technique, are evaluated and analyzed with respect to their entropy levels and root mean square error.

  18. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor

    2017-10-19

    This paper is devoted to the study of bi-criteria optimization problems for decision trees. We consider different cost functions such as depth, average depth, and number of nodes. We design algorithms that allow us to construct the set of Pareto optimal points (POPs) for a given decision table and the corresponding bi-criteria optimization problem. These algorithms are suitable for investigation of medium-sized decision tables. We discuss three examples of applications of the created tools: the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees, and comparison of different greedy algorithms for decision tree construction as single- and bi-criteria optimization algorithms.
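Given candidate trees summarized as (depth, number of nodes) pairs, the set of Pareto optimal points can be extracted with a standard sweep; this is a generic sketch of the POP concept, not the paper's dynamic-programming construction over decision tables.

```python
def pareto_front(points):
    """Return the Pareto-optimal points when minimizing both coordinates.
    Sort by the first coordinate, then keep points that strictly improve
    the best second coordinate seen so far."""
    front, best2 = [], float("inf")
    for a, b in sorted(set(points)):
        if b < best2:            # not dominated by any earlier point
            front.append((a, b))
            best2 = b
    return front
```

In a bi-criteria study of decision trees, each point on the returned front is a tree that cannot be improved in depth without paying in node count, or vice versa.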

  19. Incorporation of systematic uncertainties in statistical decision rules

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1994-02-01

The influence of systematic uncertainties on statistical hypothesis testing is an underexposed subject. Systematic uncertainties cannot be incorporated in hypothesis tests, but they deteriorate the performance of these tests. A wrong treatment of systematic uncertainties in verification applications in safeguards leads to false assessment of the strength of the safeguards measure, and thus undermines the safeguards system. The effects of systematic uncertainties on decision errors in hypothesis testing are analyzed quantitatively for an example from the safeguards practice (LEU-HEU verification of UF6 enrichment in centrifuge enrichment plants). It is found that the only proper way to tackle systematic uncertainties is reduction to sufficiently low levels; criteria for these are proposed. Although conclusions were obtained from study of a single practical application, it is believed that they hold generally: for all sources of systematic uncertainties, all statistical decision rules, and all applications. (orig./HP)

  20. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  1. Optimization of conventional rule curves coupled with hedging rules for reservoir operation

    DEFF Research Database (Denmark)

    Taghian, Mehrdad; Rosbjerg, Dan; Haghighi, Ali

    2014-01-01

    As a common approach to reservoir operating policies, water levels at the end of each time interval should be kept at or above the rule curve. In this study, the policy is captured using rationing of the target yield to reduce the intensity of severe water shortages. For this purpose, a hybrid...... to achieve the optimal water allocation and the target storage levels for reservoirs. As a case study, a multipurpose, multireservoir system in southern Iran is selected. The results show that the model has good performance in extracting the optimum policy for reservoir operation under both normal...... model is developed to optimize simultaneously both the conventional rule curve and the hedging rule. In the compound model, a simple genetic algorithm is coupled with a simulation program, including an inner linear programming algorithm. In this way, operational policies are imposed by priority concepts...

  2. Second Order Optimality in Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2017-01-01

Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords : Markov decision chains * second order optimality * optimality conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf

  3. 46 CFR 201.3 - Authentication of rules, orders, determinations and decisions of the Administration.

    Science.gov (United States)

    2010-10-01

    46 CFR 201.3, Shipping (2010-10-01): Authentication of rules, orders, determinations and decisions of the Administration. Section 201.3, MARITIME ADMINISTRATION, DEPARTMENT OF... All rules...

  4. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of “elite decision making.” In the new algorithm, the good information captured in the current global best and second-best solutions is used to generate new solutions, following a probability rule. A generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on standard benchmark optimization problems, including continuous-variable and integer-variable minimization problems from the literature. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants.
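
The update scheme described above can be sketched in a few lines. This is a toy illustration under assumed parameters, not the authors' implementation: the function name `elite_harmony_search`, the elite-selection probability `p_elite`, and the pitch-adjustment width `bw` are all invented for the sketch.

```python
import random

def elite_harmony_search(objective, bounds, memory_size=10, iters=2000,
                         p_elite=0.7, bw=0.05, seed=0):
    """Toy 'elite decision making' harmony search: new components are
    drawn from the best or second-best harmony with probability p_elite
    (plus a small pitch adjustment), otherwise sampled uniformly; the
    worst harmony is replaced only if the candidate improves on it."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(memory_size)]
    for _ in range(iters):
        memory.sort(key=objective)
        best, second = memory[0], memory[1]
        candidate = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < p_elite:
                # draw the component from one of the two elite harmonies
                x = rng.choice((best, second))[d] + rng.uniform(-bw, bw) * (hi - lo)
                candidate.append(min(hi, max(lo, x)))
            else:
                candidate.append(rng.uniform(lo, hi))
        if objective(candidate) < objective(memory[-1]):
            memory[-1] = candidate  # replace the worst harmony
    return min(memory, key=objective)

# minimize the 2-D sphere function on [-5, 5]^2
sol = elite_harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

On this toy sphere problem the elitist replacement rule makes the memory improve monotonically, so the returned solution ends up close to the origin.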

  5. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values

  6. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-01-01

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  7. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-07-10

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  8. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    The considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported experimental results show that the algorithms based on inhibitory

  9. Multi-stage optimization of decision and inhibitory trees for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad

    2017-06-16

    We study problems of optimization of decision and inhibitory trees for decision tables with many-valued decisions. As cost functions, we consider depth, average depth, number of nodes, and number of terminal/nonterminal nodes in trees. Decision tables with many-valued decisions (multi-label decision tables) are often more accurate models for real-life data sets than usual decision tables with single-valued decisions. Inhibitory trees can sometimes capture more information from decision tables than decision trees. In this paper, we create dynamic programming algorithms for multi-stage optimization of trees relative to a sequence of cost functions. We apply these algorithms to prove the existence of totally optimal (simultaneously optimal relative to a number of cost functions) decision and inhibitory trees for some modified decision tables from the UCI Machine Learning Repository.

  10. Multi-stage optimization of decision and inhibitory trees for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2017-01-01

    We study problems of optimization of decision and inhibitory trees for decision tables with many-valued decisions. As cost functions, we consider depth, average depth, number of nodes, and number of terminal/nonterminal nodes in trees. Decision tables with many-valued decisions (multi-label decision tables) are often more accurate models for real-life data sets than usual decision tables with single-valued decisions. Inhibitory trees can sometimes capture more information from decision tables than decision trees. In this paper, we create dynamic programming algorithms for multi-stage optimization of trees relative to a sequence of cost functions. We apply these algorithms to prove the existence of totally optimal (simultaneously optimal relative to a number of cost functions) decision and inhibitory trees for some modified decision tables from the UCI Machine Learning Repository.

  11. Optimal condition-based maintenance decisions for systems with dependent stochastic degradation of components

    International Nuclear Information System (INIS)

    Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.

    2014-01-01

    Components in engineered systems are subject to stochastic deterioration due to operating environmental conditions and uncertainty in material properties. The components need to be inspected and possibly replaced, based on preventive or failure replacement criteria, to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on the optimal maintenance decisions. We use a copula to model the dependent stochastic degradation of components, and formulate the optimal decision problem based on the minimum expected cost rule and stochastic dominance rules. The latter are used to cope with the decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples.

  12. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constant if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and follow a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
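
The two priority rules themselves are simple sorts. A minimal sketch, restricted to the deterministic case with breakdowns omitted (the helper names are invented for illustration):

```python
def lpt_order(times):
    """Longest Processing Time first: the rule shown in the paper to
    minimize the expected makespan under its breakdown model."""
    return sorted(times, reverse=True)

def spt_order(times):
    """Shortest Processing Time first: optimal for the expected total
    completion time."""
    return sorted(times)

def total_completion_time(sequence):
    """Sum of job completion times on one machine, no breakdowns."""
    total = clock = 0
    for t in sequence:
        clock += t
        total += clock
    return total

jobs = [4, 1, 3, 2]
# SPT also minimizes total completion time in the deterministic case,
# so it can never do worse than LPT on this measure
```

For `jobs = [4, 1, 3, 2]`, SPT sequences the jobs as 1, 2, 3, 4 with completion times 1, 3, 6, 10, for a total of 20, versus 30 under LPT.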

  13. Interaction rules underlying group decisions in homing pigeons

    Science.gov (United States)

    Pettit, Benjamin; Perna, Andrea; Biro, Dora; Sumpter, David J. T.

    2013-01-01

    Travelling in groups gives animals opportunities to share route information by following cues from each other's movement. The outcome of group navigation will depend on how individuals respond to each other within a flock, school, swarm or herd. Despite the abundance of modelling studies, only recently have researchers developed techniques to determine the interaction rules among real animals. Here, we use high-resolution GPS (global positioning system) tracking to study these interactions in pairs of pigeons flying home from a familiar site. Momentary changes in velocity indicate alignment with the neighbour's direction, as well as attraction or avoidance depending on distance. Responses were stronger when the neighbour was in front. From the flocking behaviour, we develop a model to predict features of group navigation. Specifically, we show that the interactions between pigeons stabilize a side-by-side configuration, promoting bidirectional information transfer and reducing the risk of separation. However, if one bird gets in front it will lead directional choices. Our model further predicts, and observations confirm, that a faster bird (as measured from solo flights) will fly slightly in front and thus dominate the choice of homing route. Our results explain how group decisions emerge from individual differences in homing flight behaviour. PMID:24068173

  14. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes a tool which allows us, for relatively small decision tables, to perform sequential optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters and the number of optimal decision trees. © 2010 Springer-Verlag Berlin Heidelberg.

  15. Proposal optimization in nuclear accident emergency decision based on IAHP

    International Nuclear Information System (INIS)

    Xin Jing

    2007-01-01

    On the basis of a multi-layer structure for nuclear accident emergency decisions, several decision objectives are analyzed jointly, and an optimization model for emergency decision proposals based on the interval analytic hierarchy process (IAHP) is proposed. The model quantifies comparisons among several emergency decision proposals and selects the optimum one, thereby addressing the uncertainty and fuzziness of expert judgments in nuclear accident emergency decisions. A case study shows that the optimization result is more reasonable, objective and reliable than subjective judgment alone, and it can serve as a decision reference in nuclear accident emergencies. (authors)

  16. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
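
A diffusion process with collapsing decision boundaries, of the kind the paper analyzes, can be simulated directly. The linear collapse schedule and all parameter values below are illustrative assumptions for the sketch, not the paper's derived optimal policy:

```python
import random

def ddm_trial(drift, noise=1.0, dt=0.001, b0=1.0, collapse=0.5, seed=None):
    """One drift-diffusion trial with boundaries that collapse linearly
    over time, b(t) = max(0, b0 - collapse * t). Returns (choice, rt)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while True:
        bound = max(0.0, b0 - collapse * t)
        if x >= bound:
            return +1, t   # upper boundary crossed
        if x <= -bound:
            return -1, t   # lower boundary crossed
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt

# with positive drift most trials end at the upper boundary, and every
# trial terminates by t = b0 / collapse, when the boundaries have met
trials = [ddm_trial(2.0, seed=i) for i in range(200)]
```

The collapse guarantees bounded reaction times: once the two boundaries meet at `b0 / collapse` seconds, whichever side the accumulator is on determines the choice.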

  17. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes a tool which allows us, for relatively small decision tables, to perform sequential optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters

  18. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic

  19. Integrated Case Based and Rule Based Reasoning for Decision Support

    OpenAIRE

    Eshete, Azeb Bekele

    2009-01-01

    This project is a continuation of my specialization project, which focused on studying theoretical concepts related to the case-based reasoning method, the rule-based reasoning method, and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial performance improvement over the individual methods. Verdande Technology As wants to try integrating the rule-based reasoning method with an existing case-based system. This project focu...

  20. Implementation of adapted PECARN decision rule for children with minor head injury in the pediatric emergency department.

    Science.gov (United States)

    Bressan, Silvia; Romanato, Sabrina; Mion, Teresa; Zanconato, Stefania; Da Dalt, Liviana

    2012-07-01

    Of the currently published clinical decision rules for the management of minor head injury (MHI) in children, the Pediatric Emergency Care Applied Research Network (PECARN) rule, derived and validated in a large multicenter prospective study cohort with high methodologic standards, appears to be the best clinical decision rule to accurately identify children at very low risk of clinically important traumatic brain injuries (ciTBI) in the pediatric emergency department (PED). This study describes the implementation of an adapted version of the PECARN rule in a tertiary care academic PED in Italy and evaluates implementation success, in terms of medical staff adherence and satisfaction, as well as its effects on clinical practice. The adapted PECARN decision rule algorithms for children (one for those younger than 2 years and one for those older than 2 years) were actively implemented in the PED of Padova, Italy, for a 6-month testing period. Adherence and satisfaction of the medical staff with the new rule were calculated. Data from 356 visits for MHI during PECARN rule implementation and those of 288 patients attending the PED for MHI in the previous 6 months were compared for changes in computed tomography (CT) scan rate, ciTBI rate (defined as death, neurosurgery, intubation for longer than 24 hours, or hospital admission for at least two nights associated with TBI) and return visits for symptoms or signs potentially related to MHI. The safety and efficacy of the adapted PECARN rule in clinical practice were also calculated. Adherence to the adapted PECARN rule was 93.5%. The percentage of medical staff satisfied with the new rule, in terms of usefulness and ease of use for rapid decision-making, was significantly higher (96% vs. 51%). The safety of the adapted PECARN rule in clinical practice was 100% (95% CI=36.8 to 100; three of three patients with ciTBI received a CT scan at first evaluation), while its efficacy was 92.3% (95% CI=89 to 95; 326 of 353 patients without ciTBI).

  1. Humans Optimize Decision-Making by Delaying Decision Onset

    Science.gov (United States)

    Teichert, Tobias; Ferrera, Vincent P.; Grinband, Jack

    2014-01-01

    Why do humans make errors on seemingly trivial perceptual decisions? It has been shown that such errors occur in part because the decision process (evidence accumulation) is initiated before selective attention has isolated the relevant sensory information from salient distractors. Nevertheless, it is typically assumed that subjects increase accuracy by prolonging the decision process rather than delaying decision onset. To date it has not been tested whether humans can strategically delay decision onset to increase response accuracy. To address this question we measured the time course of selective attention in a motion interference task using a novel variant of the response signal paradigm. Based on these measurements we estimated time-dependent drift rate and showed that subjects should in principle be able to trade speed for accuracy very effectively by delaying decision onset. Using the time-dependent estimate of drift rate we show that subjects indeed delay decision onset in addition to raising response threshold when asked to stress accuracy over speed in a free reaction version of the same motion-interference task. These findings show that decision onset is a critical aspect of the decision process that can be adjusted to effectively improve decision accuracy. PMID:24599295

  2. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
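
The maximin fitness used above to identify the leading particle is cheap to compute. A sketch for a minimization problem (the function name and the toy scores are assumptions made for illustration):

```python
def maximin_fitness(scores):
    """Balling's maximin fitness for a set of candidate solutions under
    minimization: F(i) = max over j != i of min over objectives k of
    (f_k(i) - f_k(j)). F(i) < 0 exactly when solution i is non-dominated,
    so the current leader can be identified without any costly ranking
    or niching machinery."""
    n, m = len(scores), len(scores[0])
    return [
        max(min(scores[i][k] - scores[j][k] for k in range(m))
            for j in range(n) if j != i)
        for i in range(n)
    ]

# two non-dominated particles and one dominated particle
scores = [(1.0, 3.0), (3.0, 1.0), (4.0, 4.0)]
fit = maximin_fitness(scores)
leader = min(range(len(fit)), key=fit.__getitem__)
```

Here the two trade-off points receive negative fitness (-2.0 each) while the dominated point receives a positive value, so the leader is drawn from the Pareto set.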

  3. Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints

    Science.gov (United States)

    Dongnan, L.; Wan, W.; Zhao, J.

    2017-12-01

    Multi-objective reservoir operation is increasingly asked to consider environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, so it is difficult to direct reservoir releases accordingly. In an attempt to weigh the benefit of environmental flow against economic achievement, a two-objective reservoir optimization model is developed in which all 33 hydrologic parameters of the IHA are explicitly formulated as constraints. The economic benefit is defined by Hydropower Production (HP), while the environmental-flow benefit is expressed as an Eco-Index (EI) combining 5 of the 33 IHA parameters chosen by principal component analysis. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located upstream in the Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains the best joint economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of utilizable reservoir capacity reaches 0.8, both HP and EI attain acceptable values. Finally, to make this model more convenient for everyday practice, a simplified operation rule curve is extracted.
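
The HP-EI trade-off that such scenarios explore can be mimicked with a toy grid search that extracts a Pareto front. The two objective formulas below are invented stand-ins for illustration, not the paper's model:

```python
def pareto_front(points):
    """Non-dominated subset of 2-objective points (both maximized)."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

# toy trade-off: hydropower (HP) grows with the release fraction, while
# the eco-index (EI) peaks when release matches an assumed natural-flow
# target of 0.4
candidates = []
for i in range(11):
    release = i / 10
    hp = round(release, 2)                 # toy HP objective
    ei = round(1 - abs(release - 0.4), 2)  # toy EI objective
    candidates.append((hp, ei))
front = pareto_front(candidates)
```

Release fractions below the natural-flow target are dominated (raising the release improves both objectives), so only the trade-off branch from 0.4 upward survives in the front.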

  4. Future Costs, Fixed Healthcare Budgets, and the Decision Rules of Cost-Effectiveness Analysis.

    Science.gov (United States)

    van Baal, Pieter; Meltzer, David; Brouwer, Werner

    2016-02-01

    Life-saving medical technologies result in additional demand for health care due to increased life expectancy. However, most economic evaluations do not include all medical costs that may result from this additional demand and include only future costs of related illnesses. Although there has been much debate regarding the extent to which future costs should be included under a societal perspective, the appropriate role of future medical costs under the widely adopted but narrower healthcare perspective has been neglected. Using a theoretical model, we demonstrate that optimal decision rules for cost-effectiveness analyses assuming fixed healthcare budgets dictate that future costs of both related and unrelated medical care should be included. The practical relevance of including the costs of future unrelated medical care is illustrated using the example of transcatheter aortic valve implantation. Our findings suggest that guidelines should prescribe inclusion of these costs. Copyright © 2014 John Wiley & Sons, Ltd.
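
The effect of including future unrelated medical costs on a threshold-based adoption decision can be shown with toy numbers. All figures below are hypothetical illustrations, not values from the paper:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio (cost per QALY gained)."""
    return delta_cost / delta_effect

# hypothetical intervention: 1.5 QALYs gained for 30,000 in direct costs,
# plus 12,000 of future *unrelated* medical care during the added life-years
threshold = 25_000                        # willingness to pay per QALY
without_future = icer(30_000, 1.5)        # excludes future unrelated costs
with_future = icer(30_000 + 12_000, 1.5)  # includes them

# the adoption decision flips once future unrelated costs enter the numerator
adopt_without = without_future <= threshold
adopt_with = with_future <= threshold
```

This is exactly the kind of reversal that makes the inclusion rule consequential under a fixed budget: the same intervention passes the threshold at 20,000 per QALY but fails at 28,000 per QALY.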

  5. Decision fusion recognition based on modified evidence rule

    Institute of Scientific and Technical Information of China (English)

    黎湘; 刘永祥; 付耀文; 庄钊文

    2001-01-01

    A modified evidence combination rule with a combination parameter λ is proposed to solve some problems in Dempster-Shafer (D-S) theory by considering the correlation and complementarity among the evidences, as well as the size and intersection of the subsets in the evidence. It yields reasonable results even when the evidences are conflicting. Applying this rule to a real infrared/millimetre-wave fusion system, a satisfactory result has been obtained.
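
For context, the unmodified Dempster rule that the λ-parameterized variant adjusts can be sketched as follows. This is the standard baseline only; the paper's modified rule is not reproduced here:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classic Dempster's rule of combination over mass functions whose
    focal elements are frozensets: products of masses on intersecting
    sets are accumulated, conflicting mass is discarded, and the result
    is renormalized by 1 - K (K = total conflict)."""
    combined = {}
    conflict = 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.6, frozenset({"a", "b"}): 0.4}
m2 = {B: 0.3, frozenset({"a", "b"}): 0.7}
fused = dempster_combine(m1, m2)
```

The renormalization by 1 - K is exactly what behaves badly under high conflict, which motivates modified rules such as the λ-parameterized one above.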

  6. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems, coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules, and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to

  7. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and a pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree-of-exchange-rate-flexibility parameter was found to be low...

  8. Decision rules and group rationality: cognitive gain or standstill?

    NARCIS (Netherlands)

    Curseu, P.L.; Jansen, R.J.G.; Chappin, M.M.H.

    2013-01-01

    Recent research in group cognition points towards the existence of collective cognitive competencies that transcend individual group members’ cognitive competencies. Since rationality is a key cognitive competence for group decision making, and group cognition emerges from the coordination of

  9. Optimization of tactical decisions: subjective and objective conditionality

    Directory of Open Access Journals (Sweden)

    Олег Юрійович Булулуков

    2016-06-01

    The article investigates the «human» and «objective» factors that influence the optimization of tactical decisions. Attention is focused on how the information obtained about the circumstances of a crime depends on the investigator making correct decisions, and the connection between the efficiency of an investigation and the adoption of optimal tactical decisions is emphasized. The stated problem has not been sufficiently investigated in the literature: separate aspects are reflected in the works of D. А. Solodov, S. Yu. Yakushin and others, while related questions on the optimization of investigations and investigators' decision-making appear in the works of R. S. Belkin, V. А. Juravel, V. Е. Konovalova, V. L. Sinchuk, B. V. Shur and V. Yu. Shepitko. The aim of the article is to define the term «optimization» as it applies to tactical decisions in criminalistics, and to consider the influence of human and objective factors on the adoption of optimal decisions in crime investigation. The etymology of the term «optimization» is considered and its interpretation with respect to tactical decisions is given. The human and objective factors conditioning the optimization of tactical decisions are identified; the latter contributes to the effectiveness of crime-investigation tactics. In considering the «human factors», attention is drawn to the «psychological traps» that can occur in decision-making, among them: anchoring; the status quo; sunk costs; wishful thinking; incorrect formulation; overconfidence; over-insurance; and the persistence of memory. The absence of an unambiguous list of the «objective factors» influencing the choice of a tactical decision is noted, and different understandings of «tactical risk» as a factor influencing the adoption of tactical decisions are discussed. The analysis of the «human» and «objective» factors influencing

  10. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    Science.gov (United States)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    Increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Because of inconsistent training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations, whereas object-oriented image classification shows great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used for this purpose. First, standardized atmospheric correction and spectrally high-fidelity texture-feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Second, we performed a multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain appropriate parameters; using a top-down region-merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for different types of features was confirmed. The segmented objects were then used as classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values. The area, length and tightness of image objects, shape rules, and texture features such as the mean, variance and entropy of image objects were used as classification features of the training samples. Based on reference images and field-survey sampling points, typical training samples were selected uniformly and randomly for each type of ground object, and the value ranges of the spectral, texture and spatial characteristics of each feature type in each feature layer were used to create the decision tree repository. Finally, with the help of high-resolution reference images, the

  11. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
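
The dynamic-programming style behind such tree optimization can be illustrated on the simplest cost function, depth: solve every subtable once, memoize, and take the best attribute to test at each step. This is a miniature sketch for exact (not α-approximate) trees; the function name and table encoding are assumptions made for the example:

```python
from functools import lru_cache

def min_tree_depth(rows):
    """Minimum depth of an exact decision tree for a decision table,
    computed by dynamic programming over subtables. Each row is a pair
    (attribute-value tuple, decision)."""
    n_attrs = len(rows[0][0])

    @lru_cache(maxsize=None)
    def depth(subtable):
        decisions = {d for _, d in subtable}
        if len(decisions) <= 1:
            return 0  # degenerate subtable: a single decision, no test needed
        best = float("inf")
        for a in range(n_attrs):
            values = {r[0][a] for r in subtable}
            if len(values) < 2:
                continue  # this attribute does not split the subtable
            # testing attribute a costs 1 plus the worst branch depth
            worst = max(depth(tuple(r for r in subtable if r[0][a] == v))
                        for v in values)
            best = min(best, 1 + worst)
        return best

    return depth(tuple(rows))

# XOR-like table: no single attribute decides, so the optimal depth is 2
table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

The same recursion skeleton generalizes to other cost functions (number of nodes, total path length) by changing how branch costs are aggregated.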

  12. A simple threshold rule is sufficient to explain sophisticated collective decision-making.

    Directory of Open Access Journals (Sweden)

    Elva J H Robinson

    Full Text Available Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.
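    The quality-dependent threshold rule described above can be sketched in a few lines. The threshold, noise level and scout count below are invented for illustration, not taken from the paper; the point is only that individual accept/reject decisions, aggregated, let the group "compare" nests no individual ever compared:

    ```python
    import random

    def scout_decision(nest_quality, threshold, noise_sd=0.1):
        """A scout recruits to a nest iff its (noisily) perceived quality
        meets the scout's internal threshold -- no between-nest comparison."""
        perceived = nest_quality + random.gauss(0, noise_sd)
        return perceived >= threshold

    def colony_choice(qualities, n_scouts=100, threshold=0.5, seed=1):
        """Count recruiters per nest; the nest attracting the most recruiters
        wins, so the group compares options collectively."""
        random.seed(seed)
        votes = [sum(scout_decision(q, threshold) for _ in range(n_scouts))
                 for q in qualities]
        return votes.index(max(votes)), votes

    best, votes = colony_choice([0.3, 0.45, 0.7])
    ```

    With these invented parameters the highest-quality nest (0.7) attracts by far the most recruiters, even though every individual decision was a bare threshold test.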

  13. Understanding Decision-Making, Communication Rules, and Communication Satisfaction as Culture: Implications for Organizational Effectiveness.

    Science.gov (United States)

    Shockley-Zalabak, Pamela

    A study of decision making processes and communication rules, in a corporate setting undergoing change as a result of organizational ineffectiveness, examined whether (1) decisions about formal communication reporting systems were linked to management assumptions about technical creativity/effectiveness, (2) assumptions about…

  14. 48 CFR 6101.27 - Relief from decision or order [Rule 27].

    Science.gov (United States)

    2010-10-01

    ... order [Rule 27]. (a) Grounds. The Board may relieve a party from the operation of a final decision or... discovered, even through due diligence; (2) Justifiable or excusable mistake, inadvertence, surprise, or neglect; (3) Fraud, misrepresentation, or other misconduct of an adverse party; (4) The decision has been...

  15. A simple threshold rule is sufficient to explain sophisticated collective decision-making.

    Science.gov (United States)

    Robinson, Elva J H; Franks, Nigel R; Ellis, Samuel; Okuda, Saki; Marshall, James A R

    2011-01-01

    Decision-making animals can use slow-but-accurate strategies, such as making multiple comparisons, or opt for simpler, faster strategies to find a 'good enough' option. Social animals make collective decisions about many group behaviours including foraging and migration. The key to the collective choice lies with individual behaviour. We present a case study of a collective decision-making process (house-hunting ants, Temnothorax albipennis), in which a previously proposed decision strategy involved both quality-dependent hesitancy and direct comparisons of nests by scouts. An alternative possible decision strategy is that scouting ants use a very simple quality-dependent threshold rule to decide whether to recruit nest-mates to a new site or search for alternatives. We use analytical and simulation modelling to demonstrate that this simple rule is sufficient to explain empirical patterns from three studies of collective decision-making in ants, and can account parsimoniously for apparent comparison by individuals and apparent hesitancy (recruitment latency) effects, when available nests differ strongly in quality. This highlights the need to carefully design experiments to detect individual comparison. We present empirical data strongly suggesting that best-of-n comparison is not used by individual ants, although individual sequential comparisons are not ruled out. However, by using a simple threshold rule, decision-making groups are able to effectively compare options, without relying on any form of direct comparison of alternatives by individuals. This parsimonious mechanism could promote collective rationality in group decision-making.

  16. Ant-based extraction of rules in simple decision systems over ontological graphs

    Directory of Open Access Journals (Sweden)

    Pancerz Krzysztof

    2015-06-01

    Full Text Available In the paper, the problem of extraction of complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with the dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, searching the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.

  17. Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.

    Science.gov (United States)

    Ell, Shawn W

    2013-12-01

    The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.

  18. Geriatric Fever Score: a new decision rule for geriatric care.

    Directory of Open Access Journals (Sweden)

    Min-Hsien Chung

    Full Text Available Evaluating geriatric patients with fever is time-consuming and challenging. We investigated independent mortality predictors of geriatric patients with fever and developed a prediction rule for emergency care, critical care, and geriatric care physicians to classify patients into mortality risk and disposition groups. Consecutive geriatric patients (≥65 years old) visiting the emergency department (ED) of a university-affiliated medical center between June 1 and July 21, 2010, were enrolled when they met the criteria of fever: a tympanic temperature ≥37.2°C or a baseline temperature elevated ≥1.3°C. Thirty-day mortality was the primary endpoint. Internal validation with bootstrap re-sampling was done. Three hundred thirty geriatric patients were enrolled. We found three independent mortality predictors: Leukocytosis (WBC >12,000 cells/mm³), Severe coma (GCS ≤8), and Thrombocytopenia (platelets <150×10³/mm³) (LST). After assigning weights to each predictor, we developed a Geriatric Fever Score that stratifies patients into two mortality-risk and disposition groups: low (4.0%, 95% CI: 2.3-6.9%): a general ward or treatment in the ED then discharge; and high (30.3%, 95% CI: 17.4-47.3%): consider the intensive care unit. The area under the curve for the rule was 0.73. We found that the Geriatric Fever Score is a simple and rapid rule for predicting 30-day mortality and classifying mortality risk and disposition in geriatric patients with fever, although external validation should be performed to confirm its usefulness in other clinical settings. It might help preserve medical resources for patients in greater need.
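    The LST rule lends itself to a direct transcription. The abstract names the three predictors but not the published weights or cutoff, so the unit weight per predictor and the two-group cutoff below are illustrative placeholders:

    ```python
    def geriatric_fever_score(wbc_per_mm3, gcs, platelets_k_per_mm3):
        """LST predictors from the abstract. One point per predictor is an
        illustrative assumption, not the published weighting."""
        score = 0
        score += 1 if wbc_per_mm3 > 12_000 else 0       # Leukocytosis
        score += 1 if gcs <= 8 else 0                   # Severe coma
        score += 1 if platelets_k_per_mm3 < 150 else 0  # Thrombocytopenia (10^3/mm^3)
        return score

    def risk_group(score, cutoff=2):
        """Map a score to the low/high mortality-risk groups (cutoff assumed)."""
        return "high" if score >= cutoff else "low"
    ```

    For example, a patient with WBC 13,000/mm³, GCS 15 and platelets 200×10³/mm³ scores one point and lands in the low-risk group under these assumed weights.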

  19. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  20. Benefiting from deep-level diversity : How congruence between knowledge and decision rules improves team decision making and team perceptions

    NARCIS (Netherlands)

    Rink, Floor; Ellemers, Naomi

    In two experiments we show how teams can benefit from the presence of multiple sources of deep-level task-related diversity. We manipulated differences (vs. similarities) in task information and personal decision rules in dyads (Study 1) and three-person teams (Study 2). The results indicate that

  1. Optimal Rules of Negligent Misrepresentation in Insurance Law

    DEFF Research Database (Denmark)

    Lando, Henrik

    This article analyzes rules for negligent misrepresentation in insurance contract law. Before contract signature, the applicant can be asked by the insurer to fill in a questionnaire concerning the risk, and may then omit or make untrue statements about facts. Such misrepresentation is considered...... negligent by the court when it is unclear the misrepresentation was due to a mistake or intentional. Rules of negligent misrepresentation differ significantly across jurisdictions. For example, the rule of common law allows the insurer to rescind the contract, whereas the German rule does not allow...... of these rules through an analysis of the degree to which the insured should be allowed to lower coverage in case of negligent misrepresentation. On the one hand, a strict rule renders it easier for an insurer to separate different types of risk without having to use other costly means of separation...

  2. Identification of Optimal Preventive Maintenance Decisions for Composite Components

    NARCIS (Netherlands)

    Laks, P.; Verhagen, W.J.C.; Gherman, B.; Porumbel, I.

    2018-01-01

    This research proposes a decision support tool which identifies cost-optimal maintenance decisions for a given planning period. Simultaneously, the reliability state of the component is kept at or below a given reliability threshold: a failure limit policy applies. The tool is developed to support

  3. Optimal soil venting design using Bayesian Decision analysis

    OpenAIRE

    Kaluarachchi, J. J.; Wijedasa, A. H.

    1994-01-01

    Remediation of hydrocarbon-contaminated sites can be costly and the design process becomes complex in the presence of parameter uncertainty. Classical decision theory related to remediation design requires the parameter uncertainties to be stipulated in terms of statistical estimates based on site observations. In the absence of detailed data on parameter uncertainty, classical decision theory provides little contribution in designing a risk-based optimal design strategy. Bayesian decision th...

  4. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  5. A study of diverse clinical decision support rule authoring environments and requirements for integration

    Directory of Open Access Journals (Sweden)

    Zhou Li

    2012-11-01

    Full Text Available Abstract Background Efficient rule authoring tools are critical to allow clinical Knowledge Engineers (KEs), Software Engineers (SEs), and Subject Matter Experts (SMEs) to convert medical knowledge into machine-executable clinical decision support rules. The goal of this analysis was to identify the critical success factors and challenges of a fully functioning Rule Authoring Environment (RAE) in order to define requirements for a scalable, comprehensive tool to manage enterprise-level rules. Methods The authors evaluated RAEs in active use across Partners Healthcare, including enterprise-wide, ambulatory-only, and system-specific tools, with a focus on rule editors for reminder and medication rules. We conducted meetings with users of these RAEs to discuss their general experience and perceived advantages and limitations of these tools. Results While the overall rule authoring process is similar across the 10 separate RAEs, the system capabilities and architecture vary widely. Most current RAEs limit the ability of the clinical decision support (CDS) interventions to be standardized, sharable, interoperable, and extensible. No existing system meets all requirements defined by knowledge management users. Conclusions A successful, scalable, integrated rule authoring environment will need to support a number of key requirements and functions in the areas of knowledge representation, metadata, terminology, authoring collaboration, user interface, integration with electronic health record (EHR) systems, testing, and reporting.

  6. Combining Fuzzy AHP with GIS and Decision Rules for Industrial Site Selection

    Directory of Open Access Journals (Sweden)

    Aissa Taibi

    2017-12-01

    Full Text Available This study combines the Fuzzy Analytic Hierarchy Process (FAHP), Geographic Information System (GIS) and decision rules to provide decision makers with a ranking model for industrial sites in Algeria. Ranking suitable industrial areas is a crucial multi-criteria decision problem based on socio-economic and technical criteria as well as environmental considerations. Fuzzy AHP is used for assessment of the candidate industrial sites by combining fuzzy set theory and the analytic hierarchy process (AHP). The decision rule base serves as a filter that performs criteria pre-treatment, reducing their number. GIS is used to overlay and generate criteria maps and to visualize ranked zones on the map. The rank of a zone so obtained is an index that guides decision-makers to the best utilization of the zone in the future.

  7. Automatic generation of optimal business processes from business rules

    NARCIS (Netherlands)

    Steen, B.; Ferreira Pires, Luis; Iacob, Maria Eugenia

    2010-01-01

    In recent years, business process models are increasingly being used as a means for business process improvement. Business rules can be seen as requirements for business processes, in that they describe the constraints that must hold for business processes that implement these business rules.

  8. Stress influences decisions to break a safety rule in a complex simulation task in females.

    Science.gov (United States)

    Starcke, Katrin; Brand, Matthias; Kluge, Annette

    2016-07-01

    The current study examines the effects of acutely induced laboratory stress on a complex decision-making task, the Waste Water Treatment Simulation. Participants are instructed to follow a certain decision rule according to safety guidelines. Violations of this rule are associated with potential high rewards (working faster and earning more money) but also with the risk of a catastrophe (an explosion). Stress was induced with the Trier Social Stress Test while control participants underwent a non-stress condition. In the simulation task, stressed females broke the safety rule more often than unstressed females: χ²(1, N=24)=10.36. Stressed female participants presumably decided to break the safety rule because they focused on the potential high gains while neglecting the risk of potential negative consequences. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Yi Ji

    2016-06-01

    Full Text Available This study proposes a hedging rule model which is composed of a two-period reservoir operation model considering the damage depth and a hedging rule parameter optimization model. The former solves hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupling model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions achieved from this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization with the simulation and optimization model; and (3) in order to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study and a particle swarm optimization algorithm with a simulation model is adopted for optimizing the parameters. The results show that the proposed hedging rule can improve the operation performance of the water supply reservoir.
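    A generic two-point hedging rule of the kind being optimized can be sketched as follows. The two trigger values are placeholders, not the paper's optimized parameters: above the upper trigger the full demand is delivered, below the lower trigger everything available is released, and in between delivery is rationed linearly:

    ```python
    def hedging_release(available, demand, start_hedging, end_hedging):
        """Two-point hedging rule: full delivery above `start_hedging` water
        availability, all available water below `end_hedging`, and a linear
        ramp in between. Trigger values are generic placeholders."""
        if available >= start_hedging:
            return demand                      # no shortage imposed
        if available <= end_hedging:
            return available                   # release everything on hand
        # Linear rationing between the two trigger points (continuous at both ends)
        frac = (available - end_hedging) / (start_hedging - end_hedging)
        return end_hedging + frac * (demand - end_hedging)
    ```

    Hedging accepts a small deliberate shortage now to lower the risk of a severe shortage later, which is why the rule's parameters are worth optimizing against simulated hydrology.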

  10. On optimal soft-decision demodulation. [in digital communication system

    Science.gov (United States)

    Lee, L.-N.

    1976-01-01

    A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
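    The symmetric cutoff rate used here as the optimality criterion has a standard closed form for a discrete memoryless channel at the uniform input distribution; the transcription below uses that standard form (the abstract does not spell out the paper's exact normalization):

    ```python
    import math

    def symmetric_cutoff_rate(P):
        """Cutoff rate R0 of a discrete memoryless channel at the uniform
        input distribution: R0 = -log2 sum_j ((1/M) sum_i sqrt(P[i][j]))^2,
        where P[i][j] = Pr(output j | input i)."""
        M = len(P)
        return -math.log2(sum(
            (sum(math.sqrt(P[i][j]) for i in range(M)) / M) ** 2
            for j in range(len(P[0]))))
    ```

    For a binary symmetric channel with crossover ε this reduces to the well-known 1 − log₂(1 + 2√(ε(1−ε))); a noiseless binary channel gives R0 = 1 bit and a useless channel gives 0.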

  11. Optimal contracts decision of industrial customers

    International Nuclear Information System (INIS)

    Tsay, M.-T.; Lin, W.-M.; Lee, J.-L.

    2001-01-01

    This paper develops a software package to calculate the optimal contract capacities for industrial customers. Based on the time-of-use (TOU) rates employed by the Taiwan Power Company, the objective function is formulated to minimize the electricity bill of industrial customers over the whole year. Evolutionary programming (EP) was adopted to solve this problem. Users can get the optimal contract capacities for the peak load, semi-peak load, and off-peak load, respectively. Practical load consumption data were used to prove the validity of this program. Results show that the software developed in this paper can be used as a useful tool for industrial customers in selecting contract capacities to curtail the electricity bill. (author)
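    The structure of the bill minimization can be illustrated with a toy demand-charge model for a single capacity class. The rate, the penalty multiple for exceeding the contract, and the exhaustive search (standing in for the paper's evolutionary programming) are all assumptions, not Taipower's actual tariff:

    ```python
    def annual_bill(contract_kw, monthly_peaks, rate=100.0, penalty_factor=2.0):
        """Demand-charge portion of a TOU bill for one capacity class.
        Usage above the contracted capacity pays a penalty multiple of the
        base rate; rate and penalty values are illustrative only."""
        bill = 0.0
        for peak in monthly_peaks:
            bill += contract_kw * rate                       # fixed demand charge
            excess = max(0.0, peak - contract_kw)
            bill += excess * rate * penalty_factor           # over-contract penalty
        return bill

    def best_contract(monthly_peaks, candidates):
        """Exhaustive search over candidate capacities stands in for EP."""
        return min(candidates, key=lambda c: annual_bill(c, monthly_peaks))
    ```

    A too-small contract saves on the fixed charge but pays penalties every month; a too-large one pays for unused capacity. The optimum balances the two, which is exactly the trade-off EP searches over for the three TOU capacity classes.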

  12. Decision making under internal uncertainty: the case of multiple-choice tests with different scoring rules.

    Science.gov (United States)

    Bereby-Meyer, Yoella; Meyer, Joachim; Budescu, David V

    2003-02-01

    This paper assesses framing effects on decision making with internal uncertainty, i.e., partial knowledge, by focusing on examinees' behavior in multiple-choice (MC) tests with different scoring rules. In two experiments participants answered a general-knowledge MC test that consisted of 34 solvable and 6 unsolvable items. Experiment 1 studied two scoring rules involving Positive (only gains) and Negative (only losses) scores. Although answering all items was the dominating strategy for both rules, the results revealed a greater tendency to answer under the Negative scoring rule. These results are in line with the predictions derived from Prospect Theory (PT) [Econometrica 47 (1979) 263]. The second experiment studied two scoring rules, which allowed respondents to exhibit partial knowledge. Under the Inclusion-scoring rule the respondents mark all answers that could be correct, and under the Exclusion-scoring rule they exclude all answers that might be incorrect. As predicted by PT, respondents took more risks under the Inclusion rule than under the Exclusion rule. The results illustrate that the basic process that underlies choice behavior under internal uncertainty and especially the effect of framing is similar to the process of choice under external uncertainty and can be described quite accurately by PT. Copyright 2002 Elsevier Science B.V.
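    The dominance argument in the abstract can be made concrete with illustrative point values; these gain/loss framings are assumptions for the sketch, not the experiments' exact scoring rules:

    ```python
    def expected_item_score(p_correct, correct_pts, wrong_pts):
        """Expected score from answering an item you can solve with
        probability p_correct (pure guessing on a 4-option item: p = 0.25)."""
        return p_correct * correct_pts + (1 - p_correct) * wrong_pts

    # Illustrative Positive rule: +1 correct, 0 wrong; omitting scores 0.
    # Illustrative Negative rule: 0 correct, -1 wrong; omitting scores -1.
    gain_answer, gain_omit = expected_item_score(0.25, 1, 0), 0.0
    loss_answer, loss_omit = expected_item_score(0.25, 0, -1), -1.0
    ```

    Under both framings answering weakly dominates omission (0.25 > 0 and −0.75 > −1), yet the experiments found more answering under the Negative frame, consistent with Prospect Theory's risk-seeking in losses.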

  13. Simulation of operating rules and discretional decisions using a fuzzy rule-based system integrated into a water resources management model

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2013-04-01

    Water resources systems are mostly operated using a set of pre-defined rules that usually respond not to an optimal allocation in terms of water use or economic benefits, but to historical and institutional reasons. These operating policies are commonly expressed as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems hardly follows the pre-defined rules all the time, with consequent uncertainty in the system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated into a water resources management model, making it possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction).
A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total
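    As a toy illustration of the kind of fuzzy rule-based release policy described (the membership breakpoints and release volumes are entirely invented, and far simpler than the Mijares system):

    ```python
    def fuzzy_release(storage_frac):
        """Two toy rules: IF storage LOW THEN release SMALL;
        IF storage HIGH THEN release LARGE. Linear shoulder memberships and
        weighted-average defuzzification; all numbers are invented."""
        low = max(0.0, min(1.0, (0.6 - storage_frac) / 0.6))   # membership of 'low'
        high = max(0.0, min(1.0, (storage_frac - 0.4) / 0.6))  # membership of 'high'
        small, large = 10.0, 50.0   # representative release volumes per rule
        w = low + high
        return (low * small + high * large) / w if w else small
    ```

    Because the memberships overlap, the release varies smoothly with storage instead of jumping at a hard trigger, which is what lets such a system approximate an operator's graded, experience-based decisions.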

  14. Optimized reaction mechanism rate rules for ignition of normal alkanes

    KAUST Repository

    Cai, Liming; Pitsch, Heinz; Mohamed, Samah; Raman, Venkat; Bugler, John; Curran, Henry; Sarathy, Mani

    2016-01-01

    fidelity reacting flow simulations capable of improving combustor design and operation. The development of such models for many new fuel components and/or surrogate molecules is greatly facilitated by the application of reaction classes and rate rules

  15. Determining rules for closing customer service centers: A public utility company's fuzzy decision

    Science.gov (United States)

    Dekorvin, Andre; Shipley, Margaret F.; Lea, Robert N.

    1992-01-01

    In the present work, we consider the general problem of knowledge acquisition under uncertainty. Simply stated, the problem reduces to the following: how can we capture the knowledge of an expert when the expert is unable to clearly formulate how he or she arrives at a decision? A commonly used method is to learn by examples. We observe how the expert solves specific cases and from this infer some rules by which the decision may have been made. Unique to our work is the fuzzy set representation of the conditions or attributes upon which the expert may possibly base his fuzzy decision. From our examples, we infer certain and possible fuzzy rules for closing a customer service center and illustrate the importance of having the decision closely relate to the conditions under consideration.

  16. Investigation of effective decision criteria for multiobjective optimization in IMRT.

    Science.gov (United States)

    Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H

    2011-06-01

    To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha,beta)) formula derived from the linear-quadratic survival model, and points on dose-volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D
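    The gEUD criterion named above has a standard closed form, (Σᵢ vᵢ dᵢᵃ)^(1/a) over the dose-volume histogram bins; a direct transcription (fractional volumes assumed normalized to 1):

    ```python
    def gEUD(doses, volumes, a):
        """Generalized equivalent uniform dose over DVH bins: doses d_i with
        fractional volumes v_i summing to 1. a = 1 gives the mean dose;
        large positive a approaches the maximum dose (OAR-like behaviour)."""
        assert abs(sum(volumes) - 1.0) < 1e-9, "fractional volumes must sum to 1"
        return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
    ```

    This is why the abstract can compare a mean-dose criterion with gEUD at a = 4: the two coincide at a = 1 and gEUD progressively emphasizes hot spots as a grows.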

  17. Rules of Normalisation and their Importance for Interpretation of Systems of Optimal Taxation

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen

    representation of the general equilibrium conditions the rules of normalisation in standard optimal tax models. This allows us to provide an intuitive explanation of what determines the optimal tax system. Finally, we review a number of examples where lack of precision with respect to normalisation in otherwise...... important contributions to the literature on optimal taxation has given rise to misinterpretations of analytical results.

  18. Optimization of European call options considering physical delivery network and reservoir operation rules

    Science.gov (United States)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2011-10-01

    This paper develops alternative strategies for European call options for water purchase under hydrological uncertainties that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite function for a multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.

  19. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2017-01-01

    : the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees

  20. Application of decision rules for empowering of Indonesian telematics services SMEs

    Science.gov (United States)

    Tosida, E. T.; Hairlangga, O.; Amirudin, F.; Ridwanah, M.

    2018-03-01

    The independence of the field of telematics is one of Indonesia's visions for 2024. One effort to achieve it is empowering SMEs in the field of telematics. The empowerment carried out needs a practical, data-centered mechanism, including use of the National Economic Census database (Susenas). Based on the Susenas, decision rules can be formulated for determining the provision of assistance for SMEs in the field of telematics. This was done by generating the rule base through classification techniques. The CART algorithm-based decision rule model performs better than the C4.5 and ID3 models. The high performance of the model is also in line with the regulations applied by the government. This is one of the strengths of the research, because the resulting model is consistent with the existing conditions in Indonesia. The rule bases generated from the three classification techniques show different rules. The CART technique has pattern matching with the realization of activities in the Ministry of Cooperatives and SMEs. So far, the government has had difficulty in referencing data related to the empowerment of SMEs in telematics services. Therefore, the findings resulting from this research can be used as an alternative decision support system for programs empowering SMEs in telematics.

  1. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
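
    The comparison above ranks constructed trees by depth, average depth, and node counts. Those metrics can be computed in one recursive pass over a tree; a sketch using a simple nested-tuple tree encoding (the encoding is an assumption, not the chapter's):

```python
# Tree encoding: ("leaf", label) or ("node", feature, {value: subtree})
def metrics(tree, depth=0):
    """Return (max_depth, sum_of_leaf_depths, n_leaves, n_nodes)."""
    if tree[0] == "leaf":
        return depth, depth, 1, 1
    _, _, children = tree
    md, sd, nl, nn = 0, 0, 0, 1  # count this internal node
    for sub in children.values():
        d, s, l, n = metrics(sub, depth + 1)
        md = max(md, d)
        sd += s
        nl += l
        nn += n
    return md, sd, nl, nn

tree = ("node", 0, {
    0: ("leaf", "no"),
    1: ("node", 1, {0: ("leaf", "no"), 1: ("leaf", "yes")}),
})
max_depth, sum_depths, n_leaves, n_nodes = metrics(tree)
avg_depth = sum_depths / n_leaves
```

    Greedy heuristics are then judged by how close these values come to the minima found by dynamic programming.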

  2. Mathematical optimization of incore nuclear fuel management decisions: Status and trends

    International Nuclear Information System (INIS)

    Turinsky, P.J.

    1999-01-01

    Nuclear fuel management involves making decisions about the number of fresh assemblies to purchase and their attributes (e.g. enrichment and burnable poison loading), burnt fuel to reinsert, location of the assemblies in the core (i.e. loading pattern (LP)), and insertion of control rods as a function of cycle exposure (i.e. control rod pattern (CRP)). The out-of-core and incore nuclear fuel management problems denote an artificial separation of decisions to simplify the decision making. The out-of-core problem involves multicycle analysis so that levelized fuel cycle cost can be evaluated, whereas the incore problem normally involves single cycle analysis. Decision variables for the incore problem normally include all of the above noted decisions with the exception of the number of fresh assemblies, which is restricted by discharge burnup limits and therefore involves multicycle considerations. This paper reports on the progress that is being made in addressing the incore nuclear fuel management problem utilizing formal mathematical optimization methods. Advances in utilizing the Simulated Annealing, Genetic Algorithm and Tabu Search methods, with applications to pressurized and boiling water reactor incore optimization problems, will be reviewed. Recent work on the addition of multiobjective optimization capability to aid the decision maker, and utilization of heuristic rules and incorporation of parallel algorithms to increase computational efficiency, will be discussed. (orig.)
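
    Simulated Annealing, one of the reviewed methods, escapes local optima by occasionally accepting worse solutions with probability exp(-ΔE/T) under a cooling temperature T. A generic sketch on a toy permutation problem standing in for a loading-pattern search (the "energy" here is illustrative, not a core physics model):

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=42):
    """Generic simulated annealing: accept worse moves with probability
    exp(-dE/T) so the search can escape local optima as T cools."""
    rng = random.Random(seed)
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        de = energy(cand) - energy(cur)
        if de < 0 or rng.random() < math.exp(-de / t):
            cur = cand
            if energy(cur) < energy(best):
                best = cur
        t *= cooling
    return best

# Toy stand-in for a loading pattern: order "assemblies" (numbers) to
# minimize a hypothetical misplacement score; swaps are the move set.
def energy(perm):
    return sum(abs(perm[i] - i) for i in range(len(perm)))

def neighbor(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return tuple(p)

start = (4, 3, 2, 1, 0)
best = simulated_annealing(start, energy, neighbor)
```

    In the fuel-management setting, `energy` would be replaced by a core-physics evaluation of the candidate pattern against cycle length and safety constraints.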

  3. Integrative and distributive negotiation in small groups : Effects of task structure, decision rule, and social motive

    NARCIS (Netherlands)

    Beersma, Bianca; De Dreu, Carsten K W

    2002-01-01

    This study examined the interactive effects of task structure, decision rule, and social motive on small-group negotiation processes and outcomes. Three-person groups negotiated either within an asymmetrical task structure (in which a majority of group members have compatible interests) or within a

  4. Dispositional optimism, self-framing and medical decision-making.

    Science.gov (United States)

    Zhao, Xu; Huang, Chunlei; Li, Xuesong; Zhao, Xin; Peng, Jiaxi

    2015-03-01

    Self-framing is an important but underinvestigated area in risk communication and behavioural decision-making, especially in medical settings. The present study aimed to investigate the relationships among dispositional optimism, self-frame and decision-making. Participants (N = 500) responded to the Life Orientation Test-Revised and a self-framing test based on a medical decision-making problem. Participants whose scores were higher than the middle value were regarded as high dispositional optimism individuals; the rest were regarded as low dispositional optimism individuals. The results showed that, compared to the high dispositional optimism group, participants from the low dispositional optimism group showed a greater tendency to use negative vocabulary to construct their self-frame, and tended to choose the radiation therapy with a high treatment survival rate but a low 5-year survival rate. Based on the current findings, it can be concluded that the self-framing effect still exists in medical situations, and that individual differences in dispositional optimism can influence the processing of information in a framed decision task, as well as risky decision-making. © 2014 International Union of Psychological Science.

  5. Sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach

  6. Optimal Rules of Negligent Misrepresentation in Insurance Contract Law

    DEFF Research Database (Denmark)

    Lando, Henrik

    2016-01-01

    Rules of misrepresentation in insurance contract law differ widely between jurisdictions. When the insured has negligently misrepresented a fact prior to contracting, common law allows the insurer to rescind the contract if the misrepresentation was material, while civil law countries apply more...

  7. Optimization of Approximate Inhibitory Rules Relative to Number of Misclassifications

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    In this work, we consider so-called nonredundant inhibitory rules, containing an expression “attribute ≠ value” on the right-hand side, for which the number of misclassifications is at most a threshold γ. We study a dynamic programming approach

  8. Investigating decision rules with a new experimental design: the EXACT paradigm

    Science.gov (United States)

    Biscione, Valerio; Harris, Christopher M.

    2015-01-01

    In the decision-making field, it is important to distinguish between the perceptual process (how information is collected) and the decision rule (the strategy governing decision-making). We propose a new paradigm, called EXogenous ACcumulation Task (EXACT) to disentangle these two components. The paradigm consists of showing a horizontal gauge that represents the probability of receiving a reward at time t and increases with time. The participant is asked to press a button when they want to request a reward. Thus, the perceptual mechanism is hard-coded and does not need to be inferred from the data. Based on this paradigm, we compared four decision rules (Bayes Risk, Reward Rate, Reward/Accuracy, and Modified Reward Rate) and found that participants appeared to behave according to the Modified Reward Rate. We propose a new way of analysing the data by using the accuracy of responses, which can only be inferred in classic RT tasks. Our analysis suggests that several experimental findings such as RT distribution and its relationship with experimental conditions, usually deemed to be the result of a rise-to-threshold process, may be simply explained by the effect of the decision rule employed. PMID:26578916
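
    A Reward Rate rule of the kind compared here requests a reward at the time that maximizes expected rewards per unit time. A sketch with a hypothetical saturating gauge probability (the probability curve and inter-trial interval are assumptions, not the EXACT experiment's values):

```python
def reward_rate(t, p, iti=2.0):
    """Expected rewards per unit time when requesting at time t:
    reward probability p(t) divided by total trial time t + iti."""
    return p(t) / (t + iti)

# Hypothetical gauge: probability of reward rises linearly, then saturates.
def p(t):
    return min(1.0, t / 5.0)

# Grid-search the request time maximizing the reward rate.
times = [i / 10 for i in range(1, 101)]
t_star = max(times, key=lambda t: reward_rate(t, p))
```

    Swapping the objective (e.g. Bayes Risk, which penalizes time and errors additively) changes `t_star`, which is exactly how the four decision rules are distinguished behaviourally.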

  9. Developing an optimal valve closing rule curve for real-time pressure control in pipes

    Energy Technology Data Exchange (ETDEWEB)

    Bazarganlari, Mohammad Reza; Afshar, Hossein [Islamic Azad University, Tehran (Iran, Islamic Republic of); Kerachian, Reza [University of Tehran, Tehran (Iran, Islamic Republic of); Bashiazghadi, Seyyed Nasser [Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2013-01-15

    Sudden valve closure in pipeline systems can cause high pressures that may lead to serious damage. Using an optimal valve closing rule can play an important role in managing extreme pressures during sudden valve closure. In this paper, an optimal closing rule curve is developed using a multi-objective optimization model and Bayesian networks (BNs) for controlling water pressure during valve closure, instead of traditional step functions or single linear functions. The method of characteristics is used to simulate transient flow caused by valve closure. The non-dominated sorting genetic algorithm II (NSGA-II) is also used to develop a Pareto front among three objectives related to the maximum and minimum water pressures and the amount of water that passes through the valve during the valve-closing process. Simulation and optimization processes are usually time-consuming, thus the results of the optimization model are used for training the BN. The trained BN is capable of determining optimal real-time closing rules without running costly simulation and optimization models. To demonstrate its efficiency, the proposed methodology is applied to a reservoir-pipe-valve system and the optimal closing rule curve is calculated for the valve. The results of the linear and BN-based valve closure rules show that the latter can significantly reduce the range of variations in water hammer pressures.
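
    NSGA-II, used above, is built around nondominated sorting. The core dominance test and Pareto-front extraction can be sketched as follows, with hypothetical (maximum water-hammer pressure, water passed) scores per candidate closing rule, both treated as minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective scores for five candidate closing rules
candidates = [(120, 8), (100, 12), (90, 20), (110, 9), (100, 25)]
front = pareto_front(candidates)
```

    NSGA-II repeats this sorting on successive fronts and adds crowding-distance selection; the sketch shows only the dominance core that defines the Pareto front handed to the decision maker.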

  10. Orthogonal search-based rule extraction for modelling the decision to transfuse.

    Science.gov (United States)

    Etchells, T A; Harrison, M J

    2006-04-01

    Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb 13 mm and Hb 38 mm, Hb < 102 g x l(-1) and OGH; 4. Hb < 78 g x l(-1).

  11. Antagonistic and Bargaining Games in Optimal Marketing Decisions

    Science.gov (United States)

    Lipovetsky, S.

    2007-01-01

    Game theory approaches to find optimal marketing decisions are considered. Antagonistic games with and without complete information, and non-antagonistic games techniques are applied to paired comparison, ranking, or rating data for a firm and its competitors in the market. Mix strategy, equilibrium in bi-matrix games, bargaining models with…

  12. DECISION SUPPORT TOOL FOR RETAIL SHELF SPACE OPTIMIZATION

    OpenAIRE

    B. RAMASESHAN; N. R. ACHUTHAN; R. COLLINSON

    2008-01-01

    Efficient allocation of shelf space and product assortment can significantly improve a retailer's profitability. This paper addresses the problem from the perspective of an independent franchise retailer. A Category Management Decision Support Tool (CMDST) is proposed that efficiently generates optimal shelf space allocations and product assortments by using the existing scarce resources, resulting in increased profitability. CMDST utilizes two practical integrated category management models ...

  13. A Real-Time Holding Decision Rule Accounting for Passenger Travel Cost

    NARCIS (Netherlands)

    Laskaris; Cats, O.; Jenelius, E.; Viti, F.

    2016-01-01

    Holding has been extensively investigated as a strategy to mitigate the inherently stochastic nature of public transport operations. Holding focuses on either regulating vehicle headways using a rule-based approach or minimizing passenger travel cost by employing optimization models. This paper

  14. Where should I send it? Optimizing the submission decision process.

    Directory of Open Access Journals (Sweden)

    Santiago Salinas

    Full Text Available How do scientists decide where to submit manuscripts? Many factors influence this decision, including prestige, acceptance probability, turnaround time, target audience, fit, and impact factor. Here, we present a framework for evaluating where to submit a manuscript based on the theory of Markov decision processes. We derive two models, one in which an author is trying to optimally maximize citations and another in which that goal is balanced by either minimizing the number of resubmissions or the total time in review. We parameterize the models with data on acceptance probability, submission-to-decision times, and impact factors for 61 ecology journals. We find that submission sequences beginning with Ecology Letters, Ecological Monographs, or PLOS ONE could be optimal depending on the importance given to time to acceptance or number of resubmissions. This analysis provides some guidance on where to submit a manuscript given the individual-specific values assigned to these disparate objectives.
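
    The core of the Markov-decision framing is that a submission sequence's value can be evaluated by chaining acceptance probabilities: a journal later in the sequence is only reached if all earlier ones reject. A simplified sketch (the scoring rule and journal numbers are hypothetical, not the paper's parameterization of 61 ecology journals):

```python
def sequence_value(seq, tradeoff=0.0):
    """Expected citations minus tradeoff * expected months elapsed, for a
    submission sequence of (accept_prob, impact, review_months) tuples.
    A toy scoring rule sketching the paper's idea, not its model."""
    value, reach_prob, elapsed = 0.0, 1.0, 0.0
    for accept_prob, impact, months in seq:
        elapsed += months
        value += reach_prob * accept_prob * (impact - tradeoff * elapsed)
        reach_prob *= 1.0 - accept_prob  # probability of still being unaccepted
    return value

# Hypothetical journals: (acceptance probability, impact, months to decision)
a = (0.2, 10.0, 3.0)   # selective, high impact, slow
b = (0.6, 4.0, 1.0)    # accessible, lower impact, fast

shoot_high = sequence_value([a, b])
safe_first = sequence_value([b, a])
```

    With `tradeoff=0` the citation-maximizing author shoots high first; raising `tradeoff` penalizes time in review and can flip the optimal ordering, mirroring the paper's two objectives.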

  15. Application of stochastic optimization to nuclear power plant asset management decisions

    International Nuclear Information System (INIS)

    Morton, D.; Koc, A.; Hess, S. M.

    2013-01-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)
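
    The contrast drawn here between rank-ordering heuristics and stochastic optimization can be illustrated with a tiny scenario-based portfolio model (a deliberately crude all-or-nothing toy, not the authors' model):

```python
from itertools import combinations

def expected_value(portfolio, scenarios):
    """Expected portfolio NPV over budget scenarios: in this toy model the
    portfolio only pays off in scenarios whose budget covers its cost."""
    cost = sum(c for c, _ in portfolio)
    npv = sum(v for _, v in portfolio)
    return sum(prob * npv for prob, budget in scenarios if cost <= budget)

projects = [(4, 6.0), (3, 4.0), (2, 2.5)]   # (cost, NPV), hypothetical
scenarios = [(0.5, 9), (0.5, 5)]            # (probability, budget)

# Enumerate all candidate portfolios and keep the best under uncertainty.
best = max(
    (tuple(sub) for r in range(len(projects) + 1)
     for sub in combinations(projects, r)),
    key=lambda sub: expected_value(sub, scenarios),
)
```

    A rank-ordering scheme would fund the highest-NPV project first, but here the best portfolio under budget uncertainty skips it, hedging against the low-budget scenario; this is the kind of non-optimality the abstract attributes to simple heuristics.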

  16. Application of stochastic optimization to nuclear power plant asset management decisions

    Energy Technology Data Exchange (ETDEWEB)

    Morton, D. [Graduate Program in Operations Research and Industrial Engineering, University of Texas at Austin, Austin, TX, 78712 (United States); Koc, A. [IBM T.J. Watson Research Center, Business Analytics and Mathematical Sciences Dept., 1101 Kitchawan Rd., Yorktown Heights, NY, 10598 (United States); Hess, S. M. [Electric Power Research Institute, 300 Baywood Road, West Chester, PA, 19382 (United States)

    2013-07-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)

  17. A programmable rules engine to provide clinical decision support using HTML forms.

    Science.gov (United States)

    Heusinkveld, J; Geissbuhler, A; Sheshelidze, D; Miller, R

    1999-01-01

    The authors have developed a simple method for specifying rules to be applied to information on HTML forms. This approach allows clinical experts, who lack the programming expertise needed to write CGI scripts, to construct and maintain domain-specific knowledge and ordering capabilities within WizOrder, the order-entry and decision support system used at Vanderbilt Hospital. The clinical knowledge base maintainers use HTML editors to create forms and spreadsheet programs for rule entry. A test environment has been developed which uses Netscape to display forms; the production environment displays forms using an embedded browser.

  18. Confronting dynamics and uncertainty in optimal decision making for conservation

    Science.gov (United States)

    Williams, Byron K.; Johnson, Fred A.

    2013-06-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. 
We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
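
    The two coping strategies described, maximizing expected return when outcome probabilities are available and maximizing robustness when they are not, can be sketched side by side (the payoffs and action names are hypothetical):

```python
def expected_value_action(actions, probs):
    """Pick the action maximizing the probability-weighted payoff."""
    return max(actions,
               key=lambda a: sum(p * o for p, o in zip(probs, actions[a])))

def maximin_action(actions):
    """Pick the action whose worst-case payoff is largest (robustness)."""
    return max(actions, key=lambda a: min(actions[a]))

# Hypothetical payoffs of two conservation actions under three outcomes
actions = {"restore": (10, 2, 0), "protect": (4, 4, 3)}
probs = (0.5, 0.3, 0.2)

risk_neutral = expected_value_action(actions, probs)
robust = maximin_action(actions)
```

    The two criteria can disagree, as here: trusting the probabilities favours the high-upside action, while the robust choice guarantees a performance floor regardless of the (uncertain) outcome.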

  19. Confronting dynamics and uncertainty in optimal decision making for conservation

    Science.gov (United States)

    Williams, Byron K.; Johnson, Fred A.

    2013-01-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. 
We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  20. Confronting dynamics and uncertainty in optimal decision making for conservation

    International Nuclear Information System (INIS)

    Williams, Byron K; Johnson, Fred A

    2013-01-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. 
We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  1. Particle swarm optimization of driving torque demand decision based on fuel economy for plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Shen, Peihong; Zhao, Zhiguo; Zhan, Xiaowen; Li, Jingwei

    2017-01-01

    In this paper, an energy management strategy based on logic thresholds is proposed for a plug-in hybrid electric vehicle. The plug-in hybrid electric vehicle powertrain model is established in MATLAB/Simulink based on experimental tests of the power components, and is validated by comparison with a verified simulation model built in AVL Cruise. The influence of the driving torque demand decision on the fuel economy of the plug-in hybrid electric vehicle is studied using simulation. An optimization method for the driving torque demand decision, which refers to the relationship between accelerator pedal opening and driving torque demand, is formulated from the perspective of fuel economy. Particle swarm optimization with a dynamically changing inertia weight is used to optimize the decision parameters. The simulation results show that the optimized driving torque demand decision can improve the PHEV fuel economy by 15.8% and 14.5% in the New European Driving Cycle and the Worldwide harmonized Light vehicles Test procedure, respectively, using the same rule-based energy management strategy. The proposed optimization method provides a theoretical guide for calibrating the parameters of the driving torque demand decision to improve the fuel economy of a real plug-in hybrid electric vehicle. - Highlights: • The influence of the driving torque demand decision on the fuel economy is studied. • The optimization method for the driving torque demand decision is formulated. • An improved particle swarm optimization is utilized to optimize the parameters. • Fuel economy is improved by using the optimized driving torque demand decision.
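
    Particle swarm optimization with a dynamically changing inertia weight can be sketched as below; the linearly decreasing inertia schedule and the 1-D toy objective are assumptions for illustration, not the paper's exact scheme:

```python
import random

def pso(f, bounds, n_particles=20, iters=100,
        w_start=0.9, w_end=0.4, c1=1.5, c2=1.5, seed=1):
    """Minimize f on an interval with PSO; the inertia weight w decays
    linearly so the swarm explores early and converges late."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # each particle's best position
    gbest = min(xs, key=f)              # swarm's best position
    for it in range(iters):
        w = w_start + (w_end - w_start) * it / (iters - 1)  # inertia decay
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Toy 1-D objective standing in for the fuel-consumption cost surface
best = pso(lambda x: (x - 3.0) ** 2, (-10.0, 10.0))
```

    In the paper's setting, each particle would encode the pedal-opening-to-torque-demand curve parameters and `f` would be the simulated fuel consumption over a driving cycle.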

  2. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    Science.gov (United States)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both the PWR and the Boiling Water Reactor (BWR). The GA, a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using evolutionary operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, is developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR.
The core physics code
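
    A minimal permutation-coded GA of the kind described, with selection, crossover and mutation, can be sketched on a toy fitness standing in for a core-physics evaluation (the operators and parameters are illustrative, not GARCO's):

```python
import random

def ga(fitness, genes, pop_size=30, gens=60, mut_rate=0.2, seed=7):
    """Minimal permutation GA: tournament selection, order crossover,
    swap mutation; minimizes fitness."""
    rng = random.Random(seed)
    pop = [rng.sample(genes, len(genes)) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=fitness)   # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(len(genes))            # order crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < mut_rate:                # swap mutation
                i, j = rng.sample(range(len(genes)), 2)
                child[i], child[j] = child[j], child[i]
            new.append(child)
        pop = new
    return min(pop, key=fitness)

# Toy stand-in for a loading-pattern objective: prefer ascending order.
def fitness(perm):
    return sum(abs(perm[i] - i) for i in range(len(perm)))

best = ga(fitness, list(range(6)))
```

    Embedding heuristic rules, as the thesis does, typically means seeding the initial population or constraining the crossover/mutation operators so that only physically sensible loading patterns are produced.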

  3. Decision or norm: Judicial discretion as a threat to the rule of law

    Directory of Open Access Journals (Sweden)

    Avramović Dragutin

    2012-01-01

    Full Text Available The principles of legality and legal certainty, key notions of even the thinnest concept of the rule of law, are today largely endangered by the widening range of judicial discretion. That trend is increasingly present in European states as well, owing to the convergence of the common law and civil law legal systems. Judicial decisions acquire ever greater factual importance in European legal systems, although they are generally not considered a source of law. After analyzing the positions of leading scholars of legal realism, the author admits that a very high level of tension frequently exists between judicial decision and legal norm. Within that conflict, the decision often succeeds, relatively easily, in tearing itself away from the strict letter of the law. In applying general legal rules to concrete cases, by creatively adjusting the law to life, and because of the necessarily general and abstract character of legal norms, the judge becomes more a creator of law than one who applies it. The author points to the danger of subjective and prejudiced attitudes of judges, as their wide discretion leads them to decide more upon their own feeling of justice than upon the law itself. In this way the law is transformed into judicial decisions based upon subjective understandings of justice and fairness.

  4. Comparison of rule induction, decision trees and formal concept analysis approaches for classification

    Science.gov (United States)

    Kotelnikov, E. V.; Milov, V. R.

    2018-05-01

    Rule-based learning algorithms offer greater transparency and are easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively to solve descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of solutions to binary and multiclass classification problems based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.

  5. Transparency in Economic and Political Decision-Making: The Identification of Sunshine Rules for Transparent Lobbying

    Directory of Open Access Journals (Sweden)

    Laboutková Šárka

    2017-09-01

    Full Text Available Lobbying transparency has been a challenging topic for nearly a decade. For the purposes of the article, the authors focus on a contextual analysis of rules and measures that offers both a broad and a comprehensive view of the required transparency of lobbying activities and of the environment in which decisions are made. In this regard, focusing on the sunshine principles/sunshine rules (not purely limited to laws) provides a grasp of the whole issue in a broader context. From a methodological point of view, an exploratory approach was chosen and the coding procedure is mostly dichotomous. As a result, seven key areas with 70 indicators have been identified in terms of transparency of lobbying and decision-making.

  6. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad

    2015-10-11

    Decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we then compared them based on the optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.
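
    The three approaches to inconsistency can be sketched as operations on groups of rows that share condition values (the function names are illustrative, and the semantics are a simplification of the paper's definitions):

```python
from collections import Counter

def group_decisions(rows):
    """Group an inconsistent decision table: map each tuple of condition
    values to the list of decisions observed for it."""
    groups = {}
    for *conds, decision in rows:
        groups.setdefault(tuple(conds), []).append(decision)
    return groups

# Three ways to resolve a group's conflicting decisions: 'generalized'
# labels the group with the full set of decisions, 'most common' with the
# modal decision; under 'many-valued' semantics a predicted decision is
# counted correct if it belongs to the group's set.
def generalized(decs):
    return frozenset(decs)

def most_common(decs):
    return Counter(decs).most_common(1)[0][0]

def many_valued_correct(prediction, decs):
    return prediction in set(decs)

# Toy inconsistent table: rows (cond1, cond2, decision)
rows = [(0, 1, "a"), (0, 1, "b"), (0, 1, "a"), (1, 0, "c")]
groups = group_decisions(rows)
```

    A greedy tree-construction algorithm then works on the transformed table, which is why the choice among the three semantics changes the depth and node counts being compared.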

  7. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2015-01-01

    Decision trees are a widely used technique for discovering patterns in consistent data sets. But if the data set is inconsistent, with groups of examples (objects) that have equal values of the conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge in the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we then compared them based on optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.

  8. Decision Support System for Optimized Herbicide Dose in Spring Barley

    DEFF Research Database (Denmark)

    Sønderskov, Mette; Kudsk, Per; Mathiassen, Solvejg K

    2014-01-01

    Crop Protection Online (CPO) is a decision support system, which integrates decision algorithms quantifying the requirement for weed control and a herbicide dose model. CPO was designed to be used by advisors and farmers to optimize the choice of herbicide and dose. The recommendations from CPO...... as the Treatment Frequency Index (TFI)) compared to a high level of required weed control. The observations indicated that the current level of weed control required is robust for a range of weed scenarios. Weed plant numbers 3 wk after spraying indicated that the growth of the weed species was inhibited...

  9. Constructing an optimal decision tree for FAST corner point detection

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    In this paper, we consider a problem that originates in computer vision: determining an optimal testing strategy for the corner point detection problem that is part of the FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare the performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.
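
For context, the corner test that the learned decision tree accelerates can be written down directly. This is a plain FAST-style segment test on the 16-pixel circle (the threshold t and run length n shown are the commonly used defaults, taken here as assumptions); the paper's contribution is choosing the order in which a tree probes those 16 pixels so that the average number of probes is minimal:

```python
def is_fast_corner(circle, center, t=10, n=12):
    """circle: 16 intensities sampled on the Bresenham circle around the
    candidate pixel. Corner if some run of >= n contiguous circle pixels is
    uniformly brighter than center + t or uniformly darker than center - t."""
    assert len(circle) == 16
    for sign in (+1, -1):                       # +1: brighter test, -1: darker
        flags = [sign * (p - center) > t for p in circle]
        run = 0
        # Doubling the flag list handles runs that wrap around the circle.
        for f in flags + flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

A decision tree with minimum average depth asks, on average, far fewer than 16 of these pixel comparisons before deciding.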

  10. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...... is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...
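
A minimal sketch of the least squares Monte Carlo scheme of Longstaff and Schwartz (2001), applied here to its original setting of American put pricing: simulate price paths forward, then step backward in time, regressing discounted continuation values on a quadratic basis over the in-the-money paths and exercising wherever the immediate payoff beats the fitted continuation value. All parameters are illustrative:

```python
import math, random

def solve3(A, b):
    """Tiny Gauss-Jordan solver for the 3x3 normal equations."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def american_put_lsmc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      steps=50, paths=4000, seed=1):
    random.seed(seed)
    dt = T / steps
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    # Forward pass: geometric Brownian motion paths.
    S = [[S0] for _ in range(paths)]
    for p in S:
        for _ in range(steps):
            p.append(p[-1] * math.exp(drift + vol * random.gauss(0, 1)))
    cash = [max(K - p[-1], 0.0) for p in S]          # payoff at maturity
    # Backward pass: regress continuation value on basis (1, S, S^2).
    for t in range(steps - 1, 0, -1):
        cash = [c * disc for c in cash]              # discount one step
        itm = [i for i in range(paths) if K - S[i][t] > 0]
        if len(itm) < 3:
            continue
        xs = [S[i][t] for i in itm]
        ys = [cash[i] for i in itm]
        A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
        b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
        beta = solve3(A, b)
        for i, x in zip(itm, xs):
            cont = beta[0] + beta[1] * x + beta[2] * x * x
            if K - x > cont:
                cash[i] = K - x                      # exercise now
    return sum(cash) * disc / paths

price = american_put_lsmc()
```

The decision-problem formulation in the paper generalizes this stopping structure beyond option exercise, e.g. to the timing of evacuation decisions.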

  11. A Simple Decision Rule for Recognition of Poly(A) Tail Signal Motifs in Human Genome

    KAUST Repository

    AbouEisha, Hassan M.

    2015-05-12

    Background: numerous attempts have been made to predict motifs in genomic sequences that correspond to poly(A) tail signals. A vast portion of this effort has been directed to a plethora of nonlinear classification methods. Even when such approaches yield good discriminant results, identifying dominant features of regulatory mechanisms nevertheless remains a challenge. In this work, we look at decision rules that may help identify such features. Findings: we present a simple decision rule for classification of candidate poly(A) tail signal motifs in human genomic sequence, obtained by evaluating features during the construction of gradient boosted trees. We found that a single feature, based on the frequency of adenine in the genomic sequence surrounding the candidate signal and the number of consecutive adenines in a well-defined region immediately following the motif, displays good discriminative potential in classification of poly(A) tail motifs for samples covered by the rule. Conclusions: the resulting simple rule can be used as an efficient filter in the construction of more complex poly(A) tail motif classification algorithms.
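
A rule of the kind the abstract describes, a threshold on local adenine frequency combined with a minimum run of consecutive adenines just after the motif, can be sketched as a filter. The window size and both cutoffs below are hypothetical placeholders, not the published values:

```python
def polya_rule(seq, motif_end, freq_cutoff=0.3, run_cutoff=6, window=40):
    """Hypothetical two-part filter in the spirit of the abstract: adenine
    frequency in a window around the candidate signal, plus the longest run
    of consecutive adenines in the region just after the motif."""
    around = seq[max(0, motif_end - window): motif_end + window]
    freq = around.count('A') / max(len(around), 1)
    run = longest = 0
    for ch in seq[motif_end: motif_end + window]:
        run = run + 1 if ch == 'A' else 0
        longest = max(longest, run)
    return freq >= freq_cutoff and longest >= run_cutoff
```

As a filter, a rule like this would discard obvious negatives cheaply before a heavier classifier runs.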

  12. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average cost method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters, in terms of stochastic optimization models in road network management, motivates this study. The paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
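
The average-cost MDP machinery can be illustrated with relative value iteration on a toy two-state pavement model. The paper itself uses a dual linear programming formulation with constraints; this sketch, including its costs and transition probabilities, is an assumption for illustration only:

```python
def solve_avg_cost_mdp(states, actions, cost, P, iters=500):
    """Relative value iteration for a unichain average-cost MDP.
    cost[(s, a)] is the one-step cost; P[(s, a)] maps next state -> probability.
    Returns (gain, policy): the optimal average cost and a decision rule."""
    ref = states[0]
    h = {s: 0.0 for s in states}
    for _ in range(iters):
        # One-step lookahead values for every state-action pair.
        q = {s: {a: cost[(s, a)] + sum(p * h[t] for t, p in P[(s, a)].items())
                 for a in actions[s]} for s in states}
        v = {s: min(q[s].values()) for s in states}
        gain = v[ref] - h[ref]            # h[ref] is pinned to 0, so gain = v[ref]
        h = {s: v[s] - v[ref] for s in states}   # renormalize: keep h[ref] == 0
    policy = {s: min(actions[s], key=lambda a: q[s][a]) for s in states}
    return gain, policy

# Toy model: doing nothing on a 'bad' road incurs recurring user costs;
# repairing costs more once but restores the 'good' state.
states = ['good', 'bad']
actions = {'good': ['nothing', 'repair'], 'bad': ['nothing', 'repair']}
cost = {('good', 'nothing'): 0.0, ('good', 'repair'): 5.0,
        ('bad', 'nothing'): 4.0, ('bad', 'repair'): 6.0}
P = {('good', 'nothing'): {'good': 0.8, 'bad': 0.2},
     ('good', 'repair'): {'good': 1.0},
     ('bad', 'nothing'): {'bad': 1.0},
     ('bad', 'repair'): {'good': 1.0}}
gain, policy = solve_avg_cost_mdp(states, actions, cost, P)
```

Here the optimal policy repairs only when the pavement is bad, at an average cost of 1.0 per period; the LP dual used in the paper recovers the same policy from stationary state-action frequencies.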

  13. Rules of thumb in life-cycle savings models

    OpenAIRE

    Rodepeter, Ralf; Winter, Joachim

    1999-01-01

    We analyze life-cycle savings decisions when households use simple heuristics, or rules of thumb, rather than solve the underlying intertemporal optimization problem. The decision rules we explore are a simple Keynesian rule where consumption follows income; a simple consumption rule where only a fraction of positive income shocks is saved; a rule that corresponds to the permanent income hypothesis; and two rules that have been found in experimental studies. Using these rules, we simulate lif...
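
Two of the rules of thumb are easy to simulate. The sketch below contrasts a Keynesian rule (consumption follows income, so nothing is saved) with a rule that saves a fixed fraction of positive income shocks; the 30% fraction, the interest rate and the income stream are illustrative assumptions:

```python
def simulate(income, rule, rate=0.02):
    """Simulate end-of-horizon assets under a consumption rule of thumb.
    income: list of (base_income, shock) pairs per period.
    rule(total_income, shock, assets) -> consumption for the period."""
    assets = 0.0
    for y, shock in income:
        c = rule(y + shock, shock, assets)
        assets = (assets + y + shock - c) * (1 + rate)
    return assets

# Keynesian rule: consume income as it arrives, save nothing.
keynesian = lambda total, shock, a: total
# Shock-saving rule: save 30% of any positive income shock.
save_shocks = lambda total, shock, a: total - 0.3 * max(shock, 0.0)

income = [(100, 0), (100, 20), (100, -10), (100, 30)]
```

Under the Keynesian rule terminal assets are zero by construction, which is why such households reach retirement with no buffer in the simulations.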

  14. Short-term optimal operation of Three-gorge and Gezhouba cascade hydropower stations in non-flood season with operation rules from data mining

    International Nuclear Information System (INIS)

    Ma Chao; Lian Jijian; Wang Junna

    2013-01-01

    Highlights: ► Short-term optimal operation of the Three-gorge and Gezhouba hydropower stations was studied. ► A key state variable and exact constraints were proposed to improve the numerical model. ► The proposed operation rules were applied in the population initiation step for faster optimization. ► A culture algorithm with differential evolution was selected as the optimization method. ► The proposed model and method were verified by a case study with feasible operation solutions. - Abstract: Information hidden in the characteristics and relationship data of cascade hydropower stations can be extracted by data-mining approaches as operation rules and optimization support information. In this paper, with the Three-gorge and Gezhouba cascade hydropower stations as an example, two operation rules are proposed based on the different operating efficiencies of the water turbines and the tight water volume and hydraulic relationship between the two hydropower stations. The rules are applied to improve the optimization model with more exact decision and state variables and constraints. They are also used in the population initiation step to develop better individuals, with a culture algorithm with differential evolution as the optimization method. In the case study, the total feasible population and the best solution based on an initial population with an operation rule can be obtained with a shorter computation time than that of a purely randomly initiated population. The amount of electricity generated in a dispatch period with an operation rule also increases, with an average increase rate of 0.025%. For a fixed water discharge process of the Three-gorge hydropower station, there is a better rule to decide an operation plan of the Gezhouba hydropower station in which the total hydraulic head for electricity generation is optimized and distributed with inner-plant economic operation considered.

  15. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    Science.gov (United States)

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
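
Choosing the Nash solution among the Pareto-optimal points can be sketched as maximizing the product of improvements over the nadir (worst-per-objective) point, one common formalization with the nadir as the disagreement point; the two-objective front below is illustrative, not taken from the paper:

```python
def nash_solution(pareto):
    """Pick the Nash point from a Pareto front of two minimization
    objectives: maximize the product of improvements over the nadir
    (worst value per objective), used as the disagreement point."""
    nadir = (max(p[0] for p in pareto), max(p[1] for p in pareto))
    return max(pareto, key=lambda p: (nadir[0] - p[0]) * (nadir[1] - p[1]))

# Hypothetical front: (set-point IAE, load-disturbance IAE) pairs.
front = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.0), (5.0, 2.0), (9.0, 1.0)]
```

The product criterion naturally rejects the extreme points of the front, which sacrifice one objective almost entirely for the other.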

  16. Making the Optimal Decision in Selecting Protective Clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2008-01-01

    Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for decommissioning or for maintenance and repair work on radioactive systems, a number of interrelated factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single-use protective clothing (SUPC), and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed, including representatives from budgets and cost, safety, radwaste, health physics, and key user groups, to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'

  17. Generalized concavity in fuzzy optimization and decision analysis

    CERN Document Server

    Ramík, Jaroslav

    2002-01-01

    Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...

  18. The res judicata rule in jurisdictional decisions of the international Court of justice

    Directory of Open Access Journals (Sweden)

    Kreća Milenko

    2014-01-01

    Full Text Available The author discusses the effects of the res judicata rule as regards jurisdictional decisions of the International Court of Justice. He finds that there exists a special position of a judgment on preliminary objection in respect to both aspects of the res judicata rule - its binding force and finality. A perception of distinct relativity of a jurisdictional decision of the Court, expressing its interlocutory character, pervades, in his opinion, the body of law regulating the Court's activity. Preliminary objections as such do not exhaust objections to the jurisdiction of the Court, as evidenced by non-preliminary objections to the jurisdiction of the Court giving rise to the application of the principle compétence de la compétence understood in the narrow sense. With regard to the binding force of a judgment on preliminary objections, it does not create legal obligations stricto sensu. The author finds that the relative character of jurisdictional decisions of the Court as compared with a judgment on the merits is justified on a number of grounds.

  19. Integrated decision making for the optimal bioethanol supply chain

    International Nuclear Information System (INIS)

    Corsano, Gabriela; Fumero, Yanina; Montagna, Jorge M.

    2014-01-01

    Highlights: • Optimal allocation, design and production planning of integrated ethanol plants are considered. • A Mixed Integer Programming model is presented for solving the integration problem. • Different tradeoffs can be assessed and analyzed. • The modeling framework represents a useful tool for guiding decision making. - Abstract: Bioethanol production poses different challenges that require an integrated approach. Previous works have usually focused on specific perspectives of the global problem. On the contrary, bioethanol in particular, and biofuels in general, require an integrated decision-making framework that takes into account the needs and concerns of the different members involved in the supply chain. In this work, a Mixed Integer Linear Programming (MILP) model for the optimal allocation, design and production planning of integrated ethanol/yeast plants is considered. The proposed formulation addresses the relations between different aspects of the bioethanol supply chain and provides an efficient tool to assess the global operation of the supply chain taking into account different points of view. The model proposed in this work simultaneously determines the structure of a three-echelon supply chain (raw material sites, production facilities and customer zones), the design of each installed plant and operational considerations through production campaigns. Yeast production is considered in order to reduce the negative environmental impact caused by bioethanol residues. Several cases are presented in order to assess the approach's capabilities and to evaluate the tradeoffs among all the decisions.

  20. Diagnostic accuracy of clinical decision rules to exclude fractures in acute ankle injuries : systematic review and meta-analysis

    NARCIS (Netherlands)

    Barelds, Ingrid; Krijnen, Wim P; van de Leur, Johannes P; van der Schans, Cees P; Goddard, Robert J

    BACKGROUND: Ankle decision rules are developed to expedite patient care and reduce the number of radiographs of the ankle and foot. Currently, only three systematic reviews have been conducted on the accuracy of the Ottawa Ankle and Foot Rules (OAFR) in adults and children. However, no systematic

  1. Optimizing Fuzzy Rule Base for Illumination Compensation in Face Recognition using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Bima Sena Bayu Dewantara

    2014-12-01

    Full Text Available Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive and trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing a fuzzy rule base using a Genetic Algorithm to compensate for illumination effects in face recognition. Since uneven illumination contributes negative effects to the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination in human face recognition. We build a pair of models from a single image and reason about those models using fuzzy logic. The fuzzy rule base, then, is optimized using a Genetic Algorithm. This approach spends less computation cost while still keeping high performance. Based on the experimental results, we can show that our algorithm is feasible for recognizing the desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm

  2. Decision Support Model for Optimal Management of Coastal Gate

    Science.gov (United States)

    Ditthakit, Pakorn; Chittaladakorn, Suwatana

    2010-05-01

    The coastal areas are intensely settled by human beings owing to their fertile natural resources. At present, however, those areas are facing water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely performed in several countries. This measure requires a plan for suitably operating the coastal gates. Coastal gate operation is a complicated task that usually concerns the management of multiple purposes, which generally conflict with one another. This paper delineates the methodology and underlying theories for developing a decision support model for coastal gate operation scheduling. The developed model is based on coupled simulation and optimization models. A weighting optimization technique based on Differential Evolution (DE) was selected for solving the multiple-objective problem. The hydrodynamic and water quality models were repeatedly invoked while searching for the optimal gate operations. In addition, two forecasting models, an autoregressive (AR) model and a harmonic analysis (HA) model, were applied for forecasting water levels and tide levels, respectively. To demonstrate the applicability of the developed model, it was applied to plan the operations of the hypothetical Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in the southern part of Thailand. It was found that the proposed model could satisfyingly assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
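
The autoregressive forecasting component can be sketched in its simplest form, an AR(1) model fitted by least squares; a production model would include a mean term and higher-order lags:

```python
def fit_ar1(series):
    """Least-squares AR(1) fit through the origin: x_t ≈ phi * x_(t-1)."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

def forecast(last, phi, steps):
    """Iterate the fitted recursion forward from the last observed level."""
    out = []
    for _ in range(steps):
        last *= phi
        out.append(last)
    return out
```

In the gate-scheduling context, forecasts like these feed the simulation model that the optimizer queries for each candidate operation plan.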

  3. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael

    2016-03-14

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.
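
The starting point mentioned in the abstract, classical Gaussian quadrature as the optimal rule for a discontinuous odd-degree space, is easy to check numerically: the two-point Gauss-Legendre rule integrates every cubic exactly, while no two-point rule is also exact for degree four:

```python
import math

def gauss2(f, a=-1.0, b=1.0):
    """Two-point Gauss-Legendre rule on [a, b]: with only two evaluations it
    is exact for all polynomials up to degree 3 (nodes at +-1/sqrt(3) on the
    reference interval, both weights equal to 1)."""
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    x = 1.0 / math.sqrt(3.0)
    return half * (f(mid - half * x) + f(mid + half * x))
```

The homotopy continuation in the paper starts from this rule and deforms the nodes and weights toward spaces of higher continuity.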

  4. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2016-01-01

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.

  5. When none of us perform better than all of us together: the role of analogical decision rules in groups.

    Directory of Open Access Journals (Sweden)

    Nicoleta Meslec

    Full Text Available During social interactions, groups develop collective competencies that (ideally) should assist groups to outperform the average standalone individual member (weak cognitive synergy) or the best performing member in the group (strong cognitive synergy). In two experimental studies we manipulate the type of decision rule used in group decision-making (identify the best vs. collaborative) and the way in which the decision rules are induced (direct vs. analogical), and we test the effect of these two manipulations on the emergence of strong and weak cognitive synergy. Our most important results indicate that an analogically induced decision rule (the imitate-the-successful heuristic) in which groups have to identify the best member and build on his/her performance (the take-the-best heuristic) is the most conducive to strong cognitive synergy. Our studies bring evidence for the role of analogy-making in groups as well as the role of fast-and-frugal heuristics in group decision-making.

  6. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  7. Mate choice when males are in patches: optimal strategies and good rules of thumb.

    Science.gov (United States)

    Hutchinson, John M C; Halupka, Konrad

    2004-11-07

    In standard mate-choice models, females encounter males sequentially and decide whether to inspect the quality of another male or to accept a male already inspected. What changes when males are clumped in patches and there is a significant cost to travel between patches? We use stochastic dynamic programming to derive optimum strategies under various assumptions. With zero costs to returning to a male in the current patch, the optimal strategy accepts males above a quality threshold which is constant whenever one or more males in the patch remain uninspected; this threshold drops when inspecting the last male in the patch, so returns may occur only then and are never to a male in a previously inspected patch. With non-zero within-patch return costs, such a two-threshold rule still performs extremely well, but a more gradual decline in acceptance threshold is optimal. Inability to return at all need not decrease performance by much. The acceptance threshold should also decline if it gets harder to discover the last males in a patch. Optimal strategies become more complex when mean male quality varies systematically between patches or years, and females estimate this in a Bayesian manner through inspecting male qualities. It can then be optimal to switch patch before inspecting all males on a patch, or, exceptionally, to return to an earlier patch. We compare performance of various rules of thumb in these environments and in ones without a patch structure. A two-threshold rule performs excellently, as do various simplifications of it. The best-of-N rule outperforms threshold rules only in non-patchy environments with between-year quality variation. The cutoff rule performs poorly.
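
The constant-threshold structure reported for the zero-cost case has a compact backward-induction core. In the simplest setting (no patch structure, qualities i.i.d. Uniform(0,1), no recall), the expected value of continuing with k candidates still uninspected doubles as the acceptance threshold; the sketch below uses that simplification, not the paper's full patch model:

```python
def acceptance_thresholds(n):
    """Backward induction for sequential choice over n candidates with
    Uniform(0,1) qualities and no recall. v[k] is the expected value when
    k + 1 candidates remain, and also the threshold for accepting the
    current candidate: accept quality q if q exceeds the value of going on.
    For q ~ U(0,1), E[max(q, t)] = t + (1 - t)^2 / 2."""
    v = [0.5]                                   # last candidate: must accept
    for _ in range(n - 1):
        t = v[-1]
        v.append(t + (1.0 - t) ** 2 / 2.0)
    return v
```

The thresholds rise with the number of candidates left, which is exactly the declining-threshold-over-time behavior the dynamic program exhibits as a patch empties.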

  8. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  9. Optimizing perioperative decision making: improved information for clinical workflow planning.

    Science.gov (United States)

    Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our modeled solutions generated feasible schedules that varied as expected, based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.

  10. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    Science.gov (United States)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners and so on. Therefore, this paper focuses on a new data-mining algorithm, combining the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are made to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
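
The quantities every association-rule miner scores, support and confidence, can be sketched with a brute-force single-antecedent miner; a GA/SA hybrid like the one described above searches the same rule space more efficiently rather than changing these definitions. The learner-activity transactions are illustrative:

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Brute-force rules A -> B over item pairs.
    support(A -> B) = P(A and B); confidence(A -> B) = P(B | A)."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    def support(itemset):
        return sum(1 for t in transactions if set(itemset) <= t) / n
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            s = support((ante, cons))
            if s >= min_support and s / support((ante,)) >= min_confidence:
                rules.append((ante, cons, s, s / support((ante,))))
    return rules

# Hypothetical learner activity logs: which resources each student used.
transactions = [{'quiz', 'video'}, {'quiz', 'video'}, {'quiz', 'video'}, {'quiz'}]
rules = mine_rules(transactions)
```

The min_support and min_confidence thresholds are the usual levers for taming rule explosion before any metaheuristic search is applied.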

  11. Optimal decision making and matching are tied through diminishing returns.

    Science.gov (United States)

    Kubanek, Jan

    2017-08-08

    How individuals make decisions has been a matter of long-standing debate among economists and researchers in the life sciences. In economics, subjects are viewed as optimal decision makers who maximize their overall reward income. This framework has been widely influential, but requires a complete knowledge of the reward contingencies associated with a given choice situation. Psychologists and ecologists have observed that individuals tend to use a simpler "matching" strategy, distributing their behavior in proportion to relative rewards associated with their options. This article demonstrates that the two dominant frameworks of choice behavior are linked through the law of diminishing returns. The relatively simple matching can in fact provide maximal reward when the rewards associated with decision makers' options saturate with the invested effort. Such saturating relationships between reward and effort are hallmarks of the law of diminishing returns. Given the prevalence of diminishing returns in nature and social settings, this finding can explain why humans and animals so commonly behave according to the matching law. The article underscores the importance of the law of diminishing returns in choice behavior.
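
The mechanism can be illustrated numerically: when reward saturates with invested effort, the optimal allocation of behavior between two options is interior and equalizes marginal returns, which is what makes proportional ("matching") allocation close to reward-maximizing. The exponential saturation curves below are illustrative assumptions, not the article's model:

```python
import math

def optimal_split(r1, r2, grid=10001):
    """Grid-search the behavior allocation b in [0, 1] that maximizes the
    total reward r1(b) + r2(1 - b)."""
    best = max(range(grid),
               key=lambda i: r1(i / (grid - 1)) + r2(1 - i / (grid - 1)))
    return best / (grid - 1)

# Saturating ("diminishing returns") reward schedules for the two options.
r1 = lambda b: 3.0 * (1.0 - math.exp(-4.0 * b))
r2 = lambda b: 1.0 * (1.0 - math.exp(-4.0 * b))
b = optimal_split(r1, r2)
```

For these curves, equalizing marginal returns gives the closed form b = (4 + ln 3) / 8 ≈ 0.637, which the grid search recovers: the richer option gets more behavior, but not all of it, precisely because its returns diminish.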

  12. A diagnosis-based clinical decision rule for spinal pain part 2: review of the literature

    Directory of Open Access Journals (Sweden)

    Hurwitz Eric L

    2008-08-01

    Full Text Available Abstract Background Spinal pain is a common and often disabling problem. The research on various treatments for spinal pain has, for the most part, suggested that while several interventions have demonstrated mild to moderate short-term benefit, no single treatment has a major impact on either pain or disability. There is great need for more accurate diagnosis in patients with spinal pain. In a previous paper, the theoretical model of a diagnosis-based clinical decision rule was presented. The approach is designed to provide the clinician with a strategy for arriving at a specific working diagnosis from which treatment decisions can be made. It is based on three questions of diagnosis. In the current paper, the literature on the reliability and validity of the assessment procedures that are included in the diagnosis-based clinical decision rule is presented. Methods The databases of Medline, Cinahl, Embase and MANTIS were searched for studies that evaluated the reliability and validity of clinic-based diagnostic procedures for patients with spinal pain that have relevance for questions 2 (which investigates characteristics of the pain source) and 3 (which investigates perpetuating factors of the pain experience). In addition, the reference lists of identified papers and the authors' libraries were searched. Results A total of 1769 articles were retrieved, of which 138 were deemed relevant. Fifty-one studies related to reliability and 76 related to validity. One study evaluated both reliability and validity. Conclusion Regarding some aspects of the DBCDR, there are a number of studies that allow the clinician to have a reasonable degree of confidence in his or her findings. This is particularly true for centralization signs, neurodynamic signs and psychological perpetuating factors. There are other aspects of the DBCDR in which a lesser degree of confidence is warranted, and in which further research is needed.

  13. Justification, optimization and decision-aiding in existing exposure situations

    International Nuclear Information System (INIS)

    Hedemann-Jensen, Per

    2004-01-01

    The existing ICRP system of radiological protection from 1990 (ICRP Publication 60) can be seen as a binary or dual-line system dealing with protection in exposure situations categorized as either practices or interventions. The distinction between practices and interventions is summarized in the paper with focus on some of the problems experienced in making such a distinction. The protection principles within the existing system of protection are presented with emphasis on the application to de facto or existing exposure situations. Decisions on countermeasures to mitigate the consequences of existing exposure situations, such as nuclear or radiological accidents and naturally occurring exposure situations, include factors or attributes describing benefits from the countermeasure and those describing harm. Some of these attributes are discussed, and the general process of justification of intervention and optimization of protection, arriving at generic reference levels for implementing protective measures, is presented. In addition, the role of radiological protection professionals and other stakeholders in the decision-making process is discussed. Special attention is given to the question whether radiological protection should form only one of many decision-aiding inputs to a broader societal decision-making process or whether societal aspects should be fully integrated into the radiological protection framework. The concepts of practices and interventions, however logical they are, have created some confusion when applied to protection of the public following a nuclear or radiological accident. These problems may be solved in a new set of general ICRP recommendations on radiological protection, which are anticipated to supersede Publication 60 in 2005. The evolution of the basic ICRP principles for radiological protection in existing exposure situations into a new set of ICRP recommendations is briefly discussed based upon the various material that has been presented.

  14. Sensitivity study on heuristic rules applied to the neutronic optimization of cells for BWR

    International Nuclear Information System (INIS)

    Gonzalez C, J.; Martin del Campo M, C.; Francois L, J.L.

    2004-01-01

    The objective of this work is to verify the validity of the heuristic rules that have been applied in the processes of radial optimization of fuel cells. The rule concerning the placement of fuel in the corners of the cell was examined, and special attention was paid to the influence of the position and concentration of the gadolinium-bearing pellets on the reactivity of the cell and on the safety parameters. The evaluation was carried out on cells designed in violation of the heuristic rules. For both cases the cells were analyzed in an infinite lattice using the HELIOS code. Additionally, for the second case, a more exhaustive stage was carried out in which one of the studied cells that satisfied the safety and reactivity parameters was used to generate the design of an assembly; this assembly was then used with CM-PRESTO to calculate the behavior of the core during three operation cycles. (Author)

  15. Investigation of Multi-Criteria Decision Consistency: A Triplex Approach to Optimal Oilfield Portfolio Investment Decisions

    Science.gov (United States)

    Qaradaghi, Mohammed

    The complexity of capital-intensive oil and gas portfolio investments is continuously growing. It is manifested in the constant increase in the type, number and degree of risks and uncertainties, which consequently lead to more challenging decision-making problems. A typical complex decision-making problem in petroleum exploration and production (E&P) is the selection and prioritization of oilfields/projects in a portfolio investment. Prioritizing oilfields may be required for different purposes, including the achievement of a targeted production and the allocation of limited available development resources. These resources cannot be distributed evenly, nor can they be allocated based on oilfield size or production capacity alone, since various other factors need to be considered simultaneously. These factors may include subsurface complexity, size of reservoir, plateau production and needed infrastructure, in addition to other issues of strategic concern, such as socio-economic, environmental and fiscal policies, particularly when the decision making involves governments or national oil companies. Therefore, it would be imperative to employ decision-aiding tools that not only address these factors, but also incorporate the decision makers' preferences clearly and accurately. However, the tools commonly used in project portfolio selection and optimization, including intuitive approaches, vary in their focus and strength in addressing the different criteria involved in such decision problems. They are also disadvantaged by a number of drawbacks, which may include lacking the capacity to address multiple and interrelated criteria, uncertainty and risk, project relationships with regard to value contribution and optimum resource utilization, non-monetary attributes, and the decision maker's knowledge and expertise, in addition to varying levels of ease of use and other practical and theoretical drawbacks. These drawbacks have motivated researchers to investigate other tools and

  16. Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices

    Directory of Open Access Journals (Sweden)

    Andrea Baiocchi

    2015-01-01

    Full Text Available Packet filtering and processing rule management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side there is a need to maintain the logic of high level policies, which requires administrators to implement and update a large amount of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic adaptive optimization of large rule lists is useful for general purpose computers used as filtering devices, without specifically designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic adaptive algorithm to find conflict-free optimized rule sets, by relying on information gathered with traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO development on the packet filtering devices. We demonstrate the benefit entailed by the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.
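
    The paper's algorithm is not reproduced in the abstract, but its central constraint can be sketched: reorder rules by observed hit counts without ever swapping two overlapping rules with different actions, which would change filtering semantics. The rule format, field names and example rules below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    proto: str    # 'tcp', 'udp' or '*' (wildcard)
    ports: range  # destination port range
    action: str   # 'accept' or 'drop'
    hits: int     # firing count gathered from traffic logs

def overlaps(a, b):
    """True if some packet can match both rules and their actions differ;
    such pairs must keep their relative order to stay conflict-free."""
    proto = a.proto == b.proto or "*" in (a.proto, b.proto)
    ports = a.ports.start < b.ports.stop and b.ports.start < a.ports.stop
    return proto and ports and a.action != b.action

def reorder(rules):
    """Greedy precedence-preserving sort: repeatedly emit the most-hit
    rule all of whose overlapping predecessors were already emitted."""
    remaining = list(rules)  # keeps the original priority order
    out = []
    while remaining:
        eligible = [r for i, r in enumerate(remaining)
                    if not any(overlaps(q, r) for q in remaining[:i])]
        pick = max(eligible, key=lambda r: r.hits)
        remaining.remove(pick)
        out.append(pick)
    return out

r1 = Rule("block-telnet", "tcp", range(23, 24), "drop", hits=5)
r2 = Rule("allow-web", "tcp", range(80, 81), "accept", hits=9000)
r3 = Rule("allow-dns", "udp", range(53, 54), "accept", hits=700)
r4 = Rule("default-drop", "*", range(0, 65536), "drop", hits=40)
optimized = reorder([r1, r2, r3, r4])
print([r.name for r in optimized])
```

    The hot web and DNS rules float to the front, while the catch-all drop stays behind the accept rules it overlaps, so the filtering semantics are preserved.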

  17. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    Science.gov (United States)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive the optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II could locate the Pareto frontier in the non-dominated region efficiently due to the directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single objective genetic algorithm (GA). Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II could locate the non-dominated solutions faster and provide a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) The multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, the WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
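
    The abstract does not spell out WNSGA II's weighted crowding distance; a plausible sketch is the standard NSGA-II crowding distance with each objective's contribution scaled by a weight. The weighting scheme below is an assumption, not the paper's formula:

```python
def weighted_crowding_distance(front, weights):
    """NSGA-II crowding distance with per-objective weights (assumed
    weighting; boundary points get infinite distance as usual)."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += weights[k] * gap / (fmax - fmin)
    return dist

# Three points on a two-objective front (toy values).
front = [(0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]
print(weighted_crowding_distance(front, (1.0, 1.0)))
```

    Raising a weight biases the selection pressure toward spreading solutions along that objective, which is one way directed searching could be realized.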

  18. How accurate are interpretations of curriculum-based measurement progress monitoring data? Visual analysis versus decision rules.

    Science.gov (United States)

    Van Norman, Ethan R; Christ, Theodore J

    2016-10-01

    Curriculum-based measurement of oral reading (CBM-R) is used to monitor the effects of academic interventions for individual students. Decisions to continue, modify, or terminate these interventions are made by interpreting time series CBM-R data. Such interpretation is founded upon visual analysis or the application of decision rules. The purpose of this study was to compare the accuracy of visual analysis and decision rules. Visual analysts interpreted 108 CBM-R progress monitoring graphs one of three ways: (a) without graphic aids, (b) with a goal line, or (c) with a goal line and a trend line. Graphs differed along three dimensions, including trend magnitude, variability of observations, and duration of data collection. Automated trend line and data point decision rules were also applied to each graph. Inferential analyses permitted the estimation of the probability of a correct decision (i.e., the student is improving - continue the intervention, or the student is not improving - discontinue the intervention) for each evaluation method as a function of trend magnitude, variability of observations, and duration of data collection. All evaluation methods performed better when students made adequate progress. Visual analysis and decision rules performed similarly when observations were less variable. Results suggest that educators should collect data for more than six weeks, take steps to control measurement error, and visually analyze graphs when data are variable. Implications for practice and research are discussed. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
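
    A minimal version of the trend-line decision rule studied here fits an ordinary-least-squares trend to the weekly scores and compares its slope with the goal-line slope. The scores, baseline and goal below are toy values, not the study's data:

```python
def ols_slope(scores):
    """Least-squares slope of weekly CBM-R scores (words correct/min)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def trend_rule(scores, baseline, goal, weeks_to_goal):
    """Continue the intervention if the fitted trend meets or exceeds
    the goal-line slope; otherwise flag it for modification."""
    goal_slope = (goal - baseline) / weeks_to_goal
    return "continue" if ols_slope(scores) >= goal_slope else "modify"

scores = [42, 45, 44, 48, 51, 50, 54, 56]   # 8 weekly probes (toy data)
print(trend_rule(scores, baseline=42, goal=55, weeks_to_goal=10))
```

    The study's point about measurement error is visible here: with noisy scores the fitted slope is unstable for short series, which is one reason the authors recommend collecting more than six weeks of data.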

  19. Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.

    Science.gov (United States)

    Liu, Siqi; Wright, Adam; Hauskrecht, Milos

    2017-06-01

    A clinical decision support system (CDSS) and its components can malfunction due to various reasons. Monitoring the system and detecting its malfunctions can help one to avoid any potential mistakes and associated costs. In this paper, we investigate the problem of detecting changes in the CDSS operation, in particular its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want to have a score indicating how likely it is that there is a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower delay in detection compared with existing change-point detection methods.
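
    The method's two ingredients can be sketched: deseasonalize the firing counts, then score a mean shift with a likelihood ratio. The sketch below uses a much cruder seasonal adjustment than the STL decomposition the authors build on, and the counts are toy data:

```python
def deseasonalize(y, period):
    """Remove a per-phase seasonal mean (a crude stand-in for the
    Seasonal-Trend decomposition used in the paper)."""
    means = [sum(y[p::period]) / len(y[p::period]) for p in range(period)]
    return [y[i] - means[i % period] for i in range(len(y))]

def change_score(resid):
    """Max Gaussian log-likelihood ratio of a mean shift at any split
    point; higher scores indicate stronger evidence of a change."""
    n = len(resid)
    var = sum(r * r for r in resid) / n or 1e-12
    overall = sum(resid) / n
    best = 0.0
    for k in range(2, n - 1):
        m1 = sum(resid[:k]) / k
        m2 = sum(resid[k:]) / (n - k)
        # improvement of the "two means" fit over the "one mean" fit
        llr = (k * m1 * m1 + (n - k) * m2 * m2
               - n * overall * overall) / (2 * var)
        best = max(best, llr)
    return best

week = [100, 60, 40, 45, 50, 90, 95]   # daily firing pattern (toy data)
stable = week * 4                      # 4 normal weeks
broken = [c // 2 for c in week] * 4    # rule silently fires half as often
print(change_score(deseasonalize(stable + broken, 7)))
print(change_score(deseasonalize(stable + stable, 7)))
```

    The broken series scores far higher than the stable one, so thresholding this score yields an online alarm for rule malfunctions.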

  20. Comparing two socially optimal work allocation rules when having a profit optimizing subcontractor with ample capacity

    DEFF Research Database (Denmark)

    Larsen, Christian

    2005-01-01

    We study a service system modelled as a single server queuing system where request for service either can be processed at the service system or by a subcontractor. In the former case the customer is incurred waiting costs but the service is free, while in the latter case the customer must pay...... for the service but there is no waiting time, hence no waiting costs. Under the premises that the subcontractor prices his services in order to maximize his profit, we study two work allocation rules, which given the price of subcontractor seek to allocate work such that the costs of the customers are minimized...

  1. Comparing two socially optimal work allocation rules when having a profit optimizing subcontractor with ample capacity

    DEFF Research Database (Denmark)

    Larsen, Christian

    2003-01-01

    We study a service system modelled as a single server queueing system where request for service either can be processed at the service system or by a subcontractor. In the former case the customer is incurred waiting costs but the service is free, while in the latter case the customer must pay...... for the service but there is no waiting time, hence no waiting costs. Under the premises that the subcontractor prices his services in order to maximize his profit, we study two work allocation rules, which given the price of the subcontractor seek to allocate work such that the costs of the customers...

  2. Comparing two socially optimal work allocation rules when having a profit optimizing subcontractor with ample capacity

    DEFF Research Database (Denmark)

    Larsen, Christian

    We study a service system modelled as a single server queueing system where requests for service either can be processed at the service system or by a subcontractor. In the former case the customer is incurred waiting costs but the service is free, while in the latter case the customer must...... pay for the service but there is no waiting time, hence no waiting costs. Under the premises that the subcontractor prices his services in order to maximize his profit, we study two work allocation rules, which given the price of the subcontractor seek to allocate work such that the costs...

  3. Health Cost Risk and Optimal Retirement Provision : A Simple Rule for Annuity Demand

    NARCIS (Netherlands)

    Peijnenburg, J.M.J.; Nijman, T.E.; Werker, B.J.M.

    2010-01-01

    We analyze the effect of health cost risk on optimal annuity demand and consumption/savings decisions. Many retirees are exposed to sizeable out-of-pocket medical expenses, while annuities potentially impair the ability to get liquidity to cover these costs and smooth consumption. We find that if

  4. The triangular density to approximate the normal density: decision rules-of-thumb

    International Nuclear Information System (INIS)

    Scherer, William T.; Pomroy, Thomas A.; Fuller, Douglas N.

    2003-01-01

    In this paper we explore the approximation of the normal density function with the triangular density function, a density function that has extensive use in risk analysis. Such an approximation generates a simple piecewise-linear density function and a piecewise-quadratic distribution function that can be easily manipulated mathematically and that produces surprisingly accurate performance under many instances. This mathematical tractability proves useful when it enables closed-form solutions not otherwise possible, as with problems involving the embedded use of the normal density. For benchmarking purposes we compare the basic triangular approximation with two flared triangular distributions and with two simple uniform approximations; however, throughout the paper our focus is on using the triangular density to approximate the normal for reasons of parsimony. We also investigate the logical extensions of using a non-symmetric triangular density to approximate a lognormal density. Several issues associated with using a triangular density as a substitute for the normal and lognormal densities are discussed, and we explore the resulting numerical approximation errors for the normal case. Finally, we present several examples that highlight simple decision rules-of-thumb that the use of the approximation generates. Such rules-of-thumb, which are useful in risk and reliability analysis and general business analysis, can be difficult or impossible to extract without the use of approximations. These examples include uses of the approximation in generating random deviates, uses in mixture models for risk analysis, and an illustrative decision analysis problem. It is our belief that this exploratory look at the triangular approximation to the normal will provoke other practitioners to explore its possible use in various domains and applications
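
    The basic symmetric triangular approximation is easy to reproduce: matching the normal's mean and variance fixes the support half-width at sigma*sqrt(6), after which the worst-case CDF error can be measured directly (the grid scan below is an illustrative check, not the paper's benchmark):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def triangular_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the symmetric triangular density on [mu - c, mu + c] with
    c = sigma * sqrt(6), which matches the normal's mean and variance."""
    c = sigma * math.sqrt(6.0)
    z = x - mu
    if z <= -c:
        return 0.0
    if z >= c:
        return 1.0
    if z <= 0.0:
        return (z + c) ** 2 / (2.0 * c * c)
    return 1.0 - (c - z) ** 2 / (2.0 * c * c)

# Worst-case CDF discrepancy for the standard normal, scanned on a grid.
err = max(abs(normal_cdf(x / 100.0) - triangular_cdf(x / 100.0))
          for x in range(-400, 401))
print(round(err, 4))
```

    The maximum CDF error stays below 0.02, which is why the piecewise-quadratic distribution function is often an acceptable stand-in for the normal in closed-form work.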

  5. Optimal decisions of countries with carbon tax and carbon tariff

    Directory of Open Access Journals (Sweden)

    Yumei Hou

    2015-05-01

    Full Text Available Purpose: Reducing carbon emissions has recently been the core problem of controlling global warming and climate deterioration. This paper focuses on the optimal carbon taxation policy levied by countries and the impact on firms' optimal production decisions. Design/methodology/approach: This paper uses a two-stage game theory model to analyze the impact of carbon tariffs and taxes. Numerical simulation is used to supplement the theoretical analysis. Findings: Results derived from the paper indicate that the demand in an unstable market is significantly affected by the environmental damage level. A carbon tariff is a policy-oriented tax while a carbon tax is a market-oriented one. A comprehensive carbon taxation policy benefits developed countries, while a basic policy is more suitable for developing countries. Research limitations/implications: In this research, we do not consider random demand or asymmetric information, which may not reflect reality well. Originality/value: This work provides a different perspective in analyzing the impact of carbon taxes and tariffs. It is the first study to consider two consuming markets and the strategic game between two countries. The different international status of the countries considered is also a unique point.

  6. Microseismic Monitoring Design Optimization Based on Multiple Criteria Decision Analysis

    Science.gov (United States)

    Kovaleva, Y.; Tamimi, N.; Ostadhassan, M.

    2017-12-01

    Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor. One of the reasons for poor data quality is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize the planning of microseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include average distance between microseismic events and receivers, complexity of the recorded wavefield, signal-to-noise ratio, data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring, those factors should be considered to choose the best monitoring well(s), the optimum number of required geophones, and their depth. We use MCDA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this will be the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.

  7. Updating Optimal Decisions Using Game Theory and Exploring Risk Behavior Through Response Surface Methodology

    National Research Council Canada - National Science Library

    Jordan, Jeremy D

    2007-01-01

    .... Methodology is developed that allows a decision maker to change his perceived optimal policy based on available knowledge of the opponent's strategy, where the opponent is a rational decision maker...

  8. Non-ad-hoc decision rule for the Dempster-Shafer method of evidential reasoning

    Science.gov (United States)

    Cheaito, Ali; Lecours, Michael; Bosse, Eloi

    1998-03-01

    This paper is concerned with the fusion of identity information through the use of statistical analysis rooted in Dempster-Shafer theory of evidence to provide automatic identification aboard a platform. An identity information process for a baseline Multi-Source Data Fusion (MSDF) system is defined. The MSDF system is applied to information sources which include a number of radars, IFF systems, an ESM system, and a remote track source. We use a comprehensive Platform Data Base (PDB) containing all the possible identity values that the potential target may take, and we use fuzzy logic strategies that enable the fusion of subjective attribute information from sensors and the PDB, so that target identity can be derived more quickly, more precisely, and with statistically quantifiable measures of confidence. The conventional Dempster-Shafer method lacks a formal basis upon which decisions can be made in the face of ambiguity. We define a non-ad hoc decision rule based on the expected utility interval for pruning the 'unessential' propositions which would otherwise overload real-time data fusion systems. An example has been selected to demonstrate the implementation of our modified Dempster-Shafer method of evidential reasoning.
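
    The combination step underlying this work (prior to the paper's expected-utility pruning, which is not reproduced here) is Dempster's rule. The masses below are an illustrative two-sensor example, not the paper's data:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments
    whose focal elements are frozensets over the frame of discernment."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

FRIEND, HOSTILE = frozenset({"friend"}), frozenset({"hostile"})
THETA = FRIEND | HOSTILE                         # full frame: ignorance
m_iff = {FRIEND: 0.6, THETA: 0.4}                # e.g. an IFF response
m_esm = {FRIEND: 0.5, HOSTILE: 0.3, THETA: 0.2}  # e.g. an ESM classification
fused = combine(m_iff, m_esm)
print({tuple(sorted(s)): round(w, 4) for s, w in fused.items()})
```

    The fused mass concentrates on "friend" while retaining some ignorance; a decision rule such as the paper's expected-utility interval would then decide whether the residual ambiguity is small enough to act on.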

  9. A clinical decision rule for the use of plain radiography in children after acute wrist injury: development and external validation of the Amsterdam Pediatric Wrist Rules

    Energy Technology Data Exchange (ETDEWEB)

    Slaar, Annelie; Maas, Mario; Rijn, Rick R. van [University of Amsterdam, Department of Radiology, Academic Medical Centre, Meibergdreef 9, 1105, AZ, Amsterdam (Netherlands); Walenkamp, Monique M.J.; Bentohami, Abdelali; Goslings, J.C. [University of Amsterdam, Trauma Unit, Department of Surgery, Academic Medical Centre, Amsterdam (Netherlands); Steyerberg, Ewout W. [Erasmus MC - University Medical Centre, Department of Public Health, Rotterdam (Netherlands); Jager, L.C. [University of Amsterdam, Emergency Department, Academic Medical Centre, Amsterdam (Netherlands); Sosef, Nico L. [Spaarne Hospital, Department of Surgery, Hoofddorp (Netherlands); Velde, Romuald van [Tergooi Hospitals, Department of Surgery, Hilversum (Netherlands); Ultee, Jan M. [Sint Lucas Andreas Hospital, Department of Surgery, Amsterdam (Netherlands); Schep, Niels W.L. [University of Amsterdam, Trauma Unit, Department of Surgery, Academic Medical Centre, Amsterdam (Netherlands); Maasstadziekenhuis Rotterdam, Department of Surgery, Rotterdam (Netherlands)

    2016-01-15

    In most hospitals, children with acute wrist trauma are routinely referred for radiography. To develop and validate a clinical decision rule to decide whether radiography in children with wrist trauma is required. We prospectively developed and validated a clinical decision rule in two study populations. All children who presented in the emergency department of four hospitals with pain following wrist trauma were included and evaluated for 18 clinical variables. The outcome was a wrist fracture diagnosed by plain radiography. Included in the study were 787 children. The prediction model consisted of six variables: age, swelling of the distal radius, visible deformation, distal radius tender to palpation, anatomical snuffbox tender to palpation, and painful or abnormal supination. The model showed an area under the receiver operator characteristics curve of 0.79 (95% CI: 0.76-0.83). The sensitivity and specificity were 95.9% and 37.3%, respectively. The use of this model would have resulted in a 22% absolute reduction of radiographic examinations. In a validation study, 7/170 fractures (4.1%, 95% CI: 1.7-8.3%) would have been missed using the decision model. The decision model may be a valuable tool to decide whether radiography in children after wrist trauma is required. (orig.)

  10. A clinical decision rule for the use of plain radiography in children after acute wrist injury: development and external validation of the Amsterdam Pediatric Wrist Rules

    International Nuclear Information System (INIS)

    Slaar, Annelie; Maas, Mario; Rijn, Rick R. van; Walenkamp, Monique M.J.; Bentohami, Abdelali; Goslings, J.C.; Steyerberg, Ewout W.; Jager, L.C.; Sosef, Nico L.; Velde, Romuald van; Ultee, Jan M.; Schep, Niels W.L.

    2016-01-01

    In most hospitals, children with acute wrist trauma are routinely referred for radiography. To develop and validate a clinical decision rule to decide whether radiography in children with wrist trauma is required. We prospectively developed and validated a clinical decision rule in two study populations. All children who presented in the emergency department of four hospitals with pain following wrist trauma were included and evaluated for 18 clinical variables. The outcome was a wrist fracture diagnosed by plain radiography. Included in the study were 787 children. The prediction model consisted of six variables: age, swelling of the distal radius, visible deformation, distal radius tender to palpation, anatomical snuffbox tender to palpation, and painful or abnormal supination. The model showed an area under the receiver operator characteristics curve of 0.79 (95% CI: 0.76-0.83). The sensitivity and specificity were 95.9% and 37.3%, respectively. The use of this model would have resulted in a 22% absolute reduction of radiographic examinations. In a validation study, 7/170 fractures (4.1%, 95% CI: 1.7-8.3%) would have been missed using the decision model. The decision model may be a valuable tool to decide whether radiography in children after wrist trauma is required. (orig.)

  11. A decision support system and rule-based algorithm to augment the human interpretation of the 12-lead electrocardiogram.

    Science.gov (United States)

    Cairns, Andrew W; Bond, Raymond R; Finlay, Dewar D; Guldenring, Daniel; Badilini, Fabio; Libretti, Guido; Peace, Aaron J; Leslie, Stephen J

    The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation, improving interpretation accuracy and reducing missed co-abnormalities. The Differential Diagnoses Algorithm (DDA) was developed using web technologies where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed where subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p-value=0.1852), the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. Although results were not found to be statistically significant, we found; 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct
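
    The DDA's actual criteria are not given in the abstract, but the described pattern (diagnostic criteria stored as open JSON and queried by a rule-based pass over the interpreter's annotations) can be sketched with hypothetical criteria; the findings and thresholds below are illustrative, not the DDA's:

```python
import json

# Hypothetical ECG criteria in the open JSON form the paper describes.
CRITERIA = json.loads("""
[
  {"diagnosis": "Sinus tachycardia",
   "requires": ["sinus rhythm"], "rate_min": 100},
  {"diagnosis": "Sinus bradycardia",
   "requires": ["sinus rhythm"], "rate_max": 60},
  {"diagnosis": "Anterior STEMI",
   "requires": ["ST elevation V1-V4"]}
]
""")

def suggest(annotations, rate):
    """Rule-based pass: a diagnosis is suggested when all its required
    findings were annotated and any rate bounds are satisfied."""
    out = []
    for c in CRITERIA:
        if not all(f in annotations for f in c["requires"]):
            continue
        if "rate_min" in c and rate < c["rate_min"]:
            continue
        if "rate_max" in c and rate > c["rate_max"]:
            continue
        out.append(c["diagnosis"])
    return out

print(suggest({"sinus rhythm", "ST elevation V1-V4"}, rate=110))
```

    Because the criteria live in data rather than code, clinicians can extend the differential list without touching the reasoning pass, which matches the open-storage design the paper argues for.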

  12. Approximating optimal behavioural strategies down to rules-of-thumb: energy reserve changes in pairs of social foragers.

    Directory of Open Access Journals (Sweden)

    Sean A Rands

    Full Text Available Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.

  14. Research on the decision-making model of land-use spatial optimization

    Science.gov (United States)

    He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu

    2009-10-01

    Using the optimization results of landscape pattern and land-use structure as constraints on the cellular automata (CA) simulation, a decision-making model of land-use spatial optimization is established that couples the landscape pattern model with cellular automata, realizing quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.

  15. Improving the anesthetic process by a fuzzy rule based medical decision system.

    Science.gov (United States)

    Mendez, Juan Albino; Leon, Ana; Marrero, Ayoze; Gonzalez-Cava, Jose M; Reboso, Jose Antonio; Estevez, Jose Ignacio; Gomez-Gonzalez, José F

    2018-01-01

    The main objective of this research is the design and implementation of a new fuzzy logic tool for automatic drug delivery in patients undergoing general anesthesia. The aim is to adjust the drug dose to the real patient needs using heuristic knowledge provided by clinicians. A two-level computer decision system is proposed. The idea is to release the clinician from routine tasks so that he can focus on other variables of the patient. The controller uses the Bispectral Index (BIS) to assess the hypnotic state of the patient. The fuzzy controller was included in a closed-loop system to reach the BIS target and reject disturbances. BIS was measured using a BIS VISTA monitor, a device capable of calculating the hypnosis level of the patient from EEG information. An infusion pump with propofol 1% is used to supply the drug to the patient. The inputs to the fuzzy inference system are the BIS error and the BIS rate; the output is the infusion-rate increment. The mapping from the input information to the appropriate output is given by a rule base built on the knowledge of clinicians. To evaluate the performance of the proposed fuzzy closed-loop system, an observational study was carried out. Eighty-one patients scheduled for ambulatory surgery were randomly distributed into 2 groups: one group using a fuzzy logic based closed-loop system (FCL) to automate the administration of propofol (42 cases); the second group using manual delivery of the drug (39 cases). In both groups, the BIS target was 50. The FCL, designed with intuitive logic rules based on clinician experience, performed satisfactorily and outperformed manual administration in terms of accuracy throughout the maintenance stage. Copyright © 2018 Elsevier B.V. All rights reserved.
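    As a rough illustration of the closed-loop idea, fuzzy rules mapping BIS error and BIS rate to an infusion-rate increment, here is a minimal zero-order Sugeno-style sketch. The membership functions, rule base, and output values are invented placeholders, not the clinical rule base described in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infusion_increment(bis_error, bis_rate):
    """Zero-order Sugeno fuzzy controller (illustrative rule base only):
    inputs are BIS error (measured - target, so positive means the patient
    is too light) and BIS rate of change; output is an infusion-rate
    increment in ml/h."""
    # Fuzzify the inputs.
    err_neg = tri(bis_error, -50, -25, 0)    # BIS below target: too deep
    err_zero = tri(bis_error, -10, 0, 10)
    err_pos = tri(bis_error, 0, 25, 50)      # BIS above target: too light

    rate_falling = tri(bis_rate, -10, -5, 0)
    rate_steady = tri(bis_rate, -2, 0, 2)

    # Rules: (firing strength, crisp consequent in ml/h).
    rules = [
        (min(err_pos, rate_steady), +2.0),   # too light, steady -> more drug
        (err_zero, 0.0),                     # on target -> hold
        (min(err_neg, rate_steady), -2.0),   # too deep, steady -> less drug
        (min(err_pos, rate_falling), 0.0),   # too light but deepening -> hold
    ]
    total = sum(w for w, _ in rules)
    if total == 0:
        return 0.0
    # Weighted average of rule consequents (Sugeno defuzzification).
    return sum(w * out for w, out in rules) / total
```

    A real controller would use the clinicians' validated membership functions and many more rules; the structure, fuzzification, rule firing, and defuzzification is the part this sketch shows.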

  16. Combining two strategies to optimize biometric decisions against spoofing attacks

    Science.gov (United States)

    Li, Weifeng; Poh, Norman; Zhou, Yicong

    2014-09-01

    Spoof attack by replicating biometric traits represents a real threat to an automatic biometric verification/authentication system. This is because the system, originally designed to distinguish genuine users from impostors, simply cannot distinguish a replicated biometric sample (a replica) from a live sample. An effective solution is to obtain some measure that can indicate whether or not a biometric trait has been tampered with, e.g., a liveness detection measure. These measures are referred to as evidence of spoofing or anti-spoofing measures. In order to make the final accept/reject decision, a straightforward solution is to define two thresholds: one for the anti-spoofing measure, and another for the verification score. We compared two variants of a method that relies on applying two thresholds - one to the verification (matching) score and another to the anti-spoofing measure. Our experiments, carried out using a signature database as well as by simulation, show that the brute-force approach and its probabilistic variant each turn out to be optimal under different operating conditions.
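    The two-threshold decision described above can be stated in a few lines. The threshold values and score scales here are placeholder assumptions; in practice both thresholds are tuned on development data.

```python
def accept(match_score, liveness_score,
           match_threshold=0.5, liveness_threshold=0.5):
    """Brute-force two-threshold rule from the abstract: accept only if
    the verification (matching) score passes its threshold AND the
    anti-spoofing (liveness) measure passes its own threshold.
    Threshold values are illustrative placeholders."""
    return match_score >= match_threshold and liveness_score >= liveness_threshold

print(accept(0.8, 0.9))   # genuine live sample -> True
print(accept(0.8, 0.2))   # good match but suspected replica -> False
```

    The probabilistic variant mentioned in the abstract would replace this hard conjunction with a joint model of the two scores, which is why the two variants can each win under different operating conditions.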

  17. Making the optimal decision in selecting protective clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2007-01-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including - Protection; - Personnel Contamination; - Cost; - Radwaste; - Comfort; - Convenience; - Logistics/Rad Material Considerations; - Reject Rate of Laundered Clothing; - Durability; - Security; - Personnel Safety including Heat Stress; - Disposition of Gloves and Booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  19. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    Science.gov (United States)

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. The data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain a deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions necessary to achieve ideally spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
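    A regression tree builds its decision rules by repeatedly choosing the feature threshold that best separates the response. A single split step, on invented speed/aspect-ratio data (not the study's data or its exact algorithm), might look like this:

```python
def best_split(xs, ys):
    """One CART regression step: find the threshold on a single feature
    that minimises the summed squared error of the two resulting leaf
    means. Toy illustration of the tree-regression idea; the data below
    are invented."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        cost = sse(left) + sse(right)
        if best is None or cost < best[0]:
            best = (cost, t)
    return best[1]

# Invented data: spheronizer speed (rpm) vs. pellet aspect ratio;
# higher speed tends toward more spherical pellets (ratio near 1).
speed = [300, 400, 500, 700, 900, 1100]
ratio = [1.45, 1.40, 1.38, 1.12, 1.08, 1.05]
print(best_split(speed, ratio))
```

    Applying this step recursively over all 14 input variables is what yields the interpretable rule set ("speed > t1 and holes > t2 implies aspect ratio near 1") reported in the paper.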

  20. Verification and synthesis of optimal decision strategies for complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Summers, S. J.

    2013-07-01

    Complex systems make a habit of disagreeing with the mathematical models strategically designed to capture their behavior. A recursive process ensues where data is used to gain insight into the disagreement. A simple model may give way to a model with hybrid dynamics. A deterministic model may give way to a model with stochastic dynamics. In many cases, the modeling framework that sufficiently characterises the system is both hybrid and stochastic; these systems are referred to as stochastic hybrid systems. This dissertation considers the stochastic hybrid system framework for modeling complex systems and provides mathematical methods for analysing, and synthesizing decision laws for, such systems. We first propose a stochastic reach-avoid problem for discrete time stochastic hybrid systems. In particular, we present a dynamic programming based solution to a probabilistic reach-avoid problem for a controlled discrete time stochastic hybrid system. We address two distinct interpretations of the reach-avoid problem via stochastic optimal control. In the first case, a sum-multiplicative cost function is introduced along with a corresponding dynamic recursion that quantifies the probability of hitting a target set at some point during a finite time horizon, while avoiding an unsafe set at all preceding time steps. In the second case, we introduce a multiplicative cost function and a dynamic recursion that quantifies the probability of hitting a target set at the terminal time, while avoiding an unsafe set at all preceding time steps. In each case, optimal reach-avoid control policies are derived as the solution to an optimal control problem via dynamic programming. We next introduce an extension of the reach-avoid problem where we consider the verification of discrete time stochastic hybrid systems when there exists uncertainty in the reachability specifications themselves. A sum multiplicative cost function is introduced along with a corresponding dynamic recursion
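    The multiplicative-cost recursion sketched in the abstract, the probability of being in the target set at the terminal time while avoiding the unsafe set at all preceding steps, can be illustrated on an uncontrolled finite-state chain; a controlled system would add a max over actions at each backward step. The transition matrix and sets below are toy assumptions, not from the dissertation.

```python
def reach_avoid_prob(P, target, unsafe, horizon, start):
    """Backward dynamic-programming recursion for the terminal-time
    reach-avoid probability of a finite Markov chain. P is a row-
    stochastic transition matrix; states in `unsafe` zero out the value
    at intermediate steps, encoding the avoidance constraint."""
    n = len(P)
    # Terminal condition: value 1 on the target set, 0 elsewhere.
    V = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(horizon):
        V = [0.0 if s in unsafe else
             sum(P[s][t] * V[t] for t in range(n))
             for s in range(n)]
    return V[start]

# 3-state toy chain: state 0 is unsafe (absorbing), state 2 is the target.
P = [[1.0, 0.0, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]
print(reach_avoid_prob(P, target={2}, unsafe={0}, horizon=2, start=1))
```

    The sum-multiplicative variant (hitting the target at *some* point during the horizon) would additionally freeze the value at 1 once a target state is reached.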

  2. Optimal Waste Load Allocation Using Multi-Objective Optimization and Multi-Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    L. Saberi

    2016-10-01

    Full Text Available Introduction: Increasing demand for water, depletion of resources of acceptable quality, and excessive water pollution due to agricultural and industrial developments have caused intensive social and environmental problems all over the world. Given the environmental importance of rivers, and the complexity and extent of the pollution factors and of the physical, chemical and biological processes in these systems, optimal waste-load allocation in river systems has been given considerable attention in the literature in the past decades. The overall objective of planning and quality management of river systems is to develop and implement a coordinated set of strategies and policies to reduce or allocate the pollution entering the rivers so that the water quality meets the proposed environmental standards with an acceptable reliability. In such matters there are often several different decision makers with different utilities, which leads to conflicts. Methods/Materials: In this research, a conflict resolution framework for optimal waste load allocation in river systems is proposed, considering the total treatment cost and the Biological Oxygen Demand (BOD) violation characteristics. There are two decision-makers with conflicting objectives: the coalition of waste-load dischargers and the environmentalists. The framework consists of an embedded river water quality simulator, which simulates the transport process including reaction kinetics. The trade-off curve between objectives is obtained using the Multi-objective Particle Swarm Optimization Algorithm; the objectives are minimization of the total cost of treatment and penalties that must be paid by dischargers, and minimization of the violation of water quality standards in terms of the BOD parameter, which is controlled by the environmentalists. Thus, the basic policy of the river's water quality management is formulated in such a way that the decision-makers are ensured their benefits will be provided as far as possible. By using MOPSO
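    The trade-off curve the MOPSO search produces is the set of nondominated solutions over the two objectives. A minimal Pareto-filter sketch over invented (cost, violation) pairs, both minimised:

```python
def pareto_front(points):
    """Return the nondominated subset of (cost, violation) pairs, both
    objectives minimised: a point survives unless some other point is at
    least as good in both objectives. Illustrative data only."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

# Invented candidate plans: (total treatment cost, BOD standard violation).
plans = [(10, 5.0), (12, 3.0), (11, 4.0), (15, 3.0), (13, 2.0)]
print(pareto_front(plans))
```

    Plan (15, 3.0) is dominated by (12, 3.0) and drops out; the surviving points are the trade-offs the two decision-makers then negotiate over.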

  3. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  4. Decision support system for triage management: A hybrid approach using rule-based reasoning and fuzzy logic.

    Science.gov (United States)

    Dehghani Soufi, Mahsa; Samad-Soltani, Taha; Shams Vahdati, Samad; Rezaei-Hachesu, Peyman

    2018-06-01

    Fast and accurate patient triage is a critical first step of the response process in emergency situations. This process is often performed in a paper-based mode, which intensifies workload and difficulty, wastes time, and is at risk of human errors. This study aims to design and evaluate a decision support system (DSS) to determine the triage level. A combination of the Rule-Based Reasoning (RBR) and Fuzzy Logic Classifier (FLC) approaches was used to predict the triage level of patients according to the triage specialists' opinions and Emergency Severity Index (ESI) guidelines. RBR was applied for modeling the first to fourth decision points of the ESI algorithm. The data relating to vital signs were used as input variables and modeled using fuzzy logic. Narrative knowledge was converted to If-Then rules using XML. The extracted rules were then used to create the rule-based engine and predict the triage levels. Fourteen RBR and 27 fuzzy rules were extracted and used in the rule-based engine. The performance of the system was evaluated using three methods with real triage data. The accuracy of the clinical decision support system (CDSS) on the test data was 99.44%. The evaluation of the error rate revealed that, when using the traditional method, 13.4% of the patients were mis-triaged, which is statistically significant. The completeness of the documentation also improved from 76.72% to 98.5%. The designed system was effective in determining the triage level of patients and proved helpful for nurses as they made decisions and generated nursing diagnoses based on triage guidelines. The hybrid approach can reduce triage misdiagnosis in a highly accurate manner and improve triage outcomes. Copyright © 2018 Elsevier B.V. All rights reserved.
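    As a greatly simplified, crisp sketch of the kind of If-Then logic described (the real system encodes the ESI decision points as XML rules and treats vital signs with fuzzy logic), one might write the following; all thresholds are illustrative, not clinical guidance:

```python
def triage_level(life_threat, high_risk, resources, heart_rate):
    """Crude crisp rendering of ESI-style decision points A-D.
    Returns a triage level from 1 (most urgent) to 5. Thresholds and
    structure are illustrative placeholders only."""
    if life_threat:             # decision point A: immediate life threat
        return 1
    if high_risk:               # decision point B: high-risk situation
        return 2
    if resources == 0:          # decision point C: expected resources
        return 5
    level = 4 if resources == 1 else 3
    # Decision point D: abnormal vital signs upgrade level-3 patients
    # (the paper models this part with fuzzy logic instead).
    if level == 3 and heart_rate > 100:
        return 2
    return level

print(triage_level(False, False, resources=2, heart_rate=110))  # -> 2
```

    The hybrid design in the paper keeps the crisp branching for the categorical decision points and reserves fuzzy rules for the graded vital-sign judgements, where hard thresholds would be brittle.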

  5. Heuristic rules embedded genetic algorithm to solve VVER loading pattern optimization problem

    International Nuclear Information System (INIS)

    Fatih, Alim; Kostandi, Ivanov

    2006-01-01

    Full text: Loading Pattern (LP) optimization is one of the most important aspects of the operation of nuclear reactors. A genetic algorithm (GA) code, GARCO (Genetic Algorithm Reactor Optimization Code), has been developed with embedded heuristic techniques to perform optimization calculations for in-core fuel management tasks. GARCO is a practical tool that includes a unique methodology applicable to all types of Pressurized Water Reactor (PWR) cores having different geometries, with an unlimited number of FA types in the inventory. GARCO was developed by modifying the classical representation of the genotype. Both the genotype representation and the basic algorithm have been modified to incorporate in-core fuel management heuristic rules so as to obtain the best results in a shorter time. GARCO has three modes: Mode 1 optimizes the locations of the fuel assemblies (FAs) in the nuclear reactor core, Mode 2 optimizes the placement of the burnable poisons (BPs) in a selected LP, and Mode 3 optimizes both the LP and the BP placement in the core simultaneously. This study describes the basic algorithm for Mode 1. The GARCO code is applied to the hexagonal-geometry VVER-1000 reactor core in this study. The Moby-Dick code, developed by SKODA Inc. to analyze VVER reactors, is used as the reactor physics code to deplete FAs in the core. To use these rules for creating the initial population with GA operators, a worth definition is applied: each FA has a worth value, between 0 and 1, for each location. If the worth of an FA for a location is larger than 0.5, the FA is a good choice for that location. When creating the initial population of LPs, a subroutine provides a percentage of individuals whose genes have worths higher than 0.5. The percentage of the population to be created without using the worth definition is defined in the GARCO input. An age concept has also been developed to accelerate the GA calculation process in reaching the
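    The worth-based seeding of the initial population can be sketched as follows; the `worth` matrix layout, function name, and greedy selection details are assumptions for illustration, not GARCO's actual implementation:

```python
import random

def initial_population(worth, pop_size, frac_guided, rng=random):
    """Seed a GA population as the abstract describes: a given fraction
    of individuals is built so each gene (core location) receives a fuel
    assembly whose worth there exceeds 0.5; the rest are random
    permutations. `worth[f][loc]` in [0, 1] is assumed given."""
    n = len(worth)
    population = []
    for i in range(pop_size):
        fas = list(range(n))        # unplaced fuel assembly indices
        if i < frac_guided * pop_size:
            # Guided individual: prefer FAs with worth > 0.5 per location.
            layout = []
            for loc in range(n):
                good = [f for f in fas if worth[f][loc] > 0.5]
                f = rng.choice(good or fas)   # fall back if none qualify
                fas.remove(f)
                layout.append(f)
        else:
            # Unguided individual: plain random permutation.
            rng.shuffle(fas)
            layout = fas
        population.append(layout)
    return population
```

    Seeding part of the population with heuristically good layouts while keeping the rest random preserves diversity but starts the search near known-good regions, which is the speed-up the abstract attributes to the embedded rules.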

  6. The role of nuclear power in the base-load portfolio optimization process

    International Nuclear Information System (INIS)

    Desiata, L.; D'Alberti, F.

    2007-01-01

    The pursuit of optimal portfolios, maximizing long-term profitability, is the main strategic challenge faced by electricity producers nowadays. Investment decisions, worth billions of euros, are affected by spot factors (such as current fuel-price volatility) that often lead to unbalanced generation mixes. Our analysis presents a statistical-financial approach that highlights the role of nuclear within the base-load portfolio optimisation process.

  7. Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game

    Science.gov (United States)

    2010-03-01

    Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game. Thesis, report number AFIT/GCS/ENG/10-06. [Title-page and front-matter extract; the figure list mentions a diagram of pLoGANN's artificial neural network. Full abstract not available.]

  8. Modified Dempster-Shafer approach using an expected utility interval decision rule

    Science.gov (United States)

    Cheaito, Ali; Lecours, Michael; Bosse, Eloi

    1999-03-01

    The combination operation of the conventional Dempster-Shafer algorithm has a tendency to increase exponentially the number of propositions involved in bodies of evidence by creating new ones. The aim of this paper is to explore a 'modified Dempster-Shafer' approach to fusing identity declarations emanating from different sources, which include a number of radars, IFF and ESM systems, in order to limit the explosion of the number of propositions. We use a non-ad hoc decision rule based on the expected utility interval to select the most probable object in a comprehensive Platform Data Base containing all the possible identity values that a potential target may take. We study the effect of the redistribution of the confidence levels of the eliminated propositions, which would otherwise overload the real-time data fusion system; these eliminated confidence levels can in particular be assigned to ignorance, or uniformly added to the remaining propositions and to ignorance. A scenario has been selected to demonstrate the performance of our modified Dempster-Shafer method of evidential reasoning.
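    For reference, the conventional Dempster combination that causes the proposition explosion can be sketched as below; the paper's modification prunes low-value propositions and redistributes their mass. The sensor masses are invented:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic belief assignments
    over frozenset focal elements. Combined focal elements are all
    non-empty pairwise intersections, which is exactly why the number of
    propositions can grow with each fusion step; conflicting mass is
    discarded and the rest renormalised."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two sensors reporting on a target's identity over {friend, hostile}.
m1 = {frozenset({"friend"}): 0.6, frozenset({"friend", "hostile"}): 0.4}
m2 = {frozenset({"hostile"}): 0.5, frozenset({"friend", "hostile"}): 0.5}
print(dempster_combine(m1, m2))
```

    Even this two-element frame produces three focal elements after one combination; over a full platform database the growth is what motivates pruning with the expected utility interval rule.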

  9. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems.

    Science.gov (United States)

    DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Neal R, Patel; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.

  11. A case study of optimization in the decision process: Siting groundwater monitoring wells

    International Nuclear Information System (INIS)

    Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.

    1993-12-01

    Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of the optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the location of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of the revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker.

  12. Risk-Informed Decisions Optimization in Inspection and Maintenance

    International Nuclear Information System (INIS)

    Robertas Alzbutas

    2002-01-01

    The Risk-Informed Approach (RIA) used to support decisions related to an inspection and maintenance program is considered. The use of risk-informed methods can help focus adequate in-service inspections and control on the more important locations of complex dynamic systems. The focus is set on the highest risk, measured as conditional core damage frequency, which is produced by the frequencies of degradation and final failure at different locations combined with the conditional failure consequence probability. The probabilities of the different degradation states per year and the consequences are estimated quantitatively. The investigation of the inspection and maintenance process is presented as a combination of deterministic and probabilistic analysis based on a general risk-informed model, which includes the inspection and maintenance program features. Such an RIA allows optimization of the inspection program while maintaining probabilistic and fundamental deterministic safety requirements. Failure statistics analysis is used, as well as evaluation of the reliability of inspections. The assumptions regarding the effectiveness of the inspection methods are based on a classification of the accessibility of the welds during the inspection and on the different techniques used for inspection. The probability of defect detection is assumed to depend on the parameters through either a logarithmic or a logit transformation. As an example, the modeling of the pipe-system inspection process is analyzed. The means to reduce the number of inspection sites and the cumulative radiation exposure to the NPP inspection personnel, with a reduction of overall risk, are presented together with the software used and developed. The developed software can perform and administer all the risk evaluations and makes it possible to compare different options and perform sensitivity analyses. The approaches to define an acceptable level of risk are discussed. These approaches with appropriate software in

  13. Diagnostic tests’ decision-making rules based upon analysis of ROC-curves

    Directory of Open Access Journals (Sweden)

    Л. В. Батюк

    2015-10-01

    Full Text Available In this paper we propose a model that substantiates diagnostic decision making based on the analysis of Receiver Operating Characteristic curves (ROC-curves) and predicts optimal values of diagnostic indicators of biomedical information. To assess the quality of the test result prediction, the standard criteria of sensitivity and specificity of the model were used. Values of these criteria were calculated for the cases when the sensitivity of the test was several times greater than its specificity, when the number of correct diagnoses was maximal, and when the sensitivity of the test was equal to its specificity. To assess the significance of the factor characteristics and to compare the prognostic characteristics of models, we used mathematical modeling and plotting of ROC-curves. The optimal value of the diagnostic indicator was found to be achieved when the sensitivity of the test is equal to its specificity. The model was adapted to solve the case when the sensitivity of the test is greater than its specificity.
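    The paper's optimality criterion, the threshold at which sensitivity equals specificity, can be found by a simple scan over candidate thresholds. The score data here are invented:

```python
def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity of a test that calls 'positive'
    when score >= threshold."""
    sens = sum(s >= threshold for s in scores_pos) / len(scores_pos)
    spec = sum(s < threshold for s in scores_neg) / len(scores_neg)
    return sens, spec

def equal_sens_spec_threshold(scores_pos, scores_neg):
    """Scan the observed score values for the operating point the paper
    identifies as optimal: sensitivity (closest to) equal to
    specificity. Toy data only."""
    candidates = sorted(set(scores_pos) | set(scores_neg))
    return min(candidates,
               key=lambda t: abs(sens_spec(scores_pos, scores_neg, t)[0]
                                 - sens_spec(scores_pos, scores_neg, t)[1]))

# Invented diagnostic scores for diseased and healthy subjects.
diseased = [0.9, 0.8, 0.7, 0.6, 0.4]
healthy = [0.5, 0.4, 0.3, 0.2, 0.1]
t = equal_sens_spec_threshold(diseased, healthy)
print(t, sens_spec(diseased, healthy, t))
```

    On a ROC plot this operating point is where the curve crosses the descending diagonal; other criteria (e.g. maximising the number of correct diagnoses) generally pick a different threshold, which is exactly the comparison the paper makes.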

  14. Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties

    Science.gov (United States)

    Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.

    2017-12-01

    Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks for flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model, using the risk of decision-making errors and the rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information at each link of the FODM chain and enable risk-informed decisions with higher reliability.

  15. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    Science.gov (United States)

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing us to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  16. Termination of resuscitation in the prehospital setting: A comparison of decisions in clinical practice vs. recommendations of a termination rule.

    Science.gov (United States)

    Verhaert, Dominique V M; Bonnes, Judith L; Nas, Joris; Keuper, Wessel; van Grunsven, Pierre M; Smeets, Joep L R M; de Boer, Menko Jan; Brouwer, Marc A

    2016-03-01

    Of the proposed algorithms that provide guidance for in-field termination of resuscitation (TOR) decisions, the guidelines for cardiopulmonary resuscitation (CPR) refer to the basic and advanced life support (ALS)-TOR rules. To assess the potential consequences of implementation of the ALS-TOR rule, we performed a case-by-case evaluation of our in-field termination decisions and assessed the corresponding recommendations of the ALS-TOR rule. Cohort of non-traumatic out-of-hospital cardiac arrest (OHCA) patients who were resuscitated by the ALS-practising emergency medical service (EMS) in the Nijmegen area (2008-2011). The ALS-TOR rule recommends termination if all of the following criteria are met: unwitnessed arrest, no bystander CPR, no shock delivery, no return of spontaneous circulation (ROSC). Of the 598 cases reviewed, resuscitative efforts were terminated in the field in 46% and 15% survived to discharge. The ALS-TOR rule would have recommended in-field termination in only 6% of patients, due to high percentages of witnessed arrests (73%) and bystander CPR (54%). In current practice, absence of ROSC was the most important determinant of termination [aOR 35.6 (95% CI 18.3-69.3)]. Weaker associations were found for: unwitnessed and non-public arrests, non-shockable initial rhythms and longer EMS response times. While designed to optimise hospital transportation, application of the ALS-TOR rule would almost double our hospital transportation rate to over 90% of OHCA cases due to the favourable arrest circumstances in our region. Prior to implementation of the ALS-TOR rule, local evaluation of the potential consequences for the efficiency of triage is to be recommended, and initiatives to improve field triage for ALS-based EMS systems are eagerly awaited. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
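
    The four-criterion rule described above is simple enough to state as code; a sketch (the field names are illustrative, not from the study's data dictionary):

    ```python
    # Illustrative encoding of the ALS-TOR rule: recommend termination
    # only if ALL four criteria hold (unwitnessed arrest, no bystander
    # CPR, no shock delivered, no ROSC at any point).

    def als_tor_recommends_termination(witnessed, bystander_cpr,
                                       shock_delivered, rosc):
        """True only when every one of the four criteria is met."""
        return not (witnessed or bystander_cpr or shock_delivered or rosc)
    ```

    A single favourable circumstance, e.g. a witnessed arrest, already blocks the termination recommendation, which is why the rule fires in so few cases in a region with high witnessed-arrest and bystander-CPR rates.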

  17. Communicating Optimized Decision Input from Stochastic Turbulence Forecasts

    National Research Council Canada - National Science Library

    Szczes, Jeanne R

    2008-01-01

    .... It demonstrates the methodology and importance of incorporating ambiguity, the uncertainty in forecast uncertainty, into the decision making process using the Taijitu method to estimate ambiguity...

  18. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of the tree) and decision trees (which allow an arbitrary level of accuracy). We study the possibilities of sequential optimization of α-decision trees relative to different cost functions such as depth, average depth, and number of nodes. For decision trees, we analyze relationships between depth and number of misclassifications. We also discuss results of computer experiments with some datasets from the UCI ML Repository. ©Springer-Verlag Berlin Heidelberg 2013.

  19. The spatial decision-supporting system combination of RBR & CBR based on artificial neural network and association rules

    Science.gov (United States)

    Tian, Yangge; Bian, Fuling

    2007-06-01

    Artificial intelligence technology can be integrated with a geographic information system to build a spatial decision support system (SDSS). The paper discusses the structure of an SDSS and, after comparing the characteristics of RBR and CBR, proposes the framework of a spatial decision system that combines RBR and CBR and retains the advantages of both. The paper also discusses CBR in agricultural spatial decisions, the application of ANN (Artificial Neural Network) in CBR, and the enrichment of the inference rule base using association rules. Finally, the design of the system is tested and verified with an example: the evaluation of crops' adaptability.

  20. A decision rule based on goal programming and one-stage models for uncertain multi-criteria mixed decision making and games against nature

    Directory of Open Access Journals (Sweden)

    Helena Gaspars-Wieloch

    2017-01-01

    Full Text Available This paper is concerned with games against nature and multi-criteria decision making under uncertainty along with scenario planning. We focus on decision problems where a deterministic evaluation of criteria is not possible. The procedure we propose is based on weighted goal programming and may be applied when seeking a mixed strategy. A mixed strategy allows the decision maker to select and perform a weighted combination of several accessible alternatives. The new method takes into consideration the decision maker’s preference structure (importance of particular goals) and nature (pessimistic, moderate or optimistic attitude towards a given problem). It is designed for one-shot decisions made under uncertainty with unknown probabilities (frequencies), i.e. for decision making under complete uncertainty or decision making under strategic uncertainty. The procedure refers to one-stage models, i.e. models considering combinations of scenarios and criteria (scenario-criterion pairs) as distinct meta-attributes, which means that the novel approach can be used in the case of totally independent payoff matrices for particular targets. The algorithm does not require any information about frequencies, which is especially desirable for new decision problems. It can be successfully applied by passive decision makers, as only criteria weights and the coefficient of optimism have to be declared.
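
    A toy sketch of the weighted goal-programming idea with scenario-criterion meta-attributes (the payoffs, targets, and weights below are invented, not the paper's): choose the share p of one alternative in a two-alternative mixed strategy so as to minimize the weighted sum of shortfalls from the goals.

    ```python
    # Each key is a (scenario, criterion) meta-attribute; each value is the
    # payoff of (alternative A, alternative B). All numbers are illustrative.
    meta_payoffs = {
        ("s1", "cost"): (6.0, 2.0),
        ("s1", "quality"): (3.0, 7.0),
        ("s2", "cost"): (4.0, 5.0),
    }
    targets = {("s1", "cost"): 5.0, ("s1", "quality"): 5.0, ("s2", "cost"): 4.5}
    weights = {("s1", "cost"): 1.0, ("s1", "quality"): 2.0, ("s2", "cost"): 1.0}

    def weighted_shortfall(p):
        """Weighted sum of goal shortfalls for the mix p*A + (1-p)*B."""
        total = 0.0
        for key, (a, b) in meta_payoffs.items():
            achieved = p * a + (1 - p) * b
            total += weights[key] * max(0.0, targets[key] - achieved)
        return total

    # Grid search stands in for the LP solve of the goal program.
    best_p = min((i / 100 for i in range(101)), key=weighted_shortfall)
    ```

    Here the best mix is an even split; in the paper the weights additionally encode the decision maker's coefficient of optimism over scenarios.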

  1. Optimal Guaranteed Service Time and Service Level Decision with Time and Service Level Sensitive Demand

    Directory of Open Access Journals (Sweden)

    Sangjun Park

    2014-01-01

    Full Text Available We consider a two-stage supply chain with one supplier and one retailer. The retailer sells a product to customers and the supplier provides the product in a make-to-order mode. In this case, the supplier’s decisions on service time and service level and the retailer’s decision on retail price have effects on customer demand. We develop optimization models to determine the optimal retail price, the optimal guaranteed service time, the optimal service level, and the optimal capacity to maximize the expected profit of the whole supply chain. The results of numerical experiments show that it is more profitable to determine the optimal price, the optimal guaranteed service time, and the optimal service level simultaneously, and that the proposed model is more profitable in a service-level-sensitive market.

  2. Optimization for decision making linear and quadratic models

    CERN Document Server

    Murty, Katta G

    2010-01-01

    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  3. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following this technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. Human-automation interaction can differ in its types and levels. A system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). This paper proposes a systematic framework to help in making appropriate decisions on types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. The evaluation results show that the use of either automatic mode or semiautomatic mode alone is insufficient to prevent human errors. For preventing the occurrence of human errors and ensuring safety in the ACR, the proposed framework can be valuable for making decisions in human-automation allocation.

  4. Detection of Stator Winding Fault in Induction Motor Using Fuzzy Logic with Optimal Rules

    Directory of Open Access Journals (Sweden)

    Hamid Fekri Azgomi

    2013-04-01

    Full Text Available Induction motors are critical components in many industrial processes. Therefore, swift, precise and reliable monitoring and fault detection systems are required to prevent any further damage. The online monitoring of induction motors has become increasingly important. The main difficulty in this task is the lack of an accurate analytical model to describe a faulty motor. A fuzzy logic approach may help to diagnose traction motor faults. This paper presents a simple method for the detection of stator winding faults (which make up 38% of induction motor failures) based on monitoring the line/terminal current amplitudes. In this method, fuzzy logic is used to make decisions about the stator motor condition. In fact, fuzzy logic is reminiscent of human thinking processes and natural language, enabling decisions to be made based on vague information. The motor condition is described using linguistic variables. Fuzzy subsets and the corresponding membership functions describe the stator current amplitudes. A knowledge base, comprising rule and data bases, is built to support the fuzzy inference. Simulation results are presented to verify the accuracy of the motor’s fault detection and the feasibility of knowledge extraction. The preliminary results show that the proposed fuzzy approach can be used for accurate stator fault diagnosis.
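
    As an illustration of the inference step (the membership breakpoints and rule outputs below are invented for this sketch, not the authors' tuned values), a tiny Sugeno-style rule base mapping a normalized phase-current imbalance to a fault degree might look like:

    ```python
    # Minimal fuzzy-inference sketch: three rules on linguistic levels of
    # current imbalance (low / medium / high), weighted-average defuzzification.

    def tri(x, a, b, c):
        """Triangular membership function rising on [a, b], falling on [b, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def stator_fault_degree(imbalance):
        """Fault degree in [0, 1] from normalized current imbalance."""
        healthy = tri(imbalance, -0.2, 0.0, 0.3)    # rule: low -> healthy (0.0)
        incipient = tri(imbalance, 0.1, 0.4, 0.7)   # rule: medium -> incipient (0.5)
        faulty = tri(imbalance, 0.5, 1.0, 1.5)      # rule: high -> faulty (1.0)
        num = healthy * 0.0 + incipient * 0.5 + faulty * 1.0
        den = healthy + incipient + faulty
        return num / den if den else 0.0
    ```

    Overlapping memberships make the output vary smoothly between conditions, which is the point of using fuzzy rules on vague amplitude information.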

  5. Individual versus Household Migration Decision Rules: Gender and Marital Status Differences in Intentions to Migrate in South Africa.

    Science.gov (United States)

    Gubhaju, Bina; De Jong, Gordon F

    2009-03-01

    This research tests the thesis that the neoclassical micro-economic and the new household economic theoretical assumptions on migration decision-making rules are segmented by gender, marital status, and time frame of intention to migrate. Comparative tests of both theories within the same study design are relatively rare. Utilizing data from the Causes of Migration in South Africa national migration survey, we analyze how individually held "own-future" versus alternative "household well-being" migration decision rules affect the intentions to migrate of male and female adults in South Africa. Results from the gender- and marital-status-specific logistic regression models show consistent support for the different gender-marital status decision rule thesis. Specifically, the "maximizing one's own future" neoclassical microeconomic theory proposition is more applicable for never-married men and women, the "maximizing household income" proposition for married men with short-term migration intentions, and the "reduce household risk" proposition for longer time horizon migration intentions of married men and women. Results provide new evidence on the way household strategies and individual goals jointly affect intentions to move or stay.

  6. Investment Decisions and Depreciation Choices under a Discretionary Tax Depreciation Rule

    NARCIS (Netherlands)

    Wielhouwer, Jacco L.; Wiersma, E.

    2017-01-01

    Prior studies have shown limited impact of the US bonus depreciation rules on firm investments during economic downturns. In this article we study the effects of a set of more flexible rules – discretionary tax depreciation (DTD) – introduced in the Netherlands during the 2009–2011 economic crisis.

  7. Testing decision rules for categorizing species' extinction risk to help develop quantitative listing criteria for the U.S. Endangered Species Act.

    Science.gov (United States)

    Regan, Tracey J; Taylor, Barbara L; Thompson, Grant G; Cochrane, Jean Fitts; Ralls, Katherine; Runge, Michael C; Merrick, Richard

    2013-08-01

    Lack of guidance for interpreting the definitions of endangered and threatened in the U.S. Endangered Species Act (ESA) has resulted in case-by-case decision making, leaving the process vulnerable to being considered arbitrary or capricious. Adopting quantitative decision rules would remedy this but requires the agency to specify the relative urgency concerning extinction events over time, cutoff risk values corresponding to different levels of protection, and the importance given to different types of listing errors. We tested the performance of 3 sets of decision rules that use alternative functions for weighting the relative urgency of future extinction events: a threshold rule set, which uses a decision rule of x% probability of extinction over y years; a concave rule set, where the relative importance of future extinction events declines exponentially over time; and a shoulder rule set that uses a sigmoid shape function, where relative importance declines slowly at first and then more rapidly. We obtained decision cutoffs by interviewing several biologists and then emulated the listing process with simulations that covered a range of extinction risks typical of ESA listing decisions. We evaluated the performance of the decision rules under different data quantities and qualities on the basis of the relative importance of misclassification errors. Although there was little difference between the performance of alternative decision rules for correct listings, the distribution of misclassifications differed depending on the function used. Misclassifications for the threshold and concave listing criteria resulted in more overprotection errors, particularly as uncertainty increased, whereas errors for the shoulder listing criteria were more symmetrical. We developed and tested the framework for quantitative decision rules for listing species under the U.S. ESA. If policy values can be agreed on, use of this framework would improve the implementation of the ESA by
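
    The three urgency-weighting schemes compared above can be sketched as functions of time to extinction t; the parameter values below are illustrative, not those elicited in the study.

    ```python
    # Illustrative shapes for the three schemes: a hard threshold, an
    # exponentially decaying ("concave") weight, and a sigmoid "shoulder".
    import math

    def threshold_weight(t, horizon=100.0):
        """Extinction within the horizon matters fully; after it, not at all."""
        return 1.0 if t <= horizon else 0.0

    def concave_weight(t, rate=0.02):
        """Importance declines exponentially from the start."""
        return math.exp(-rate * t)

    def shoulder_weight(t, midpoint=100.0, steepness=0.05):
        """Importance declines slowly at first, then rapidly (sigmoid)."""
        return 1.0 / (1.0 + math.exp(steepness * (t - midpoint)))
    ```

    The shoulder shape keeps near-term extinctions almost equally urgent while still discounting the far future, which is consistent with its more symmetrical error distribution in the simulations.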

  8. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization

  9. Robust Bayesian decision theory applied to optimal dosage.

    Science.gov (United States)

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(θ, d) in a dose prescription problem, where θ and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.

  10. Do the right thing: the assumption of optimality in lay decision theory and causal judgment.

    Science.gov (United States)

    Johnson, Samuel G B; Rips, Lance J

    2015-03-01

    Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent's perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent's actual and counterfactual choices (Experiments 1-3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5-6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Visualising Pareto-optimal trade-offs helps move beyond monetary-only criteria for water management decisions

    Science.gov (United States)

    Hurford, Anthony; Harou, Julien

    2014-05-01

    Water-related ecosystem services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches are needed to help guide water resources management decisions which account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume-dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments affects the system's trade-offs. During the multi-criteria search (optimisation), the performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.
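
    The Pareto-approximate filtering step underlying such trade-off analysis can be sketched with a simple dominance test (the objective vectors below are hypothetical, with every objective to be maximized):

    ```python
    # Keep only policies not dominated by any other policy.

    def dominates(a, b):
        """a dominates b if a is >= b in every objective and > b in at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # (hydropower, water supply, ecosystem score) for four candidate policies
    policies = [(5, 3, 1), (4, 4, 2), (3, 5, 3), (4, 3, 1)]
    front = pareto_front(policies)
    ```

    Only the dominated policy drops out; the surviving set is what gets visualised as a trade-off surface for decision makers.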

  12. Reward Rate Optimization in Two-Alternative Decision Making: Empirical Tests of Theoretical Predictions

    Science.gov (United States)

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-01-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
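
    A minimal simulation of the DDM's threshold mechanism (parameter values are illustrative) shows how a single bound height yields both a choice and a response time, the two sides of the speed-accuracy tradeoff:

    ```python
    # Evidence accumulates with positive drift toward the correct bound;
    # the first bound crossed gives the choice, the crossing time the RT.
    import random

    def ddm_trial(drift=0.1, threshold=1.0, dt=0.01, noise=1.0, rng=random):
        """Return (correct?, decision_time) for one simulated trial."""
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
            t += dt
        return x >= threshold, t

    random.seed(0)
    trials = [ddm_trial(threshold=0.5) for _ in range(200)]
    accuracy = sum(c for c, _ in trials) / len(trials)
    ```

    Raising `threshold` slows responses and raises accuracy; the reward-rate-optimal bound balances the two, which is the prediction the paper tests empirically.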

  13. Optimization-based decision support systems for planning problems in processing industries

    NARCIS (Netherlands)

    Claassen, G.D.H.

    2014-01-01

    Summary

    Optimization-based decision support systems for planning problems in processing industries

    Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in

  14. A Branch-and-Price approach to find optimal decision trees

    NARCIS (Netherlands)

    Firat, M.; Crognier, Guillaume; Gabor, Adriana; Zhang, Y.

    2018-01-01

    In the Artificial Intelligence (AI) field, decision trees have gained importance due to their effectiveness in solving classification and regression problems. Recently in the literature, finding optimal decision trees has been formulated as Mixed Integer Linear Programming (MILP) models. This

  15. Optimization-based decision support systems for planning problems in processing industries

    OpenAIRE

    Claassen, G.D.H.

    2014-01-01

    Summary Optimization-based decision support systems for planning problems in processing industries Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in practice. The tremendous progress in hard- and software of the past decades was an important gateway for developing computerized systems that are able to support decision making on different levels within enterprises. T...

  16. Graph-related optimization and decision support systems

    CERN Document Server

    Krichen, Saoussen

    2014-01-01

    Constrained optimization is a challenging branch of operations research that aims to create models with a wide range of applications in the supply chain, telecommunications and medical fields. As the problem structure is split into two main components, the objective is to accomplish the feasible set framed by the system constraints. The aim of this book is to expose optimization problems that can be expressed as graphs, by detailing, for each studied problem, the set of nodes and the set of edges. This graph modeling is an incentive for designing a platform that integrates all optimizatio

  17. Decision optimization of case-based computer-aided decision systems using genetic algorithms with application to mammography

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has a different importance in the decision-making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interest (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operating characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates.

  18. Learning decision trees with flexible constraints and objectives using integer optimization

    NARCIS (Netherlands)

    Verwer, S.; Zhang, Y.

    2017-01-01

    We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows

  19. Cost Information – an Objective Necessity in Optimizing Decision Making

    OpenAIRE

    Petre Mihaela – Cosmina; Petroianu Grazia - Oana

    2012-01-01

    Overall growth can be registered at the macro and micro levels without achieving development, and only under conditions of continuous improvement of the methods and techniques of organization and management within the unit. Cost and cost information play an important role, being considered and recognized as useful and effective tools available to any leader. Their many facets facilitate continuous improvement across the business unit. Cost awareness represents a decisive fact...

  20. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  1. Optimization and decision support systems for supply chains

    CERN Document Server

    Corominas, Albert; Miranda, João

    2017-01-01

    This contributed volume presents a collection of materials on supply chain management, including industry-based case studies addressing petrochemical, pharmaceutical, manufacturing and reverse logistics topics. Moreover, the book covers sustainability issues, as well as optimization approaches. The target audience comprises academics, industry managers, and practitioners in the field of supply chain management; the book is also beneficial for graduate students.

  2. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is able to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision maker finalize the selection of a best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537

  3. Intelligent Hypothermia Care System using Ant ‎Colony Optimization for Rules Prediction

    Directory of Open Access Journals (Sweden)

    Hayder Naser Khraibet

    2017-12-01

    Full Text Available The Intelligent Hypothermia Care System (IHCS) is an intelligent system that uses a set of methodologies, algorithms, architectures and processes to determine where patients in a postoperative recovery area must be sent. Hypothermia is a significant concern after surgery. This paper utilizes the classification task in data mining to propose an intelligent technique to predict where to send a patient after surgery: the intensive care unit, the general floor or home. To achieve this goal, the paper evaluates the performance of the decision tree algorithm, exemplifying the deterministic approach, against the AntMiner algorithm, exemplifying the heuristic approach, to choose the best approach for detecting the patient's status. Results show that the heuristic approach outperforms the deterministic one. The implications of this proposal are twofold: in hypothermia treatment and in the application of ant colony optimization.

  4. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    Science.gov (United States)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  5. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model

    Directory of Open Access Journals (Sweden)

    Rajkumar Rajavel

    2015-01-01

    Full Text Available Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants available in the cloud service negotiation framework.
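
    The multistage Markov decision formulation described above can be illustrated with a generic finite-horizon backward-induction sketch; the states, actions, rewards, and transition probabilities below are invented for illustration and are not the paper's model:

    ```python
    # Finite-horizon Markov decision problem solved by backward induction.
    # States describe the negotiation's standing; actions are the negotiator's
    # moves. All states, rewards, and probabilities are invented for illustration.
    states = ["low", "high"]            # low/high prospect of agreement
    actions = ["concede", "hold"]
    T = 3                               # number of negotiation stages

    # transition[s][a] maps each next state to its probability
    transition = {
        "low":  {"concede": {"low": 0.3, "high": 0.7}, "hold": {"low": 0.8, "high": 0.2}},
        "high": {"concede": {"low": 0.1, "high": 0.9}, "hold": {"low": 0.4, "high": 0.6}},
    }
    reward = {"low": {"concede": 1.0, "hold": 0.0},
              "high": {"concede": 2.0, "hold": 3.0}}

    V = {s: 0.0 for s in states}        # value at the terminal stage
    policy = {}
    for t in reversed(range(T)):        # work backwards from the last stage
        V_new, policy_t = {}, {}
        for s in states:
            q = {a: reward[s][a]
                    + sum(p * V[s2] for s2, p in transition[s][a].items())
                 for a in actions}
            best = max(q, key=q.get)
            V_new[s], policy_t[s] = q[best], best
        V = V_new
        policy[t] = policy_t

    print(policy[0], V)                 # first-stage policy and expected payoffs
    ```

    The paper's model additionally conditions these choices on past negotiation states so as to avoid break-off; the backward-induction skeleton is the same.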

  6. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model.

    Science.gov (United States)

    Rajavel, Rajkumar; Thangarathinam, Mala

    2015-01-01

    Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants available in the cloud service negotiation framework.

  7. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    Science.gov (United States)

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system based on intuitionistic fuzzy sets for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm inputs the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
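
    The scheduling loop described above (score every ready process, sort by decreasing dp, dispatch the maximum) can be sketched as follows; the dp formula here is a plain weighted score standing in for the paper's intuitionistic fuzzy inference, and all process data are made up:

    ```python
    # Toy stand-in for the scheduler: compute a dynamic priority (dp) for each
    # ready process from its nice value and burst time, sort the ready queue in
    # decreasing order of dp, and dispatch the head. The dp formula is a plain
    # weighted score, not the paper's intuitionistic fuzzy inference.
    def dynamic_priority(nice, burst, nice_max=40, burst_max=100):
        # favour low nice values and short bursts, weighted equally (assumed)
        return 0.5 * (1 - nice / nice_max) + 0.5 * (1 - burst / burst_max)

    ready_queue = [("p1", 10, 80), ("p2", 30, 20), ("p3", 5, 50)]  # (pid, nice, burst)
    scored = [(pid, dynamic_priority(n, b)) for pid, n, b in ready_queue]
    scored.sort(key=lambda x: x[1], reverse=True)     # decreasing dp
    print(scored[0][0], "is dispatched first")
    ```

    The fuzzy version replaces `dynamic_priority` with membership/non-membership degrees fed through an inference rule base, but the surrounding queue logic is unchanged.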

  8. Selecting short-statured children needing growth hormone testing: Derivation and validation of a clinical decision rule

    Directory of Open Access Journals (Sweden)

    Bréart Gérard

    2008-07-01

    Full Text Available Abstract. Background: Numerous short-statured children are evaluated for growth hormone (GH) deficiency (GHD). In most patients, GH provocative tests are normal and are thus in retrospect unnecessary. Methods: A retrospective cohort study was conducted to identify predictors of GH deficiency in children seen for short stature, and to construct a very sensitive and fairly specific predictive tool to avoid unnecessary GH provocative tests. GHD was defined by the presence of 2 GH concentration peaks … Results: The initial study included 167 patients, 36 (22% of whom had GHD, including 5 (3% with certain GHD. Independent predictors of GHD were: growth rate … Conclusion: We have derived and performed an internal validation of a highly sensitive decision rule that could safely help to avoid more than 2/3 of the unnecessary GH tests. External validation of this rule is needed before any application.

  9. The impact of chief executive officer optimism on hospital strategic decision making.

    Science.gov (United States)

    Langabeer, James R; Yao, Emery

    2012-01-01

    Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of a survey study of 168 hospital CEOs, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was the highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits for recruitment processes, because CEO optimism influences hospital-level processes.
Research implications include incorporating

  10. Rules for Rational Decision Making: An Experiment with 15- and 16-Year Old Students

    Science.gov (United States)

    Guerra, Ana Teresa Antequera; Febles, Maria Candelaria Espinel

    2012-01-01

    Multicriteria analysis constitutes a way to model decision processes, allowing the decision maker to assess the possible implications each course of action may entail. A multicriteria problem is chosen from the Programme for International Student Assessment 2003 Report and then extended to include questions involving a choice of preferences and…

  11. Moral Behavior as Rule Governed Behavior: Person and System Effects on Moral Decision Making.

    Science.gov (United States)

    Kurtines, William M.; And Others

    Recent approaches to research on moral development have considered the preeminence of situational factors in moral development and moral behavior. An open systems approach emphasizes the interactive effects of person and situation variables on moral decision-making. The interactive effects of three sets of variables on moral decision-making were…

  12. A Belief Rule-Based (BRB) Decision Support System for Assessing Clinical Asthma Suspicion

    DEFF Research Database (Denmark)

    Hossain, Mohammad Shahadat; Hossain, Emran; Khalid, Md. Saifuddin

    2014-01-01

    conditions of uncertainty. The Belief Rule-Based Inference Methodology Using the Evidential Reasoning (RIMER) approach was adopted to develop this expert system; which is named the Belief Rule-Based Expert System (BRBES). The system can handle various types of uncertainty in knowledge representation...... and inference procedures. The knowledge base of this system was constructed by using real patient data and expert opinion. Practical case studies were used to validate the system. The system-generated results are more effective and reliable in terms of accuracy than the results generated by a manual system....

  13. Optimal statistical decisions about some alternative financial models

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; Stummer, W.

    2007-01-01

    Roč. 137, č. 2 (2007), s. 441-471 ISSN 0304-4076 R&D Projects: GA MŠk(CZ) 1M0572; GA ČR GA201/02/1391; GA AV ČR IAA1075403 Institutional research plan: CEZ:AV0Z10750506 Keywords: Black-Scholes-Merton models * Relative entropies * Power divergences * Hellinger integrals * Total variation distance * Bayesian decisions * Neyman-Pearson testing Subject RIV: BD - Theory of Information Impact factor: 1.990, year: 2007

  14. Risk Acceptance Criteria and/or Decision optimization

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1996-01-01

    Acceptance criteria applied in practical risk analysis are recapitulated, including the concept of risk profile. Modelling of risk profiles is illustrated on the basis of compound Poisson process models. The current practice of authoritative acceptance criteria formulation is discussed from...... a decision theoretical point of view. It is argued that the phenomenon of risk aversion rather than being of concern to the authority should be of concern to the owner. Finally it is discussed whether there is an ethical problem when formally capitalising human lives with a positive interest rate. Keywords......: Risk acceptance, Risk profile, Compound Poisson model for risk profile, Capitalization of human life, Risk aversion....

  15. Adaptive Decision Making Using Probabilistic Programming and Stochastic Optimization

    Science.gov (United States)

    2018-01-01

    …world optimization problems … predicted demand (uncertain; discrete) … To simplify the setting, we further assume that the demands are discrete, taking on values d1, …, dk with probabilities (conditional on x) (pθ)i ≡ p… Tyrrell Rockafellar. Implicit functions and solution mappings. Springer Monogr. Math., 2009. Anthony V Fiacco and Yo Ishizuka. Sensitivity and stability…

  16. Optimal static allocation decisions in the presence of portfolio insurance

    OpenAIRE

    Goltz, Felix; Martellini, Lionel; Şimşek, Koray Deniz; Simsek, Koray Deniz

    2008-01-01

    The focus of this paper is to determine what fraction a myopic risk-averse investor should allocate to investment strategies with convex exposure to stock market returns in a general economy with stochastically time-varying interest rates and equity risk premium. Our conclusion is that typical investors should optimally allocate a sizable fraction of their portfolio to such portfolio insurance strategies, and the associated utility gains are significant. While the fact that static investors w...

  17. The Application of Time-Delay Dependent H∞ Control Model in Manufacturing Decision Optimization

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2015-01-01

    Full Text Available This paper uses a time-delay dependent H∞ control model to analyze the effect of manufacturing decisions on the process of transmission from resources to capability. We establish a theoretical framework of the manufacturing management process based on three terms: resource, manufacturing decision, and capability. We then build a time-delay H∞ robust control model to analyze the robustness of manufacturing management. With the state feedback controller between manufacturing resources and decisions, we find that there is an optimal decision to adjust the process of transmission from resources to capability in an uncertain environment. Finally, we provide an example to prove the robustness of this model.

  18. Economic irrationality is optimal during noisy decision making.

    Science.gov (United States)

    Tsetsos, Konstantinos; Moran, Rani; Moreland, James; Chater, Nick; Usher, Marius; Summerfield, Christopher

    2016-03-15

    According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a "selective integration" policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against "late" noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing.
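
    A minimal sketch of the "selective integration" policy described here, in which the momentarily lower-valued alternative's sample is down-weighted before accumulation; the samples and suppression weight are illustrative, not fitted to the experiments:

    ```python
    # Sketch of a selective-integration accumulator: at each time step the
    # momentarily weaker alternative's sample is multiplied by w < 1 before
    # being accumulated (w = 0 would discard it outright). The weight and the
    # sample streams below are illustrative.
    def selective_integrate(samples_a, samples_b, w=0.2):
        acc_a = acc_b = 0.0
        for a, b in zip(samples_a, samples_b):
            if a >= b:
                acc_a += a
                acc_b += w * b   # loser's sample is selectively suppressed
            else:
                acc_a += w * a
                acc_b += b
        return acc_a, acc_b

    # B carries the larger total evidence, but A wins most individual comparisons:
    a_samples = [5, 5, 5, 0]
    b_samples = [4, 4, 4, 9]
    acc_a, acc_b = selective_integrate(a_samples, b_samples)
    print(acc_a, acc_b)
    ```

    With these streams, plain summation favours B while selective integration favours A, the kind of sample-wise "winning" effect the abstract describes as driving intransitive preferences.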

  19. Selecting Pesticides and Nonchemical Alternatives: Green Thumbs' Rules of Thumb Decision Tools.

    Science.gov (United States)

    Grieshop, James I.; And Others

    1992-01-01

    A sample of 78 (of 320) home gardeners use rules of thumb (heuristics) to choose between chemical pesticides and nonchemical alternatives. Pesticides rank low in 24 choice attributes where alternatives rank high, and vice versa. Gender, age, and years of pesticide use correlate with pesticide selection. (SK)

  20. Dispositional Optimism as a Correlate of Decision-Making Styles in Adolescence

    Directory of Open Access Journals (Sweden)

    Paola Magnano

    2015-06-01

    Full Text Available Despite the numerous psychological areas in which optimism has been studied, including career planning, only a small amount of research has been done to investigate the relationship between optimism and decision-making styles. Consequently, we have investigated the role of dispositional optimism as a correlate of different decision-making styles, in particular, positive for effective styles and negative for ineffective ones (doubtfulness, procrastination, and delegation. Data were gathered through questionnaires administered to 803 Italian adolescents in their last 2 years of high schools with different fields of study, each at the beginning stages of planning for their professional future. A paper questionnaire was completed containing measures of dispositional optimism and career-related decision styles, during a vocational guidance intervention conducted at school. Data were analyzed using stepwise multiple regression. Results supported the proposed model by showing optimism to be a strong correlate of decision-making styles, thereby offering important intervention guidelines aimed at modifying unrealistically negative expectations regarding their future and helping students learn adaptive decision-making skills.

  1. A NEW FRAMEWORK FOR GEOSPATIAL SITE SELECTION USING ARTIFICIAL NEURAL NETWORKS AS DECISION RULES: A CASE STUDY ON LANDFILL SITES

    Directory of Open Access Journals (Sweden)

    S. K. M. Abujayyab

    2015-10-01

    Full Text Available This paper briefly introduces the theory and framework of geospatial site selection (GSS) and discusses the application and framework of artificial neural networks (ANNs). The related literature on the use of ANNs as decision rules in GSS is scarce from 2000 to 2015. As this study found, ANNs are not only adaptable to dynamic changes but also capable of improving the objectivity of acquisition in GSS, reducing time consumption, and providing high validation. ANNs make for a powerful tool for solving geospatial decision-making problems by enabling geospatial decision makers to implement their constraints and imprecise concepts. This tool offers a way to represent and handle uncertainty. Specifically, ANNs are decision rules implemented to enhance conventional GSS frameworks. The main assumption in implementing ANNs in GSS is that the current characteristics of existing sites are indicative of the degree of suitability of new locations with similar characteristics. GSS requires several input criteria that embody specific requirements and the desired site characteristics, which could contribute to geospatial sites. In this study, the proposed framework consists of four stages for implementing ANNs in GSS. A multilayer feed-forward network with a backpropagation algorithm was used to train the networks from prior sites to assess, generalize, and evaluate the outputs on the basis of the inputs for the new sites. Two metrics, namely, confusion matrix and receiver operating characteristic tests, were utilized to achieve high accuracy and validation. Results proved that ANNs provide reasonable and efficient results as an accurate and inexpensive quantitative technique for GSS.
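
    The validation step mentioned above can be illustrated with a minimal confusion-matrix computation for binary "suitable site" predictions; the labels and the derived ROC coordinates (true- and false-positive rates) are invented for illustration:

    ```python
    # Minimal confusion-matrix computation for binary "suitable site"
    # predictions, as used to validate classifiers such as the paper's ANN.
    # The label vectors are invented for illustration.
    def confusion(actual, predicted):
        tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
        tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
        fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
        fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
        return tp, tn, fp, fn

    actual    = [1, 1, 0, 0, 1, 0, 1, 0]
    predicted = [1, 0, 0, 0, 1, 1, 1, 0]
    tp, tn, fp, fn = confusion(actual, predicted)
    accuracy    = (tp + tn) / len(actual)
    sensitivity = tp / (tp + fn)   # true-positive rate (ROC y-axis)
    fall_out    = fp / (fp + tn)   # false-positive rate (ROC x-axis)
    print(tp, tn, fp, fn, accuracy)
    ```

    Sweeping a classifier's decision threshold and plotting (fall_out, sensitivity) pairs yields the receiver operating characteristic curve the authors use.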

  2. ERDOS 1.0. Emergency response decisions as problems of optimal stopping

    International Nuclear Information System (INIS)

    Pauwels, N.

    1998-11-01

    The ERDOS software is a stochastic dynamic program to support the decision problem of preventively evacuating the workers of an industrial company threatened by a nuclear accident that may take place in the near future with a particular probability. ERDOS treats this problem as one of optimal stopping: the governmental decision maker initially holds a call option enabling him to postpone the evacuation decision and observe the further evolution of the alarm situation. As such, he has to decide on the optimal point in time to exercise this option, i.e. to take the irreversible decision to evacuate the threatened workers. ERDOS makes it possible to calculate the expected costs of an optimal intervention strategy and to compare this outcome with the costs resulting from a myopic evacuation decision that ignores the prospect of more complete information at later stages of the decision process. Furthermore, ERDOS determines the free boundary, giving the critical severity, as a function of time, that triggers immediate evacuation once it is exceeded. Finally, the software provides useful insights into the financial implications of losing time during the initial stages of the decision process (due to the gathering of information, discussions on the intervention strategy, and so on)
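
    The optimal-stopping structure, evacuate now or keep the option open one more period, can be sketched by backward induction on a toy severity lattice; every cost, probability, and the severity-to-risk map below are illustrative and not ERDOS's calibration:

    ```python
    # Optimal-stopping sketch in the spirit of ERDOS: at each stage the decision
    # maker either evacuates now (fixed cost) or waits one more period for better
    # information. Severity follows a toy lattice; all numbers are illustrative.
    EVAC_COST = 10.0       # cost of a preventive evacuation
    ACCIDENT_LOSS = 200.0  # loss if workers are present when the accident occurs
    P_WORSEN = 0.4         # per-stage probability that severity increases
    HORIZON = 2            # stages until the situation resolves

    def p_accident(severity):
        return min(1.0, 0.02 * severity)   # toy severity -> accident probability

    def expected_cost(severity, stage):
        if stage == HORIZON:               # no time left: evacuate or accept risk
            return min(EVAC_COST, p_accident(severity) * ACCIDENT_LOSS)
        wait = (P_WORSEN * expected_cost(severity + 1, stage + 1)
                + (1 - P_WORSEN) * expected_cost(severity, stage + 1))
        return min(EVAC_COST, wait)        # exercise the option or postpone

    print(expected_cost(1, 0))             # expected cost of the optimal strategy
    ```

    The free boundary corresponds to the smallest severity at each stage for which `min` selects `EVAC_COST`; the myopic strategy would drop the `wait` branch entirely.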

  3. Differential Contributions of Nucleus Accumbens Subregions to Cue-Guided Risk/Reward Decision Making and Implementation of Conditional Rules.

    Science.gov (United States)

    Floresco, Stan B; Montes, David R; Tse, Maric M T; van Holstein, Mieke

    2018-02-21

    The nucleus accumbens (NAc) is a key node within corticolimbic circuitry for guiding action selection and cost/benefit decision making in situations involving reward uncertainty. Preclinical studies have typically assessed risk/reward decision making using assays where decisions are guided by internally generated representations of choice-outcome contingencies. Yet, real-life decisions are often influenced by external stimuli that inform about likelihoods of obtaining rewards. How different subregions of the NAc mediate decision making in such situations is unclear. Here, we used a novel assay colloquially termed the "Blackjack" task that models these types of situations. Male Long-Evans rats were trained to choose between one lever that always delivered a one-pellet reward and another that delivered four pellets with different probabilities [either 50% (good-odds) or 12.5% (poor-odds)], which were signaled by one of two auditory cues. Under control conditions, rats selected the large/risky option more often on good-odds versus poor-odds trials. Inactivation of the NAc core caused indiscriminate choice patterns. In contrast, NAc shell inactivation increased risky choice, more prominently on poor-odds trials. Additional experiments revealed that both subregions contribute to auditory conditional discrimination. NAc core or shell inactivation reduced Pavlovian approach elicited by an auditory CS+, yet shell inactivation also increased responding during presentation of a CS-. These data highlight distinct contributions for NAc subregions in decision making and reward seeking guided by discriminative stimuli. The core is crucial for implementation of conditional rules, whereas the shell refines reward seeking by mitigating the allure of larger, unlikely rewards and reducing expression of inappropriate or non-rewarded actions. SIGNIFICANCE STATEMENT Using external cues to guide decision making is crucial for adaptive behavior. 
Deficits in cue-guided behavior have been

  4. Review of experimental studies in social psychology of small groups when an optimal choice exists and application to operating room management decision-making.

    Science.gov (United States)

    Prahl, Andrew; Dexter, Franklin; Braun, Michael T; Van Swol, Lyn

    2013-11-01

    Because operating room (OR) management decisions with optimal choices are made with ubiquitous biases, decisions are improved with decision-support systems. We reviewed experimental social-psychology studies to explore what an OR leader can do when working with stakeholders lacking interest in learning the OR management science but expressing opinions about decisions, nonetheless. We considered shared information to include the rules-of-thumb (heuristics) that make intuitive sense and often seem "close enough" (e.g., staffing is planned based on the average workload). We considered unshared information to include the relevant mathematics (e.g., staffing calculations). Multiple studies have shown that group discussions focus more on shared than unshared information. Quality decisions are more likely when all group participants share knowledge (e.g., have taken a course in OR management science). Several biases in OR management are caused by humans' limited abilities to estimate tails of probability distributions in their heads. Groups are more susceptible to analogous biases than are educated individuals. Since optimal solutions are not demonstrable without groups sharing common language, only with education of most group members can a knowledgeable individual influence the group. The appropriate model of decision-making is autocratic, with information obtained from stakeholders. Although such decisions are good quality, the leaders often are disliked and the decisions considered unjust. In conclusion, leaders will find the most success if they do not bring OR management operational decisions to groups, but instead act autocratically while obtaining necessary information in 1:1 conversations. The only known route for the leader making such decisions to be considered likable and for the decisions to be considered fair is through colleagues and subordinates learning the management science.

  5. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory can be an alternative approach to the optimization of preventive maintenance periodicity. In effect, this theory concerns situations in which a decision maker has to choose among a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts, given the observed feedback experience; the states of nature are the associated failure probabilities; and the losses are the expectations of the induced cost of maintenance and of the consequences of failures. As failure probabilities concern rare events at the ultimate state of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for the failure probabilities is modeled from expert knowledge and combined with the sparse stochastic information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators, which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical, so Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
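
    The Bayesian workflow described above (expert prior, update with sparse feedback data, minimize expected loss over candidate periodicities) can be sketched with a conjugate Beta-Binomial model; the prior, costs, observed data, and the linear risk scaling are all invented for illustration:

    ```python
    # Bayesian decision sketch: an expert Beta prior on the failure probability
    # is updated with sparse feedback data, then each candidate maintenance
    # periodicity is scored by its expected total cost under the posterior.
    # All priors, costs, and observations are illustrative.
    ALPHA0, BETA0 = 1.0, 19.0          # expert prior: mean failure prob = 0.05
    failures, trials = 1, 30           # observed feedback experience

    alpha = ALPHA0 + failures          # conjugate Beta-Binomial update
    beta = BETA0 + trials - failures
    p_fail = alpha / (alpha + beta)    # posterior mean failure probability

    MAINT_COST = 5.0                   # cost of one preventive maintenance action
    FAILURE_COST = 400.0               # cost of an in-service failure

    def expected_cost(period_years):
        # longer periodicity: fewer maintenance actions, higher failure risk;
        # the linear risk scaling is a deliberate simplification
        maint = MAINT_COST * (10 / period_years)   # actions over a 10-year span
        risk = FAILURE_COST * p_fail * period_years
        return maint + risk

    candidates = [1, 2, 5]             # periodicities proposed by the experts
    best = min(candidates, key=expected_cost)
    print(best, expected_cost(best))
    ```

    A fuller treatment would integrate the loss over the whole posterior rather than plugging in its mean, but the decision rule, minimize expected loss, is the same.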

  6. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    Science.gov (United States)

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  7. Accounting standards and earnings management : The role of rules-based and principles-based accounting standards and incentives on accounting and transaction decisions

    NARCIS (Netherlands)

    Beest, van F.

    2012-01-01

    This book examines the effect that rules-based and principles-based accounting standards have on the level and nature of earnings management decisions. A cherry picking experiment is conducted to test the hypothesis that a substitution effect is expected from accounting decisions to transaction

  8. Delusions of success. How optimism undermines executives' decisions.

    Science.gov (United States)

    Lovallo, Dan; Kahneman, Daniel

    2003-07-01

    The evidence is disturbingly clear: Most major business initiatives--mergers and acquisitions, capital investments, market entries--fail to ever pay off. Economists would argue that the low success rate reflects a rational assessment of risk, with the returns from a few successes outweighing the losses of many failures. But two distinguished scholars of decision making, Dan Lovallo of the University of New South Wales and Nobel laureate Daniel Kahneman of Princeton University, provide a very different explanation. They show that a combination of cognitive biases (including anchoring and competitor neglect) and organizational pressures lead managers to make overly optimistic forecasts in analyzing proposals for major investments. By exaggerating the likely benefits of a project and ignoring the potential pitfalls, they lead their organizations into initiatives that are doomed to fall well short of expectations. The biases and pressures cannot be escaped, the authors argue, but they can be tempered by applying a very different method of forecasting--one that takes a much more objective "outside view" of an initiative's likely outcome. This outside view, also known as reference-class forecasting, completely ignores the details of the project at hand; instead, it encourages managers to examine the experiences of a class of similar projects, to lay out a rough distribution of outcomes for this reference class, and then to position the current project in that distribution. The outside view is more likely than the inside view to produce accurate forecasts--and much less likely to deliver highly unrealistic ones, the authors say.

  9. End-of-life decisions and the reinvented Rule of Double Effect: a critical analysis.

    Science.gov (United States)

    Lindblad, Anna; Lynöe, Niels; Juth, Niklas

    2014-09-01

    The Rule of Double Effect (RDE) holds that it may be permissible to harm an individual while acting for the sake of a proportionate good, given that the harm is not an intended means to the good but merely a foreseen side-effect. Although frequently used in medical ethical reasoning, the rule has been repeatedly questioned in the past few decades. However, Daniel Sulmasy, a proponent who has done a lot of work lately defending the RDE, has recently presented a reformulated and more detailed version of the rule. Thanks to its greater precision, this reinvented RDE avoids several problems thought to plague the traditional RDE. Although an improvement compared with the traditional version, we argue that Sulmasy's reinvented RDE will not stand closer scrutiny. Not only has the range of proper applicability narrowed significantly, but, more importantly, Sulmasy fails to establish that there is a morally relevant distinction between intended and foreseen effects. In particular, he fails to establish that there is any distinction that can account for the alleged moral difference between sedation therapy and euthanasia. © 2012 John Wiley & Sons Ltd.

  10. Application of goal programming to decision problem on optimal allocation of radiation workers

    International Nuclear Information System (INIS)

    Sa, Sangduk; Narita, Masakuni

    1993-01-01

    This paper is concerned with optimal planning in a multiple-objective decision-making problem: allocating radiation workers to workplaces associated with occupational exposure. The model problem is formulated using goal programming, which effectively accommodates the diverse and conflicting factors influencing the optimal decision. The formulation is based on data simulating the typical situations encountered at operating facilities, such as nuclear power plants, where exposure control is critical to management. Multiple goals set by the decision-maker/manager who has operational responsibility for radiological protection are illustrated in terms of work requirements, exposure constraints of the places, desired allocation of specific personnel, and so on. Test results of the model indicate that the model structure and its solution process can provide the manager with a good set of analyses of his problems when implementing the optimization review of radiation protection during normal operation. (author)
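
    A weighted goal-programming formulation of this kind can be sketched by penalizing deviations from a staffing goal and a collective-dose goal and enumerating small integer allocations; all requirements, doses, and weights below are hypothetical:

    ```python
    # Weighted goal-programming sketch: allocate radiation workers to two
    # workplaces so that (a) work requirements are met and (b) a collective-dose
    # goal is respected. Deviations from each goal are penalised with weights and
    # the best small integer allocation is found by enumeration. All numbers are
    # invented for illustration.
    from itertools import product

    REQUIRED = [4, 3]                 # workers needed at workplaces A and B
    DOSE_PER_WORKER = [2.0, 5.0]      # mSv per worker at each workplace
    DOSE_GOAL = 20.0                  # desired collective-dose ceiling (mSv)
    W_WORK, W_DOSE = 10.0, 1.0        # priorities: staffing shortfalls hurt more

    def penalty(alloc):
        shortfall = sum(max(0, req - n) for req, n in zip(REQUIRED, alloc))
        dose = sum(n * d for n, d in zip(alloc, DOSE_PER_WORKER))
        overdose = max(0.0, dose - DOSE_GOAL)
        return W_WORK * shortfall + W_DOSE * overdose

    best = min(product(range(7), repeat=2), key=penalty)
    print(best, penalty(best))
    ```

    A realistic instance would use a linear-programming solver with explicit deviation variables rather than enumeration, but the weighted-deviation objective is the essence of goal programming.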

  11. Optimal decision making on the basis of evidence represented in spike trains.

    Science.gov (United States)

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
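
    The optimal procedure referenced here, the sequential probability ratio test, is compact enough to sketch for spike-count evidence: accumulate the Poisson log-likelihood ratio over time bins and stop at a fixed threshold. The rates, bin width, and threshold below are illustrative assumptions, not values from the article.

```python
import math

def sprt_poisson(counts, r1=40.0, r2=20.0, dt=0.01, threshold=2.0):
    """Sequential probability ratio test for two Poisson rate hypotheses.

    counts: spike counts per time bin of width dt.
    Accumulates the log-likelihood ratio log P(counts|r1) - log P(counts|r2)
    and stops once it crosses +threshold (choose 1) or -threshold (choose 2).
    """
    llr = 0.0
    for t, k in enumerate(counts, start=1):
        # Poisson LLR contribution of one bin: k*log(r1/r2) - (r1 - r2)*dt
        llr += k * math.log(r1 / r2) - (r1 - r2) * dt
        if llr >= threshold:
            return 1, t
        if llr <= -threshold:
            return 2, t
    return 0, len(counts)  # evidence exhausted without a decision

# A run of bins that each contain one spike pushes the LLR up by
# log(2) - 0.2 per bin, so the upper bound is hit on the fifth bin.
print(sprt_poisson([1] * 10))  # -> (1, 5)
```

    With noisy simulated spike trains, the same function reproduces the classic speed-accuracy tradeoff the abstract alludes to: raising `threshold` lowers the error rate at the cost of more bins per decision.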

  12. Optimization of protection as a decision-making tool for radioactive waste disposal

    International Nuclear Information System (INIS)

    Bragg, K.

    1988-03-01

    This paper discusses whether optimization of radiation protection is a workable or helpful concept or tool with respect to decisions in the field of long-term radioactive waste management. Examples of three waste types (high-level, low-level and uranium mine tailings) are used to illustrate that actual decisions are made taking account of more complex factors and that optimization of protection plays a relatively minor role. It is thus concluded that it is not a useful general tool for waste management decision-making. Discussion of the nature of the differences between technical and non-technical factors is also presented along with suggestions to help facilitate future decision-making

  13. People adopt optimal policies in simple decision-making, after practice and guidance.

    Science.gov (United States)

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.

  14. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    International Nuclear Information System (INIS)

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-01-01

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine the various types of medical information in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique for combining heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the receiver operating characteristic (ROC) curve area [area under the ROC curve (AUC)] and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p 0.10); the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p < 0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets
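
    The AUC figure of merit this record optimizes has a compact empirical form: it equals the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch of that statistic (not the authors' optimized decision-fusion code):

```python
def auc(pos_scores, neg_scores):
    """Empirical area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive case
    scores higher, counting ties as one half."""
    pairs = len(pos_scores) * len(neg_scores)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / pairs

print(auc([0.9, 0.4], [0.5, 0.1]))  # 3 of 4 pairs won -> 0.75
```

    A pAUC variant would restrict the same pair-counting to the high-specificity portion of the ROC curve before normalizing.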

  15. MICE or NICE? An economic evaluation of clinical decision rules in the diagnosis of heart failure in primary care.

    Science.gov (United States)

    Monahan, Mark; Barton, Pelham; Taylor, Clare J; Roalfe, Andrea K; Hobbs, F D Richard; Cowie, Martin; Davis, Russell; Deeks, Jon; Mant, Jonathan; McCahon, Deborah; McDonagh, Theresa; Sutton, George; Tait, Lynda

    2017-08-15

    Detection and treatment of heart failure (HF) can improve quality of life and reduce premature mortality. However, symptoms such as breathlessness are common in primary care, have a variety of causes, and not all patients require cardiac imaging. In systems where healthcare resources are limited, ensuring that patients who are likely to have HF undergo appropriate and timely investigation is vital. A decision tree was developed to assess the cost-effectiveness of using the MICE (Male, Infarction, Crepitations, Edema) decision rule compared to other diagnostic strategies to identify HF patients presenting to primary care. Data from REFER (REFer for EchocaRdiogram), a HF diagnostic accuracy study, were used to determine which patients received the correct diagnosis decision. The model adopted a UK National Health Service (NHS) perspective. The currently recommended National Institute for Health and Care Excellence (NICE) guideline for identifying patients with HF was the most cost-effective option, with a cost of £4400 per quality-adjusted life year (QALY) gained compared to a "do nothing" strategy. That is, patients presenting with symptoms suggestive of HF should be referred straight for echocardiography if they have a history of myocardial infarction or if their NT-proBNP level is ≥400 pg/ml. The MICE rule was more expensive and less effective than the other comparators. Base-case results were robust to sensitivity analyses. This represents the first cost-utility analysis comparing HF diagnostic strategies for symptomatic patients. Current guidelines in England were the most cost-effective option for identifying patients for confirmatory HF diagnosis. The low number of HF with Reduced Ejection Fraction patients (12%) in the REFER patient population limited the benefits of early detection. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
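
    The referral rule the analysis found cost-effective reduces to a two-branch check, sketched below for illustration only (the function name is ours; this is not clinical guidance):

```python
def refer_for_echo(history_of_mi, nt_probnp_pg_ml=None):
    """Sketch of the triage rule described above: refer straight for
    echocardiography if the patient has a history of myocardial infarction
    or an NT-proBNP level >= 400 pg/ml. Illustrative only."""
    if history_of_mi:
        return True
    return nt_probnp_pg_ml is not None and nt_probnp_pg_ml >= 400.0

print(refer_for_echo(False, 450.0))  # -> True
print(refer_for_echo(False, 120.0))  # -> False
```

    Passing `None` for a missing NT-proBNP result deliberately falls through to "do not refer on this branch", mirroring the fact that the rule only fires on positive evidence.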

  16. 48 CFR 6105.506 - Reconsideration of Board decision [Rule 506].

    Science.gov (United States)

    2010-10-01

    ... affected employee. Such requests must be received by the Board within 30 calendar days after the date the... the affected employee making the request is located outside the 50 states and the District of Columbia.... Mere disagreement with a decision or re-argument of points already made is not a sufficient ground for...

  17. Moral Behavior as Rule Governed Behavior: A Psychosocial Role-Theoretical Approach to Moral Decision Making.

    Science.gov (United States)

    Kurtines, William M.

    Research on moral development and behavior has traditionally emphasized person related variables such as level or stage of moral reasoning, individual differences in moral traits and dispositions, or past reinforcement history. The effects of context on moral action and decision, in contrast, have received relatively little attention. It is…

  18. Application of Bayesian Decision Theory Based on Prior Information in the Multi-Objective Optimization Problem

    Directory of Open Access Journals (Sweden)

    Xia Lei

    2010-12-01

    Full Text Available With general multi-objective optimization methods it is hard to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method on a real application of multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision making based on prior information has better global searching capability when sampling data are deficient.

  19. Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making

    Directory of Open Access Journals (Sweden)

    Ahmad Mortazavi

    2014-01-01

    Full Text Available Inventory management in retailing is a difficult and complex decision-making process involving conflicting criteria; moreover, cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is used as an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on the multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.

  20. A model of reward- and effort-based optimal decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Lionel Rigoux

    Full Text Available Costs (e.g., energetic expenditure) and benefits (e.g., food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e., the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and for detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.
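
    The core principle, maximizing a discounted, weighted difference between expected reward and motor effort, can be sketched in a few lines. The discount factor and effort weight below are assumptions for illustration, not the article's fitted model:

```python
def utility(reward, effort, delay, gamma=0.9, alpha=1.0):
    """Discounted, weighted difference between expected reward and
    foreseeable motor effort (gamma: temporal discount, alpha: effort weight).
    Parameter names and values are illustrative assumptions."""
    return (gamma ** delay) * reward - alpha * effort

def best_option(options, **params):
    """options: list of (name, reward, effort, delay) tuples;
    returns the name of the utility-maximizing option."""
    return max(options, key=lambda o: utility(o[1], o[2], o[3], **params))[0]

options = [("near_small", 2.0, 0.5, 1), ("far_large", 5.0, 2.0, 3)]
print(best_option(options))             # the larger, later reward wins here
print(best_option(options, alpha=2.0))  # doubling the effort weight flips the choice
```

    The same tradeoff drives trajectory formation in the article's model: effort grows with faster or longer movements, so the maximization jointly selects what to do and how vigorously to do it.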

  1. Discovery of Transition Rules for Cellular Automata Using Artificial Bee Colony and Particle Swarm Optimization Algorithms in Urban Growth Modeling

    Directory of Open Access Journals (Sweden)

    Fereydoun Naghibi

    2016-12-01

    Full Text Available This paper presents an advanced method in urban growth modeling to discover transition rules of cellular automata (CA) using the artificial bee colony (ABC) optimization algorithm. Also, comparisons between the simulation results of CA models optimized by the ABC algorithm and by the particle swarm optimization (PSO) algorithm, as intelligent approaches, were performed to evaluate the potential of the proposed methods. According to previous studies, swarm intelligence algorithms for solving optimization problems, such as discovering transition rules of CA in land use change/urban growth modeling, can produce reasonable results. Modeling urban growth as a dynamic process is not straightforward because of the nonlinearity and heterogeneity among the effective variables involved, which can pose a number of challenges for traditional CA. The ABC algorithm, a new and powerful swarm-based optimization algorithm, can be used to capture optimized transition rules of CA. This paper proposes a methodology based on remote sensing data for modeling urban growth with CA calibrated by the ABC algorithm. The performance of the ABC-CA, PSO-CA, and CA-logistic models in land use change detection is tested for the city of Urmia, Iran, between 2004 and 2014. The models were validated using statistical measures such as overall accuracy, figure of merit, and total operating characteristic. We showed that the overall accuracy of the ABC-CA model was 89%, which was 1.5% and 6.2% higher than those of the PSO-CA and CA-logistic models, respectively. Moreover, the allocation disagreement (simulation error) of the simulation results for the ABC-CA, PSO-CA, and CA-logistic models is 11%, 12.5%, and 17.2%, respectively.
Finally, for all evaluation indices including running time, convergence capability, flexibility, statistical measurements, and the produced spatial patterns, the ABC-CA model performance showed relative improvement and therefore its superiority was

  2. Spaceborne construction and operations planning - Decision rules for selecting EVA, telerobot, and combined work-systems

    Science.gov (United States)

    Smith, Jeffrey H.

    1992-01-01

    An approach is presented for selecting an appropriate work-system for performing construction and operations tasks by humans and telerobots. The decision to use extravehicular activity (EVA) performed by astronauts, extravehicular robotics (EVR), or a combination of EVA and EVR is determined by the ratio of the marginal costs of EVA, EVR, and IVA. The approach proposed here is useful for examining cost trade-offs between tasks and performing trade studies of task improvement techniques (human or telerobotic).

  3. A cognitive decision agent architecture for optimal energy management of microgrids

    International Nuclear Information System (INIS)

    Velik, Rosemarie; Nicolay, Pascal

    2014-01-01

    Highlights: • We propose an optimization approach for energy management in microgrids. • The optimizer emulates processes involved in human decision making. • Optimization objectives are energy self-consumption and financial gain maximization. • We gain improved optimization results in significantly reduced computation time. - Abstract: Via the integration of renewable energy and storage technologies, buildings have started to change from passive (electricity) consumers to active prosumer microgrids. Along with this development comes a shift from centralized to distributed production and consumption models, as well as discussions about the introduction of variable demand–supply-driven grid electricity prices. Together with upcoming ICT and automation technologies, these developments open up space for a wide range of novel energy management and energy trading possibilities to optimally use available energy resources. However, what is considered an optimal energy management and trading strategy heavily depends on the individual objectives and needs of a microgrid operator. Accordingly, elaborating the most suitable strategy for each particular system configuration and operator need can become quite a complex and time-consuming task, which can massively benefit from computational support. In this article, we introduce a bio-inspired cognitive decision agent architecture for optimized, goal-specific energy management in (interconnected) microgrids, which are additionally connected to the main electricity grid. For evaluating the performance of the architecture, a number of test cases are specified targeting objectives like local photovoltaic energy consumption maximization and financial gain maximization. Obtained outcomes are compared against a modified simulated annealing optimization approach in terms of objective achievement and computational effort. Results demonstrate that the cognitive decision agent architecture yields improved optimization results in

  4. Optimization of protection as a decision-making tool, for radioactive waste disposal

    International Nuclear Information System (INIS)

    Bragg, K.

    1988-01-01

    Politically-based considerations and processes, including public perception and confidence, appear to be the basis for real decisions affecting waste management activities such as siting, construction, operation and monitoring. Optimization of radiation protection is not a useful general tool for waste disposal decision making. Optimization of radiation protection is essentially a technical tool which can, under appropriate circumstances, provide a clear preference among major management options. The level of discrimination will be case-specific but, in general, only fairly coarse differences can be discriminated. The preferences determined by optimization of protection tend not to be related to the final choices made for disposal of radioactive wastes. Tools such as multi-attribute analysis are very useful as they provide a convenient means to rationalize the real decisions and give them some air of technical respectability. They do not, however, provide the primary basis for the decisions. Technical experts must develop an awareness of the non-technical approach to decision making and attempt to adjust their methods of analysis and their presentation of information to encourage dialogue rather than confrontation. Simple expressions of technical information will be needed and the use of analogues should prove helpful

  5. Conceptual air sparging decision tool in support of the development of an air sparging optimization decision tool

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    The enclosed document describes a conceptual decision tool (hereinafter, Tool) for determining applicability of and for optimizing air sparging systems. The Tool was developed by a multi-disciplinary team of internationally recognized experts in air sparging technology, led by a group of project and task managers at Parsons Engineering Science, Inc. (Parsons ES). The team included Mr. Douglas Downey and Dr. Robert Hinchee of Parsons ES, Dr. Paul Johnson of Arizona State University, Dr. Richard Johnson of Oregon Graduate Institute, and Mr. Michael Marley of Envirogen, Inc. User Community Panel Review was coordinated by Dr. Robert Siegrist of Colorado School of Mines (also of Oak Ridge National Laboratory) and Dr. Thomas Brouns of Battelle/Pacific Northwest Laboratory. The Tool is intended to provide guidance to field practitioners and environmental managers for evaluating the applicability and optimization of air sparging as a remedial action technique.

  6. Optimal decision procedures for satisfiability in fragments of alternating-time temporal logics

    DEFF Research Database (Denmark)

    Goranko, Valentin; Vester, Steen

    2014-01-01

    We consider several natural fragments of the alternating-time temporal logics ATL* and ATL with restrictions on the nesting between temporal operators and strategic quantifiers. We develop optimal decision procedures for satisfiability in these fragments, showing that they have much lower complexity...

  7. Package of procedures for solving optimization tasks by the branch-and-bound method

    OpenAIRE

    Nestor, Natalia

    2012-01-01

    The practical aspects of implementing the branch-and-bound method are examined. The structure of a package of procedures for performing the basic operations in solving optimization tasks is described. The package is designed as a programmatic kernel that can be used for various exhaustive-search tasks with backtracking.
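
    The record gives no code, but the branch-and-bound scheme it describes can be sketched on the classic 0/1 knapsack task: branch on take/skip decisions and prune any subtree whose optimistic (fractional) bound cannot beat the best solution found so far. This is entirely illustrative and is not the procedure package described in the record.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Minimal 0/1 knapsack solver by branch and bound.
    Items are explored in value/weight order; a fractional (LP relaxation)
    bound prunes branches that cannot beat the incumbent solution."""
    items = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(idx, value, room):
        # Optimistic bound: fill the remaining room fractionally, best ratio first.
        for i in items[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def branch(idx, value, room):
        nonlocal best
        if value > best:
            best = value
        if idx == len(items) or bound(idx, value, room) <= best:
            return  # prune: this subtree cannot improve on the incumbent
        i = items[idx]
        if weights[i] <= room:            # branch 1: take item i
            branch(idx + 1, value + values[i], room - weights[i])
        branch(idx + 1, value, room)      # branch 2: skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

    The recursion is exactly the "exhaustive search with backtracking" kernel the record mentions; the bound is what turns it from plain enumeration into branch and bound.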

  8. Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations

    Science.gov (United States)

    Papadouris, Nicos

    2012-01-01

    This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…

  9. Do different methods of modeling statin treatment effectiveness influence the optimal decision?

    NARCIS (Netherlands)

    B.J.H. van Kempen (Bob); B.S. Ferket (Bart); A. Hofman (Albert); S. Spronk (Sandra); E.W. Steyerberg (Ewout); M.G.M. Hunink (Myriam)

    2012-01-01

    textabstractPurpose. Modeling studies that evaluate statin treatment for the prevention of cardiovascular disease (CVD) use different methods to model the effect of statins. The aim of this study was to evaluate the impact of using different modeling methods on the optimal decision found in such

  10. Privacy preserving mechanisms for optimizing cross-organizational collaborative decisions based on the Karmarkar algorithm

    NARCIS (Netherlands)

    Zhu, H.; Liu, H.W.; Ou, Carol; Davison, R.M.; Yang, Z.R.

    2017-01-01

    Cross-organizational collaborative decision-making involves a great deal of private information which companies are often reluctant to disclose, even when they need to analyze data collaboratively. The lack of effective privacy-preserving mechanisms for optimizing cross-organizational collaborative

  11. Methods for providing decision makers with optimal solutions for multiple objectives that change over time

    CSIR Research Space (South Africa)

    Greeff, M

    2010-09-01

    Full Text Available Decision making - with the goal of finding the optimal solution - is an important part of modern life. For example: In the control room of an airport, the goals or objectives are to minimise the risk of airplanes colliding, minimise the time that a...

  12. Towards optimal decision making in personalized medicine : Potential value assessment of biomarkers in heart failure exemplars

    NARCIS (Netherlands)

    Cao, Qi

    2016-01-01

    Treatment selection based on average effects observed in an entire target population masks variation among patients (heterogeneity) and may result in less than optimal decision making. Personalized medicine is a new and complex concept, which aims to improve health by offering more tailored and

  13. Optimization of decision making to avoid stochastically predicted air traffic conflicts

    Directory of Open Access Journals (Sweden)

    В.М. Васильєв

    2005-01-01

    Full Text Available A method is proposed for optimizing decision making when planning an aircraft trajectory to avoid a potential conflict while respecting the minimum separation standard. Evaluation and monitoring of the conflict probability are performed using a probabilistic composite method.

  14. Reward optimization in the primate brain: a probabilistic model of decision making under uncertainty.

    Directory of Open Access Journals (Sweden)

    Yanping Huang

    Full Text Available A key problem in neuroscience is understanding how the brain makes decisions under uncertainty. Important insights have been gained using tasks such as the random dots motion discrimination task, in which the subject makes decisions based on noisy stimuli. A descriptive model known as the drift diffusion model has previously been used to explain psychometric and reaction time data from such tasks, but to fully explain the data, one is forced to make ad-hoc assumptions such as a time-dependent collapsing decision boundary. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We propose an alternative model for decision making based on POMDPs. We show that the motion discrimination task reduces to the problems of (1) computing beliefs (posterior distributions) over the unknown direction and motion strength from noisy observations in a Bayesian manner, and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) is shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to a discrimination decision. We show that the model accounts for both accuracy and reaction time as a function of stimulus strength, as well as different speed-accuracy conditions in the random dots task.
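
    The two computations the model reduces to, Bayesian belief updating and a threshold on beliefs, can be sketched for the two-direction case with known motion strength. This is a simplification: the paper's POMDP also infers motion strength and derives, rather than assumes, the collapsing boundary. All parameter values below are illustrative.

```python
import math

def posterior_update(p, x, mu=0.2, sigma=1.0):
    """One Bayesian update of P(direction = right) given a noisy sample x,
    assuming x ~ N(+mu, sigma) if rightward and x ~ N(-mu, sigma) if leftward."""
    lr = math.exp((2 * mu * x) / sigma ** 2)  # likelihood ratio right/left
    odds = lr * p / (1 - p)
    return odds / (1 + odds)

def decide(samples, threshold0=0.95, collapse=0.002):
    """Accumulate evidence; commit once the posterior crosses a decision
    threshold that collapses linearly over time (a simple stand-in for the
    time-dependent boundary the optimal policy is equivalent to)."""
    p = 0.5
    for t, x in enumerate(samples, start=1):
        p = posterior_update(p, x)
        th = max(0.5, threshold0 - collapse * t)
        if p >= th:
            return "right", t
        if p <= 1 - th:
            return "left", t
    return "undecided", len(samples)

# Consistently rightward samples drive the posterior up until it meets
# the (slowly collapsing) threshold.
print(decide([1.0] * 20))  # -> ('right', 7)
```

    Because the threshold shrinks with elapsed time, weak stimuli still yield a decision eventually, reproducing the reaction-time pattern the abstract describes without adding the boundary as an ad-hoc assumption to the inference itself.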

  15. Evaluation of the need for stochastic optimization of out-of-core nuclear fuel management decisions

    International Nuclear Information System (INIS)

    Thomas, R.L. Jr.

    1989-01-01

    Work has been completed on utilizing mathematical optimization techniques to optimize out-of-core nuclear fuel management decisions. The objective of such optimization is to minimize the levelized fuel cycle cost over some planning horizon. Typical decision variables include feed enrichments and numbers of assemblies, burnable poison requirements, and burned fuel to reinsert for every cycle in the planning horizon. Engineering constraints imposed consist of such items as discharge burnup limits, a maximum enrichment limit, and target cycle energy productions. Earlier the authors reported on the development of the OCEON code, which employs the integer Monte Carlo programming method as the mathematical optimization method. The discharge burnups, feed enrichment and burnable poison requirements are evaluated, initially employing a linear reactivity core physics model, and refined using a coarse mesh nodal model. The economic evaluation is completed using a modification of the CINCAS methodology. The interest now is to assess the need for stochastic optimization, which would account for uncertainties in cost components and cycle energy production. The implication of the present studies is that stochastic optimization with regard to cost component uncertainties need not be completed, since deterministic optimization will identify nearly the same family of near-optimum cycling schemes

  16. A methodological model to assist in the optimization and risk management of mining investment decisions

    International Nuclear Information System (INIS)

    Botin, Jose A; Guzman, Ronald R; Smith, Martin L

    2011-01-01

    Identifying, quantifying, and minimizing technical risks associated with investment decisions is a key challenge for mineral industry decision makers and investors. However, risk analyses in most bankable mine feasibility studies are based on the stochastic modeling of the project Net Present Value (NPV), which, in most cases, fails to provide decision makers with a truly comprehensive analysis of the risks associated with technical and management uncertainty and, as a result, is of little use for risk management and project optimization. This paper presents a value-chain risk management approach where project risk is evaluated for each step of the project life cycle, from exploration to mine closure, and risk management is performed as a part of a stepwise value-added optimization process.

  17. OPTIMAL BUSINESS DECISION SYSTEM FOR MULTINATIONALS: A MULTIFACTOR ANALYSIS OF SELECTED MANUFACTURING FIRMS

    Directory of Open Access Journals (Sweden)

    Oforegbunam Thaddeus Ebiringa

    2011-03-01

    Full Text Available Traditional MIS has been made more effective through the integration of organization, human and technology factors into a decision matrix. The study is motivated by the need to find an optimal mix of interactive factors that will optimize the result of the decision to apply ICT to manufacturing processes. The study used a factor analysis model based on the sampled opinions of forty (40) operations/production managers and two thousand (2000) production-line workers of three leading manufacturing firms: Unilever Plc., PZ Plc. and Nigerian Breweries Plc., operating in the Aba Industrial Estate of Nigeria. The results show that a progressive mixed factor loading matrix, based on the preferred ordered importance of resource factors in the formulation, implementation, monitoring, control and evaluation of the ICT projects of the selected firms, led to an average capability improvement of 0.764 in decision efficiency. This is considered strategic for achieving balanced corporate growth and development.

  18. Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets

    DEFF Research Database (Denmark)

    Saez Gallego, Javier

    This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy ... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers ...

  19. SOLVING OPTIMAL ASSEMBLY LINE CONFIGURATION TASK BY MULTIOBJECTIVE DECISION MAKING METHODS

    Directory of Open Access Journals (Sweden)

    Ján ČABALA

    2017-06-01

    Full Text Available This paper deals with finding the optimal configuration of an automated assembly line model located at the Department of Cybernetics and Artificial Intelligence (DCAI). To solve this problem, a Stateflow model of each configuration was created to simulate the behaviour of that particular assembly line configuration. Outputs from these models were used as inputs to the multi-objective decision-making process. Multi-objective decision-making methods were subsequently used to find the optimal configuration of the assembly line. The paper describes the whole process of solving this task, from building the models to choosing the best configuration. Specifically, the problem was resolved using the experts’ evaluation method for evaluating the weights of each decision-making criterion, while the ELECTRE III, TOPSIS and AGREPREF methods were used for ordering the possible solutions from the most to the least suitable alternative. The obtained results were compared and the final solution of this multi-objective decision-making problem was chosen.
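
    Of the ranking methods named, TOPSIS is the most compact to illustrate: normalize, weight, and score each alternative by its relative closeness to the ideal solution. The criteria values and weights below are made up for illustration; this is not the assembly-line data from the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j (assumed nonzero columns);
    benefit[j]: True if larger is better on criterion j, False for cost criteria."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per criterion direction.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores

# Three hypothetical line configurations scored on throughput (benefit) and cost.
scores = topsis([[10, 5], [8, 3], [6, 2]], weights=[0.5, 0.5], benefit=[True, False])
print(scores)
```

    The alternative with the highest closeness coefficient is the recommended configuration; criterion weights, obtained in the paper via the experts' evaluation method, are simply passed in as `weights`.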

  20. Situation-assessment and decision-aid production-rule analysis system for nuclear plant monitoring and emergency preparedness

    International Nuclear Information System (INIS)

    Gvillo, D.; Ragheb, M.; Parker, M.; Swartz, S.

    1987-01-01

    A Production-Rule Analysis System is developed for Nuclear Plant Monitoring. The signals generated by the Zion-1 Plant are considered. A Situation-Assessment and Decision-Aid capability is provided for monitoring the integrity of the Plant Radiation, the Reactor Coolant, the Fuel Clad, and the Containment Systems. A total of 41 signals are currently fed as facts to an Inference Engine functioning in the backward-chaining mode and built along the same structure as the E-Mycin system. The Goal-Tree constituting the Knowledge Base was generated using a representation in the form of Fault Trees deduced from plant procedures information. The system is constructed in support of the Data Analysis and Emergency Preparedness tasks at the Illinois Radiological Emergency Assessment Center (REAC)

  1. Situation-Assessment And Decision-Aid Production-Rule Analysis System For Nuclear Plant Monitoring And Emergency Preparedness

    Science.gov (United States)

    Gvillo, D.; Ragheb, M.; Parker, M.; Swartz, S.

    1987-05-01

    A Production-Rule Analysis System is developed for Nuclear Plant Monitoring. The signals generated by the Zion-1 Plant are considered. A Situation-Assessment and Decision-Aid capability is provided for monitoring the integrity of the Plant Radiation, the Reactor Coolant, the Fuel Clad, and the Containment Systems. A total of 41 signals are currently fed as facts to an Inference Engine functioning in the backward-chaining mode and built along the same structure as the E-Mycin system. The Goal-Tree constituting the Knowledge Base was generated using a representation in the form of Fault Trees deduced from plant procedures information. The system is constructed in support of the Data Analysis and Emergency Preparedness tasks at the Illinois Radiological Emergency Assessment Center (REAC).

  2. An Analysis of Information Structure and Optimal Transfer Pricing decision Rules for Decentralized Firms

    Science.gov (United States)

    1980-12-01


  3. Empirical validation of a real options theory based method for optimizing evacuation decisions within chemical plants.

    Science.gov (United States)

    Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K

    2011-02-15

    This article empirically assesses and validates a methodology to make evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It has been shown in this article that in realistic industrial settings, suboptimal interventions may result in case the prospect to obtain additional information at later stages of the decision process is ignored. Empirical results also show that implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may be very different from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions are tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. Decision-Aiding and Optimization for Vertical Navigation of Long-Haul Aircraft

    Science.gov (United States)

    Patrick, Nicholas J. M.; Sheridan, Thomas B.

    1996-01-01

    Most decisions made in the cockpit are related to safety, and have therefore been proceduralized in order to reduce risk. There are very few which are made on the basis of a value metric such as economic cost. One which can be shown to be value based, however, is the selection of a flight profile. Fuel consumption and flight time both have a substantial effect on aircraft operating cost, but they cannot be minimized simultaneously. In addition, winds, turbulence, and performance vary widely with altitude and time. These factors make it important and difficult for pilots to (a) evaluate the outcomes associated with a particular trajectory before it is flown and (b) decide among possible trajectories. The two elements of this problem considered here are: (1) determining what constitutes optimality, and (2) finding optimal trajectories. Pilots and dispatchers from major U.S. airlines were surveyed to determine which attributes of the outcome of a flight they considered the most important. Avoiding turbulence, for passenger comfort, topped the list of items which were not safety related. Pilots' decision making about the selection of flight profile on the basis of flight time, fuel burn, and exposure to turbulence was then observed. Of the several behavioral and prescriptive decision models invoked to explain the pilots' choices, utility maximization is shown to best reproduce the pilots' decisions. After considering more traditional methods for optimizing trajectories, a novel method is developed using a genetic algorithm (GA) operating on a discrete representation of the trajectory search space. The representation is a sequence of command altitudes, and was chosen to be compatible with the constraints imposed by Air Traffic Control, and with the training given to pilots. Since trajectory evaluation for the GA is performed holistically, a wide class of objective functions can be optimized easily. Also, using the GA it is possible to compare the costs associated with
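
    The abstract describes a GA operating on a sequence of command altitudes. The sketch below is a toy version of that idea with an invented cost function (a fuel proxy that favours high cruise plus a penalty on altitude changes), not the study's aircraft performance model:

    ```python
    import random

    def ga_profile(n_segments=8, levels=range(28, 41), pop_size=30,
                   generations=60, seed=3):
        """Toy GA over command-altitude sequences (flight levels, x1000 ft)."""
        rng = random.Random(seed)
        levels = list(levels)

        def cost(profile):
            fuel = sum(50 - alt for alt in profile)             # cheaper up high
            changes = sum(abs(a - b) for a, b in zip(profile, profile[1:]))
            return fuel + 2 * changes                           # penalize climbs

        pop = [[rng.choice(levels) for _ in range(n_segments)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            survivors = pop[: pop_size // 2]                    # elitist truncation
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, n_segments)
                child = a[:cut] + b[cut:]                       # one-point crossover
                if rng.random() < 0.3:                          # point mutation
                    child[rng.randrange(n_segments)] = rng.choice(levels)
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)
    ```

    Because the GA evaluates each candidate profile holistically, swapping in a different objective (e.g. adding a turbulence-exposure term) requires no change to the search machinery.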

  5. Derivation of optimal joint operating rules for multi-purpose multi-reservoir water-supply system

    Science.gov (United States)

    Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wang, Chao; Lei, Xiao-hui; Xiong, Yi-song; Zhang, Wei

    2017-08-01

    The derivation of joint operating policy is a challenging task for a multi-purpose multi-reservoir system. This study proposed an aggregation-decomposition model to guide the joint operation of multi-purpose multi-reservoir system, including: (1) an aggregated model based on the improved hedging rule to ensure the long-term water-supply operating benefit; (2) a decomposed model to allocate the limited release to individual reservoirs for the purpose of maximizing the total profit of the facing period; and (3) a double-layer simulation-based optimization model to obtain the optimal time-varying hedging rules using the non-dominated sorting genetic algorithm II, whose objectives were to minimize maximum water deficit and maximize water supply reliability. The water-supply system of Li River in Guangxi Province, China, was selected for the case study. The results show that the operating policy proposed in this study is better than conventional operating rules and aggregated standard operating policy for both water supply and hydropower generation due to the use of hedging mechanism and effective coordination among multiple objectives.
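
    The hedging rules themselves are not given in the abstract; a common two-point form rations releases linearly between two water-availability thresholds. A minimal sketch with hypothetical parameter names:

    ```python
    def hedging_release(available, demand, swp, ewa):
        """Two-point hedging rule: ration supply when water availability is low.

        available : water available this period (storage + forecast inflow)
        demand    : target delivery
        swp, ewa  : starting and ending water-availability points of the
                    hedging band (swp < ewa)
        """
        if available >= ewa:
            return demand                  # normal operation: meet full demand
        if available <= swp:
            return available               # severe shortage: release all there is
        # linear rationing between the two hedging points
        frac = (available - swp) / (ewa - swp)
        return swp + frac * (demand - swp)
    ```

    The rule is continuous at both thresholds, which is what lets an optimizer such as NSGA-II tune `swp` and `ewa` against the water-deficit and reliability objectives.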

  6. A Multiswarm Optimizer for Distributed Decision Making in Virtual Enterprise Risk Management

    Directory of Open Access Journals (Sweden)

    Yichuan Shao

    2012-01-01

    Full Text Available We develop an optimization model for risk management in a virtual enterprise environment based on a novel multiswarm particle swarm optimizer called PS2O. The main idea of PS2O is to extend the single population PSO to the interacting multiswarms model by constructing hierarchical interaction topology and enhanced dynamical update equations. With the hierarchical interaction topology, a suitable diversity in the whole population can be maintained. At the same time, the enhanced dynamical update rule significantly speeds up the multiswarm to converge to the global optimum. With five mathematical benchmark functions, PS2O is proved to have considerable potential for solving complex optimization problems. PS2O is then applied to risk management in a virtual enterprise environment. Simulation results demonstrate that the PS2O algorithm is more feasible and efficient than the PSO algorithm in solving this real-world problem.
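
    The published PS2O update equations are not reproduced in the abstract; as a rough illustration of the multiswarm idea (several swarms whose particles are also attracted to a neighbouring swarm's best), here is a toy sketch with invented coefficients, tested on the sphere function:

    ```python
    import random

    def sphere(x):
        return sum(xi * xi for xi in x)

    def multiswarm_pso(n_swarms=4, n_particles=10, dim=5, iters=200, seed=1):
        rng = random.Random(seed)
        pos = [[[rng.uniform(-5, 5) for _ in range(dim)]
                for _ in range(n_particles)] for _ in range(n_swarms)]
        vel = [[[0.0] * dim for _ in range(n_particles)] for _ in range(n_swarms)]
        pbest = [[p[:] for p in swarm] for swarm in pos]
        sbest = [min(swarm, key=sphere)[:] for swarm in pos]
        w, c1, c2, c3 = 0.6, 1.2, 1.2, 0.5
        for _ in range(iters):
            for k in range(n_swarms):
                nbest = sbest[(k + 1) % n_swarms]   # ring topology between swarms
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2, r3 = rng.random(), rng.random(), rng.random()
                        v = (w * vel[k][i][d]
                             + c1 * r1 * (pbest[k][i][d] - pos[k][i][d])
                             + c2 * r2 * (sbest[k][d] - pos[k][i][d])
                             + c3 * r3 * (nbest[d] - pos[k][i][d]))
                        vel[k][i][d] = max(-4.0, min(4.0, v))  # velocity clamp
                        pos[k][i][d] += vel[k][i][d]
                    if sphere(pos[k][i]) < sphere(pbest[k][i]):
                        pbest[k][i] = pos[k][i][:]
                        if sphere(pbest[k][i]) < sphere(sbest[k]):
                            sbest[k] = pbest[k][i][:]
        return min(sbest, key=sphere)

    best = multiswarm_pso()
    ```

    The extra attraction term toward a neighbouring swarm's best is what distinguishes the multiswarm variant from a single-population PSO.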

  7. Dimensions of design space: a decision-theoretic approach to optimal research design.

    Science.gov (United States)

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
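
    ENBS calculations of this kind can be sketched for the simplest conjugate case: a normal prior on incremental net benefit, updated by a sample mean, with EVSI estimated by preposterior Monte Carlo. The model and all numbers below are illustrative, not the zanamivir example:

    ```python
    import random
    import statistics

    def enbs_normal(mu0, sigma0, sigma, n, pop=10000, cost_per_sample=100.0,
                    sims=20000, seed=0):
        """Expected net benefit of sampling: prior theta ~ N(mu0, sigma0^2)
        on incremental net benefit; a study of size n yields
        xbar ~ N(theta, sigma^2 / n)."""
        rng = random.Random(seed)
        prior_value = max(mu0, 0.0)              # decide now: adopt iff E[theta] > 0
        se = (sigma0 ** 2 + sigma ** 2 / n) ** 0.5   # predictive sd of xbar
        shrink = sigma0 ** 2 / (sigma0 ** 2 + sigma ** 2 / n)
        post_values = []
        for _ in range(sims):
            xbar = rng.gauss(mu0, se)            # preposterior draw of sample mean
            post_values.append(max(mu0 + shrink * (xbar - mu0), 0.0))
        evsi = statistics.fmean(post_values) - prior_value
        return pop * evsi - n * cost_per_sample
    ```

    Searching this function over `n` (and, with more parameters, over allocations between arms) is a one-dimensional slice of the design-space search the paper describes.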

  8. A modeling framework for optimal long-term care insurance purchase decisions in retirement planning.

    Science.gov (United States)

    Gupta, Aparna; Li, Lepeng

    2004-05-01

    The level of need and costs of obtaining long-term care (LTC) during retired life require that planning for it is an integral part of retirement planning. In this paper, we divide retirement planning into two phases, pre-retirement and post-retirement. On the basis of four interrelated models for health evolution, wealth evolution, LTC insurance premium and coverage, and LTC cost structure, a framework for optimal LTC insurance purchase decisions in the pre-retirement phase is developed. Optimal decisions are obtained by developing a trade-off between post-retirement LTC costs and LTC insurance premiums and coverage. Two-way branching models are used to model stochastic health events and asset returns. The resulting optimization problem is formulated as a dynamic programming problem. We compare the optimal decision under two insurance purchase scenarios: one assumes that insurance is purchased for good and other assumes it may be purchased, relinquished and re-purchased. Sensitivity analysis is performed for the retirement age.
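
    The paper's four interrelated models are not reproduced in the abstract; the backward-induction structure of such a dynamic program can still be sketched on a toy two-state health chain with invented costs and probabilities:

    ```python
    def ltc_policy(T=20, premium=2.0, ltc_cost=30.0, coverage=0.8,
                   p_care=(0.05, 0.25), discount=0.96):
        """Backward induction over periods and health states (0=good, 1=bad).
        p_care[s]: probability of a care episode given current state s; a care
        episode also moves the retiree to the bad state next period (toy model)."""
        V = {0: 0.0, 1: 0.0}                     # terminal value
        policy = []
        for t in reversed(range(T)):
            newV, action = {}, {}
            for s in (0, 1):
                p = p_care[s]
                cont = discount * (p * V[1] + (1 - p) * V[0])
                cost_insured = premium + p * (1 - coverage) * ltc_cost + cont
                cost_bare = p * ltc_cost + cont
                if cost_insured <= cost_bare:
                    newV[s], action[s] = cost_insured, "insure"
                else:
                    newV[s], action[s] = cost_bare, "self-pay"
            V = newV
            policy.append((t, action))
        policy.reverse()
        return policy, V
    ```

    With these toy numbers, insurance is only worthwhile in the bad health state; the paper's richer model additionally tracks wealth and allows relinquishing and re-purchasing coverage.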

  9. Convergence of decision rules for value-based pricing of new innovative drugs.

    Science.gov (United States)

    Gandjour, Afschin

    2015-04-01

    Given the high costs of innovative new drugs, most European countries have introduced policies for price control, in particular value-based pricing (VBP) and international reference pricing. The purpose of this study is to describe how profit-maximizing manufacturers would optimally adjust their launch sequence to these policies and how VBP countries may best respond. To decide about the launching sequence, a manufacturer must consider a tradeoff between price and sales volume in any given country as well as the effect of price in a VBP country on the price in international reference pricing countries. Based on the manufacturer's rationale, it is best for VBP countries in Europe to implicitly collude in the long term and set cost-effectiveness thresholds at the level of the lowest acceptable VBP country. This way, international reference pricing countries would also converge towards the lowest acceptable threshold in Europe.

  10. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.

    2018-02-12

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
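
    For reference, the classical (non-accelerated) BFGS inverse-Hessian update that the paper speeds up can be written in a few lines of NumPy; the accelerated variants of the paper are not reproduced here:

    ```python
    import numpy as np

    def bfgs_inverse_update(H, s, y):
        """Standard BFGS update of an inverse-Hessian estimate H, given the
        step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k."""
        rho = 1.0 / (y @ s)                      # requires y.s > 0 (curvature)
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)
    ```

    On a quadratic with Hessian A (so y = A s), updating along a spanning set of directions recovers A⁻¹ exactly, which is the sense in which quasi-Newton updates approximate the inverse Hessian.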

  11. Design and optimization of a ground water monitoring system using GIS and multicriteria decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, D.; Gupta, A.D.; Ramnarong, V.

    1998-12-31

    A GIS-based methodology has been developed to design a ground water monitoring system and implemented for a selected area in Mae-Klong River Basin, Thailand. A multicriteria decision-making analysis has been performed to optimize the network system based on major criteria which govern the monitoring network design such as minimization of cost of construction, reduction of kriging standard deviations, etc. The methodology developed in this study is a new approach to designing monitoring networks which can be used for any site considering site-specific aspects. It makes it possible to choose the best monitoring network from various alternatives based on the prioritization of decision factors.

  12. AngelStow: A Commercial Optimization-Based Decision Support Tool for Stowage Planning

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto; Jensen, Rune Møller; Guilbert, Nicolas

    save port fees, optimize use of vessel capacity, and reduce bunker consumption. Stowage Coordinators (SCs) produce these plans manually with the help of graphical tools, but high-quality SPs are hard to generate with the limited support they provide. In this abstract, we introduce AngelStow, which is a commercial optimization-based decision support tool for stowing container vessels, developed in collaboration between Ange Optimization and The IT University of Copenhagen. The tool assists SCs in the process of generating SPs interactively, focusing on satisfying and optimizing constraints and objectives that are tedious to deal with for humans, while letting the SCs use their expertise to deal with hard combinatorial objectives and corner cases.

  13. Optimal management of replacement heifers in a beef herd: a model for simultaneous optimization of rearing and breeding decisions.

    Science.gov (United States)

    Stygar, A H; Kristensen, A R; Makulska, J

    2014-08-01

    The aim of this study was to provide farmers an efficient tool for supporting optimal decisions in the beef heifer rearing process. The complexity of beef heifer management prompted the development of a model including decisions on the feeding level during prepuberty (age optimal rearing strategy was found by maximizing the total discounted net revenues from the predicted future productivity of the Polish Limousine heifers defined as the cumulative BW of calves born from a cow calved until the age of 5 yr, standardized on the 210th day of age. According to the modeled optimal policy, heifers fed during the whole rearing period at the ADG of 810 g/d and generally weaned after the maximum suckling period of 9 mo should already be bred at the age of 13.2 mo and BW constituting 55.6% of the average mature BW. Based on the optimal strategy, 52% of all heifers conceived from May to July and calved from February to April. This optimal rearing pattern resulted in an average net return of EUR 311.6 per pregnant heifer. It was found that the economic efficiency of beef operations can be improved by applying different herd management practices to those currently used in Poland. Breeding at 55.6% of the average mature BW, after a shorter and less expensive rearing period, resulted in an increase in the average net return per heifer by almost 18% compared to the conventional system, in which heifers were bred after attaining 65% of the mature BW. Extension of the rearing period by 2.5 mo (breeding at the age 15.7 mo), due to a prepubertal growth rate lowered by 200 g, reduced the average net return per heifer by 6.2% compared to the results obtained under the basic model assumptions. In the future, the model may also be extended to investigate additional aspects of the beef heifer development, such as the environmental impacts of various heifer management decisions.

  14. Optimal Power Flow Using Gbest-Guided Cuckoo Search Algorithm with Feedback Control Strategy and Constraint Domination Rule

    Directory of Open Access Journals (Sweden)

    Gonggui Chen

    2017-01-01

    Full Text Available The optimal power flow (OPF) is well known as a significant optimization tool for the secure and economic operation of power systems, and the OPF problem is a complex, nonlinear, nondifferentiable programming problem. This paper therefore proposes a Gbest-guided cuckoo search algorithm with a feedback control strategy and a constraint domination rule, named the FCGCS algorithm, for solving the OPF problem and obtaining an optimal solution. The FCGCS algorithm is guided by the global best solution to strengthen its exploitation ability. The feedback control strategy is devised to dynamically regulate the control parameters according to actual and specific feedback values during the simulation process. The constraint domination rule can efficiently handle inequality constraints on state variables, and is superior to the traditional penalty function method. The performance of the FCGCS algorithm is tested and validated on the IEEE 30-bus and IEEE 57-bus example systems, and simulation results are compared with those obtained by different methods in the recent literature. The comparison results indicate that the FCGCS algorithm can provide high-quality feasible solutions for different OPF problems.
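
    A constraint domination rule of the kind mentioned (commonly attributed to Deb) replaces penalty functions with a simple comparator: feasible solutions beat infeasible ones, less-violating beats more-violating, and feasible solutions are compared by objective. A minimal sketch, assuming each solution carries an objective value and a total constraint violation:

    ```python
    def better(a, b):
        """Constraint-domination comparison of two minimization solutions.
        a, b: (objective, total_constraint_violation), violation >= 0.
        Returns True if a is preferred over b."""
        fa, va = a
        fb, vb = b
        if va == 0 and vb == 0:
            return fa < fb          # both feasible: compare objectives
        if va == 0 or vb == 0:
            return va == 0          # feasible always beats infeasible
        return va < vb              # both infeasible: smaller violation wins
    ```

    Unlike penalty methods, this comparator needs no penalty-weight tuning, which is the practical advantage the abstract alludes to.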

  15. Optimal monetary policy rules: the problem of stability under heterogeneous learning

    Czech Academy of Sciences Publication Activity Database

    Bogomolova, Anna; Kolyuzhnov, Dmitri

    -, č. 379 (2008), s. 1-34 ISSN 1211-3298 R&D Projects: GA MŠk LC542 Institutional research plan: CEZ:AV0Z70850503 Keywords : monetary policy rules * New Keynesian model * adaptive learning Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp379.pdf

  16. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they are convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.
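
    The holding-time model can be made concrete: under the Nakagawa-Osaki form of the discrete Weibull distribution with survival function q**(k**beta), the pmf telescopes and sums to one. Parameter values below are illustrative, not calibrated to any road network:

    ```python
    def discrete_weibull_pmf(k, q=0.8, beta=1.5):
        """P(T = k) for a discrete Weibull holding time, k = 1, 2, ...
        Survival: P(T > k) = q ** (k ** beta), with 0 < q < 1, beta > 0."""
        return q ** ((k - 1) ** beta) - q ** (k ** beta)

    # the pmf telescopes, so probabilities sum to 1
    total = sum(discrete_weibull_pmf(k) for k in range(1, 200))
    ```

    Setting beta = 1 recovers the geometric holding times of an ordinary discrete-time Markov chain; beta != 1 is exactly what lets the semi-Markov model capture duration-dependent deterioration.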

  17. SIMPLE DECISION RULES REDUCE REINJURY RISK AFTER ANTERIOR CRUCIATE LIGAMENT RECONSTRUCTION

    Science.gov (United States)

    Grindem, Hege; Snyder-Mackler, Lynn; Moksnes, Håvard; Engebretsen, Lars; Risberg, May Arna

    2016-01-01

    Background Knee reinjury after anterior cruciate ligament (ACL) reconstruction is common and increases the risk of osteoarthritis. There is sparse evidence to guide return to sport (RTS) decisions in this population. Objectives To assess the relationship between knee reinjury after ACL reconstruction and 1) return to level I sports, 2) timing of return to sports, and 3) knee function prior to return. Methods 106 patients who participated in pivoting sports took part in this prospective two-year cohort study. Sports participation and knee reinjury were recorded monthly. Knee function was assessed with the Knee Outcome Survey–Activities of Daily Living Scale, a global rating scale of function, and quadriceps strength and hop test symmetry. Passing the RTS criteria was defined as scoring >90 on all tests; failing any test counted as failure. Results Patients who returned to level I sports had a 4.32 times higher reinjury rate (p=0.048) than those who did not. The reinjury rate was significantly reduced by 51% for each month RTS was delayed until 9 months after surgery, after which no further risk reduction was observed. 38.2% of those who failed RTS criteria suffered reinjuries versus 5.6% of those who passed (HR: 0.16, p=0.075). More symmetrical quadriceps strength prior to return significantly reduced the knee reinjury rate. Conclusion Returning to level I sports after ACL reconstruction leads to a more than 4-fold increase in reinjury rates over 2 years. Return to sport 9 months or later after surgery and more symmetrical quadriceps strength prior to return substantially reduce the reinjury rate. PMID:27162233
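
    The pass/fail rule described reduces to a one-line check. The threshold of 90 is the one stated in the abstract; the argument names are illustrative labels for the four tests in the battery:

    ```python
    def pass_rts_criteria(kos_adls, global_rating, quad_symmetry, hop_symmetry,
                          threshold=90.0):
        """Return-to-sport rule from the study: pass only if every score or
        symmetry index exceeds the threshold (scores >90 on all tests)."""
        scores = (kos_adls, global_rating, quad_symmetry, hop_symmetry)
        return all(s > threshold for s in scores)
    ```

    The study's finding that only 5.6% of patients who passed this battery were reinjured (versus 38.2% of those who failed) is what makes such a simple conjunctive rule clinically useful.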

  18. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex and lacking in interpretability, and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, from four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly-extended types of the Re-RX algorithm and high performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.

  19. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    OpenAIRE

    Rajesh Kumar; S.C. Kaushik; Raj Kumar; Ranjana Hans

    2016-01-01

    Brayton heat engine model is developed in MATLAB simulink environment and thermodynamic optimization based on finite time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using evolutionary algorithm based on NSGA-II. Pareto optimal frontier between triple and dual objectives is obtained and best optimal value is s...

  20. Decision theoretical justification of optimization criteria for near-real-time accountancy procedures

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1992-01-01

    In the beginning of nuclear material safeguards, emphasis was placed on safe detection of diversion of nuclear material. Later, the aspect of timely detection became equally important. Since there is a trade-off between these two objectives, the question of an appropriate compromise was raised. In this paper, a decision theoretical framework is presented in which the objectives of the two players, inspector and inspectee, are expressed in terms of general utility functions. Within this framework, optimal safeguards strategies are defined, and furthermore, conditions are formulated under which the optimization criteria corresponding to the objectives mentioned above can be justified.

  1. Multiobjective Optimization of Aircraft Maintenance in Thailand Using Goal Programming: A Decision-Support Model

    Directory of Open Access Journals (Sweden)

    Yuttapong Pleumpirom

    2012-01-01

    Full Text Available The purpose of this paper is to develop the multiobjective optimization model in order to evaluate suppliers for aircraft maintenance tasks, using goal programming. The authors have developed a two-step process. The model will firstly be used as a decision-support tool for managing demand, by using aircraft and flight schedules to evaluate and generate aircraft-maintenance requirements, including spare-part lists. Secondly, they develop a multiobjective optimization model by minimizing cost, minimizing lead time, and maximizing the quality under various constraints in the model. Finally, the model is implemented in the actual airline's case.
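
    The goal-programming step (minimize cost, minimize lead time, maximize quality) can be illustrated by scoring suppliers on weighted unwanted deviations from the goals. This brute-force sketch stands in for a proper goal program; the data and field names are invented:

    ```python
    def goal_programming_choice(suppliers, targets, weights):
        """Pick the supplier minimizing the weighted sum of unwanted deviations
        from the cost, lead-time, and quality goals."""
        def penalty(s):
            d_cost = max(s["cost"] - targets["cost"], 0)        # over-cost is bad
            d_lead = max(s["lead"] - targets["lead"], 0)        # lateness is bad
            d_qual = max(targets["quality"] - s["quality"], 0)  # under-quality is bad
            return (weights["cost"] * d_cost + weights["lead"] * d_lead
                    + weights["quality"] * d_qual)
        return min(suppliers, key=penalty)
    ```

    In a full goal program the deviations become explicit decision variables in a linear program; the penalty structure (only deviations in the unwanted direction are charged) is the same.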

  2. The Bayesian statistical decision theory applied to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-11-01

    The difficulty in the RCM methodology is allocating a new periodicity of preventive maintenance for a piece of equipment once a critical failure has been identified: until now this allocation has been based on engineering judgment, and a full cycle of feedback experience is needed before it can be validated. Statistical decision theory could be a more rational alternative for optimizing the periodicity of preventive maintenance. This methodology has been applied to the inspection and maintenance optimization of cylinders of diesel generator engines in 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs
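
    A Bayesian alternative to waiting for a full feedback cycle can be sketched with the conjugate gamma-Poisson model for a failure rate: the prior encodes judgment, each operating interval updates it, and the maintenance periodicity follows from the posterior. The prior and target below are illustrative, not the study's numbers:

    ```python
    def posterior_failure_rate(alpha0, beta0, failures, exposure_hours):
        """Conjugate gamma-Poisson update: prior Gamma(alpha0, beta0) on the
        failure rate [1/h]; observe `failures` over `exposure_hours`."""
        return alpha0 + failures, beta0 + exposure_hours

    def max_interval(alpha, beta, target_expected_failures=0.1):
        """Longest inspection interval keeping posterior-mean expected
        failures per interval below the target."""
        rate = alpha / beta          # posterior mean failure rate
        return target_expected_failures / rate
    ```

    Each new batch of operating experience tightens the posterior, so the recommended interval can be revised continuously rather than once per feedback cycle.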

  3. Development of Decision-Making Automated System for Optimal Placement of Physical Access Control System’s Elements

    Science.gov (United States)

    Danilova, Olga; Semenova, Zinaida

    2018-04-01

    The objective of this study is a detailed analysis of the development of physical protection systems for information resources. The mathematical apparatus of optimization theory and decision-making is used to correctly formulate the problem and to create an algorithm for selecting the optimal configuration of a security system, considering the locations of the secured object's access points and zones. The result of this study is a software implementation scheme of a decision-making system for optimal placement of the physical access control system's elements.

  4. Weather Avoidance Using Route Optimization as a Decision Aid: An AWIN Topical Study. Phase 1

    Science.gov (United States)

    1998-01-01

    The aviation community is faced with reducing the fatal aircraft accident rate by 80 percent within 10 years. This must be achieved even with ever-increasing traffic and a changing National Airspace System. This is not just an altruistic goal, but a real necessity if our growing level of commerce is to continue. Honeywell Technology Center's topical study, "Weather Avoidance Using Route Optimization as a Decision Aid", addresses these pressing needs. The goal of this program is to use route optimization and user interface technologies to develop a prototype decision aid for dispatchers and pilots. This decision aid will suggest possible diversions through single or multiple weather hazards and present weather information with a human-centered design. At the conclusion of the program, we will have a laptop prototype decision aid that will be used to demonstrate concepts to industry for integration into commercialized products for dispatchers and/or pilots. With weather a factor in 30% of aircraft accidents, our program will prevent accidents by strategically avoiding weather hazards in flight. By supplying more relevant weather information in a human-centered format along with the tools to generate flight plans around weather, aircraft exposure to weather hazards can be reduced. Our program directly addresses NASA's five-year investment areas of Strategic Weather Information and Weather Operations (simulation/hazard characterization and crew/dispatch/ATC hazard monitoring, display, and decision support) (NASA Aeronautics Safety Investment Strategy: Weather Investment Recommendations, April 15, 1997). This program is comprised of two phases; Phase I concluded December 31, 1998. This first phase defined weather data requirements, lateral routing algorithms, and conceptual displays for a user-centered design. Phase II runs from January 1999 through September 1999. The second phase integrates vertical routing into the lateral optimizer and combines the user

  5. Inside the black box: Starting to uncover the underlying decision rules used in a one-by-one expert assessment of occupational exposure in case-control studies

    NARCIS (Netherlands)

    Wheeler, D.C.; Burstyn, I.; Vermeulen, R.; Yu, K.; Shortreed, S.M.; Pronk, A.; Stewart, P.A.; Colt, J.S.; Baris, D.; Karagas, M.R.; Schwenn, M.; Johnson, A.; Silverman, D.T.; Friesen, M.C.

    2013-01-01

    Objectives Evaluating occupational exposures in population-based case-control studies often requires exposure assessors to review each study participant's reported occupational information job-by-job to derive exposure estimates. Although such assessments likely have underlying decision rules, they

  6. 20 CFR 418.1355 - What are the rules for reopening a decision by an administrative law judge of the Office of...

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false What are the rules for reopening a decision by an administrative law judge of the Office of Medicare Hearings and Appeals (OMHA) or by the Medicare Appeals Council (MAC)? 418.1355 Section 418.1355 Employees' Benefits SOCIAL SECURITY ADMINISTRATION MEDICARE SUBSIDIES Medicare Part B...

  7. The Cat and the Pigeons: Some General Comments on (TP) Tax Rulings and State Aid After the Starbucks and Fiat Decisions

    NARCIS (Netherlands)

    Wattel, P.J.; Richelle, I.; Schön, W.; Traversa, E.

    2016-01-01

    The Commission State aid decisions on individual tax rulings have created legal uncertainty, which may have been one of their goals. This article comments on their political and policy merits and effects, it wonders whether EU law requires member States to have—and apply in a certain manner—specific

  8. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm, respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
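
    As a rough illustration of what a normative sample-size benchmark looks like, the sketch below (not the paper's actual model; the payoff probabilities and per-draw cost are invented) evaluates the expected payoff of deciding after n draws from each of two Bernoulli options and picks the maximizing n:

```python
from math import comb

# Illustrative sketch, not the paper's model: two options pay 1 with
# probability pA or pB (pA > pB) and 0 otherwise.  After n free draws from
# each option, the decision maker picks the one with the higher observed
# success count (ties broken at random); a small per-draw "effort" cost
# makes very large samples suboptimal.

def prob_correct(n, pA, pB):
    """Probability that option A shows the better sample mean after n draws."""
    pmf = lambda k, p: comb(n, k) * p**k * (1 - p)**(n - k)
    win = 0.0
    for ka in range(n + 1):
        for kb in range(n + 1):
            p = pmf(ka, pA) * pmf(kb, pB)
            if ka > kb:
                win += p
            elif ka == kb:
                win += 0.5 * p
    return win

def expected_payoff(n, pA=0.6, pB=0.4, cost=0.005):
    # payoff of 1 for choosing the better option, minus total sampling effort
    return prob_correct(n, pA, pB) - 2 * n * cost

optimal_n = max(range(1, 41), key=expected_payoff)
```

    With these illustrative numbers, the accuracy gained from further sampling eventually stops covering the sampling cost, which is exactly the trade-off an optimal sample size formalizes.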

  9. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    Science.gov (United States)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    In order to improve vehicle performance and safety, the state of charge (SOC) of a power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods, and drawing on open-circuit-voltage characteristics and the Kalman filter algorithm, we established a lithium battery SOC estimation method based on fuzzy optimal decision using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
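
    A minimal scalar Kalman update for SOC estimation can be sketched as follows. This is a deliberately simplified stand-in for the paper's T-S fuzzy approach: coulomb counting serves as the prediction step, a linearized open-circuit-voltage curve serves as the measurement model, and every numeric constant (capacity, noise covariances, OCV slope and offset) is invented for illustration.

```python
def kalman_soc_step(soc, P, current, v_meas, dt=1.0, cap=3600.0,
                    Q=1e-6, R=1e-3, ocv_slope=0.7, ocv0=3.2):
    """One scalar Kalman step: coulomb-counting prediction plus a
    linearized OCV(SOC) voltage measurement v = ocv0 + ocv_slope*soc."""
    # predict: SOC drops with the charge drawn during dt
    soc_pred = soc - current * dt / cap
    P_pred = P + Q
    # update with the measured terminal voltage
    H = ocv_slope
    K = P_pred * H / (H * H * P_pred + R)          # Kalman gain
    soc_new = soc_pred + K * (v_meas - (ocv0 + H * soc_pred))
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

    Fed a constant terminal voltage consistent with a true SOC, repeated updates pull an initially wrong estimate toward that value while the error covariance P shrinks.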

  10. Optimal Financing Decisions of Two Cash-Constrained Supply Chains with Complementary Products

    Directory of Open Access Journals (Sweden)

    Yuting Li

    2016-04-01

    Full Text Available In recent years, financing difficulties have beset small and medium enterprises (SMEs), especially emerging SMEs. Joint financing among members within a supply chain is one solution for SMEs. What about joint financing among members across supply chains? In order to answer this question, we first employ the Stackelberg game to propose three financing decision models for two cash-constrained supply chains with complementary products. Second, we analyze these models qualitatively and find that the joint financing decision of the two supply chains is the optimal one. Lastly, we conduct numerical simulations, which not only illustrate the above results but also show that the larger the cross-price sensitivity coefficients, the stronger the participants' motivation to make joint financing decisions, and the greater the profits they gain.

  11. Optimal Rule-Based Power Management for Online, Real-Time Applications in HEVs with Multiple Sources and Objectives: A Review

    Directory of Open Access Journals (Sweden)

    Bedatri Moulik

    2015-08-01

    Full Text Available The field of hybrid vehicles has undergone intensive research and development, primarily due to the increasing concern over depleting resources and increasing pollution. In order to investigate further options to optimize the performance of hybrid vehicles with regard to different criteria, such as fuel economy and battery aging, a detailed state-of-the-art review is presented in this contribution. Different power management and optimization techniques are discussed, focusing on rule-based power management and multi-objective optimization techniques. The extent to which rule-based power management and optimization can address battery aging issues is investigated, along with implementations in real-time driving scenarios where no pre-defined drive cycle is followed. The goal of this paper is to illustrate the significance and applications of rule-based power management optimization based on previous contributions.

  12. Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making.

    Directory of Open Access Journals (Sweden)

    Ritwik K Niyogi

    experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.

  13. A multi-criteria optimization and decision-making approach for improvement of food engineering processes

    Directory of Open Access Journals (Sweden)

    Alik Abakarov

    2013-04-01

    Full Text Available The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses, namely water loss, solute gain, rehydration ratio, three different colour criteria of rehydrated product, and sensory evaluation (organoleptic quality. Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP and the Tabular Method (TM, were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and adequate final compromised solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
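
    The starting point of the technique, a set of non-dominated (Pareto-optimal) solutions, can be illustrated with a minimal dominance filter. This generic sketch assumes all objectives are minimized; the study itself computes the set via the Aggregating Functions Approach with the Adaptive Random Search Algorithm.

```python
def pareto_front(points):
    """Indices of non-dominated points; every objective is minimized."""
    front = []
    for i, p in enumerate(points):
        # p is dominated if some other point is at least as good on all
        # objectives and strictly better on at least one
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

    A multi-criteria decision-making step such as AHP or the Tabular Method then picks one compromise solution from the surviving set.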

  14. A Dynamic Intelligent Decision Approach to Dependency Modeling of Project Tasks in Complex Engineering System Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2013-01-01

    Full Text Available Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments which can help to understand the implications of connectivity on different aspects of system performance and also assist in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks are either happening at the same time or scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach takes this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find out a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource constrained multiproject scheduling problem. Finally, a certain case from an engineering design of a chemical processing system is utilized to help to understand the proposed approach.

  15. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    Science.gov (United States)

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablets types (hydrophilic/lipid) using formulation composition, compression force used for tableting and tablets porosity and tensile strength as input data. Potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by direct compression method and tested for in vitro dissolution profiles. Optimization of static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or genetic algorithms optimizer. Decision trees were constructed following discretization of data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablets formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablets dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of Elman networks has been demonstrated. Developed methods allow a simple, yet very precise way of drug release predictions for both hydrophilic and lipid matrix tablets having controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    Science.gov (United States)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While the solution diversity in the objective space has been explored extensively in the literature, little attention has been given to the solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance calculated in the objective space would guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion beside the dominance check in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
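
    The selection idea can be sketched as follows, using each solution's nearest-neighbor distance in the decision space as a simple stand-in for the clustering described in the abstract (the function name and the weighting scheme are illustrative, not the authors'):

```python
import random

def select_diverse_parent(archive, rng=random):
    """Pick an archived solution, favoring sparse regions of the DECISION
    space: each solution is weighted by the Euclidean distance to its
    nearest archived neighbor (illustrative stand-in for clustering)."""
    def nearest_dist(i):
        return min(
            sum((a - b) ** 2 for a, b in zip(archive[i], archive[j])) ** 0.5
            for j in range(len(archive)) if j != i
        )
    weights = [nearest_dist(i) for i in range(len(archive))]
    return rng.choices(range(len(archive)), weights=weights, k=1)[0]
```

    Solutions sitting far from every other archived decision vector are sampled far more often, which is what steers the search toward under-explored regions of the decision space.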

  17. Making optimal investment decisions for energy service companies under uncertainty: A case study

    International Nuclear Information System (INIS)

    Deng, Qianli; Jiang, Xianglin; Zhang, Limao; Cui, Qingbin

    2015-01-01

    Varied initial energy efficiency investments would result in different annual energy savings achievements. In order to balance the savings revenue and the potential capital loss through EPC (Energy Performance Contracting), a cost-effective investment decision is needed when selecting energy efficiency technologies. In this research, an approach is developed for the ESCO (Energy Service Company) to evaluate the potential energy savings profit, and thus make the optimal investment decisions. The energy savings revenue under uncertainties, which are derived from energy efficiency performance variation and energy price fluctuation, are first modeled as stochastic processes. Then, the derived energy savings profit is shared by the owner and the ESCO according to the contract specification. A simulation-based model is thus built to maximize the owner's profit, and at the same time, satisfy the ESCO's expected rate of return. In order to demonstrate the applicability of the proposed approach, the University of Maryland campus case is also presented. The proposed method could not only help the ESCO determine the optimal energy efficiency investments, but also assist the owner's decision in the bidding selection. - Highlights: • An optimization model is built for determining energy efficiency investment for ESCO. • Evolution of the energy savings revenue is modeled as a stochastic process. • Simulation is adopted to calculate investment balancing the owner and the ESCO's profit. • A campus case is presented to demonstrate applicability of the proposed approach

  18. Merit-Based Incentive Payment System: Meaningful Changes in the Final Rule Brings Cautious Optimism.

    Science.gov (United States)

    Manchikanti, Laxmaiah; Helm Ii, Standiford; Calodney, Aaron K; Hirsch, Joshua A

    2017-01-01

    The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) eliminated the flawed Sustainable Growth Rate (SGR) formula - a longstanding crucial issue of concern for health care providers and Medicare beneficiaries. MACRA also included a quality improvement program entitled "The Merit-Based Incentive Payment System," or MIPS. The proposed rule of MIPS sought to streamline existing federal quality efforts and therefore linked 4 distinct programs into one. Three existing programs, meaningful use (MU), the Physician Quality Reporting System (PQRS), and the value-based payment (VBP) system, were merged with the addition of the Clinical Improvement Activity category. The proposed rule also changed the name of MU to Advancing Care Information, or ACI. ACI contributes 25% of the composite score of the four programs, PQRS contributes 50%, while the VBP system, which deals with resource use or cost, contributes 10%. The newest category, Improvement Activities or IA, contributes 15% to the composite score. The proposed rule also created what it called a design incentive that drives movement toward delivery system reform principles with the inclusion of Advanced Alternative Payment Models (APMs). Following the release of the proposed rule, the medical community, as well as Congress, provided substantial input to the Centers for Medicare and Medicaid Services (CMS), expressing their concern. The American Society of Interventional Pain Physicians (ASIPP) focused on 3 important aspects: delay the implementation, provide a 3-month performance period, and provide the ability to submit meaningful quality measures in a timely and economic manner. The final rule accepted many of the comments from various organizations, including several of those specifically emphasized by ASIPP, with acceptance of the 3-month reporting period, as well as the ability to submit non-MIPS measures to improve real quality and make the system meaningful. CMS also provided a mechanism for

  19. A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning

    Science.gov (United States)

    Basdekas, L.; Stewart, N.; Triana, E.

    2013-12-01

    Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system, coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo-informed or climate-change-informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models does not require parameterization, which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU

  20. Preferences of the Central Reserve Bank of Peru and optimal monetary rules in the inflation targeting regime

    Directory of Open Access Journals (Sweden)

    Nilda Mercedes Cabrera Pasca

    2012-03-01

    Full Text Available This study aims to identify the preferences of the monetary authority in the Peruvian inflation targeting regime through the derivation of optimal monetary rules. To achieve that, we used a calibration strategy based on choosing the values of the preference parameters that minimize the squared deviation between the observed interest rate and the optimal simulated interest rate. The results showed that the monetary authority has applied a system of flexible inflation targeting, prioritizing the stabilization of inflation, but without disregarding gradualism in interest rates. On the other hand, concern over output stabilization has been minimal, revealing that the output gap has been important because it contains information about future inflation and not because it is considered a goal variable in itself. Finally, when the smoothing of the nominal exchange rate is considered in the loss function of the monetary authority, the rank order of preferences has been maintained and the smoothing of the exchange rate proved insignificant.

  1. Optimal reliability design for over-actuated systems based on the MIT rule: Application to an octocopter helicopter testbed

    International Nuclear Information System (INIS)

    Chamseddine, Abbas; Theilliol, Didier; Sadeghzadeh, Iman; Zhang, Youmin; Weber, Philippe

    2014-01-01

    This paper addresses the problem of optimal reliability in over-actuated systems. Overloading an actuator decreases its overall lifetime and reduces its average performance over a long time. Therefore, performance and reliability are two conflicting requirements. While appropriate reliability is related to average loads, good performance is related to fast response and sufficient loads generated by actuators. Actuator redundancy allows us to address both performance and reliability at the same time by properly allocating desired loads among redundant actuators. The main contribution of this paper is the on-line optimization of the overall plant reliability according to performance objective using an MIT (Massachusetts Institute of Technology) rule-based method. The effectiveness of the proposed method is illustrated through an experimental application to an octocopter helicopter testbed
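
    The MIT rule referred to here is the classic gradient adaptation law theta <- theta - gamma * e * (de/dtheta). A minimal discrete-time sketch on a toy first-order plant follows; the gains and excitation signal are illustrative, not the octocopter application.

```python
def mit_rule_adapt(k_p=2.0, k_m=1.0, gamma=0.01, steps=2000):
    """Adapt a feedforward gain theta so the plant output y = k_p*theta*u
    tracks the reference model y_m = k_m*u, using the MIT rule
    theta <- theta - gamma * e * de/dtheta, with e = y - y_m and
    de/dtheta = k_p * u.  All gains here are illustrative."""
    theta = 0.0
    for t in range(steps):
        u = 1.0 if (t // 50) % 2 == 0 else -1.0   # square-wave excitation
        e = k_p * theta * u - k_m * u             # tracking error
        theta -= gamma * e * k_p * u              # MIT gradient update
    return theta                                  # approaches k_m / k_p = 0.5
```

    In the paper this gradient update drives load allocation among redundant actuators rather than a single gain, but the adaptation mechanism has the same form.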

  2. A PMBGA to Optimize the Selection of Rules for Job Shop Scheduling Based on the Giffler-Thompson Algorithm

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2012-01-01

    Full Text Available Most existing research on the job shop scheduling problem has been focused on the minimization of makespan (i.e., the completion time of the last job. However, in the fiercely competitive market nowadays, delivery punctuality is more important for maintaining a high service reputation. So in this paper, we aim at solving job shop scheduling problems with the total weighted tardiness objective. Several dispatching rules are adopted in the Giffler-Thompson algorithm for constructing active schedules. It is noticeable that the rule selections for scheduling consecutive operations are not mutually independent but actually interrelated. Under such circumstances, a probabilistic model-building genetic algorithm (PMBGA is proposed to optimize the sequence of selected rules. First, we use Bayesian networks to model the distribution characteristics of high-quality solutions in the population. Then, the new generation of individuals is produced by sampling the established Bayesian network. Finally, some elitist individuals are further improved by a special local search module based on parameter perturbation. The superiority of the proposed approach is verified by extensive computational experiments and comparisons.
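
    The model-building loop can be illustrated with a deliberately simplified univariate (PBIL-style) version. The paper's PMBGA learns a full Bayesian network that captures dependencies between consecutive rule choices, whereas this sketch treats positions as independent; the parameters and fitness function are placeholders.

```python
import random

def pbil_rule_sequence(fitness, n_rules=4, seq_len=10, pop=30, iters=60,
                       lr=0.2, rng=None):
    """Simplified univariate stand-in for a Bayesian-network PMBGA:
    maintain a probability table over which dispatching rule to apply at
    each scheduling position, sample a population of rule sequences, and
    shift the table toward the best sequence found (PBIL-style)."""
    rng = rng or random.Random(0)
    prob = [[1.0 / n_rules] * n_rules for _ in range(seq_len)]
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        samples = [[rng.choices(range(n_rules), weights=prob[i])[0]
                    for i in range(seq_len)] for _ in range(pop)]
        leader = max(samples, key=fitness)
        if fitness(leader) > best_fit:
            best, best_fit = leader, fitness(leader)
        for i, r in enumerate(leader):    # nudge probabilities toward leader
            for k in range(n_rules):
                prob[i][k] = (1 - lr) * prob[i][k] + lr * (1.0 if k == r else 0.0)
    return best
```

    In the scheduling setting, `fitness` would decode the rule sequence through the Giffler-Thompson procedure and return the negated total weighted tardiness; here it is left abstract.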

  3. Preventive maintenance: optimization of time - based discard decisions at the bruce nuclear generating station

    International Nuclear Information System (INIS)

    Doyle, E.K.; Jardine, A.K.S.

    2001-01-01

    The use of various maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems. As previously reported at ICONE 6 in New Orleans, 1996, several innovative practices reduced Reliability Centered Maintenance costs while maintaining the accuracy of the analysis. The optimization strategy has undergone further evolution, and at present an Integrated Maintenance Program (IMP) is in place in which an Expert Panel consisting of all players/experts proceeds through each system in a disciplined fashion and reaches agreement on all items under a rigorous time frame. It is well known that essentially 3 maintenance-based actions can flow from a Maintenance Optimization Analysis: condition-based maintenance, time-based maintenance, and time-based discard. The present effort deals with time-based discard decisions. Maintenance data from the Remote On-Power Fuel Changing System was used. (author)

  4. A complex systems approach to planning, optimization and decision making for energy networks

    International Nuclear Information System (INIS)

    Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim

    2008-01-01

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock

  5. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction is suffering due to the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under unpractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
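
    Reliability in this sense, the probability that a decision scheme stays feasible under stochastic parameters, can be estimated by plain Monte Carlo simulation. The sketch below is generic: the distributions, thresholds, and function name are invented for illustration, and the study couples such estimates with a multi-objective evolutionary search rather than evaluating a single scheme.

```python
import random

def scheme_reliability(reduction, cost_budget=100.0, target_load=50.0,
                       n_sims=20000, rng=None):
    """Monte Carlo estimate of the probability that a load-reduction
    scheme meets both the water-quality target and the budget when the
    unit cost and baseline load are stochastic (all numbers invented)."""
    rng = rng or random.Random(42)
    ok = 0
    for _ in range(n_sims):
        unit_cost = rng.gauss(1.0, 0.1)     # stochastic unit abatement cost
        baseline = rng.gauss(80.0, 5.0)     # stochastic baseline load
        if (baseline - reduction <= target_load
                and reduction * unit_cost <= cost_budget):
            ok += 1
    return ok / n_sims
```

    Evaluating reliability this way for each candidate scheme is what allows the Pareto front to trade off cost against the probability of actually meeting the load-reduction requirement.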

  7. Optimization of warehouse location through fuzzy multi-criteria decision making methods

    Directory of Open Access Journals (Sweden)

    C. L. Karmaker

    2015-07-01

    Full Text Available Strategic warehouse location-allocation is a multi-staged decision-making problem involving both numerical and qualitative criteria. In order to survive in the global business scenario by improving supply chain performance, companies must examine the cross-functional drivers in the optimization of logistic systems. A meticulous observation makes evident that strategic warehouse location selection has become challenging as the number of alternatives and conflicting criteria increases. The issue becomes particularly problematic when conventional methods are applied to the imprecise nature of linguistic assessments. The qualitative decisions in the selection process are often complicated by the fact that they are imprecise for the decision makers. Such problems must be overcome with defined efforts. Fuzzy multi-criteria decision making methods have been used in this research as aids in making location-allocation decisions. The methods proposed in this research consist of two steps at their core. In the first step, the criteria of the existing problem are inspected and identified, and then the weights of the sector and subsector are determined by using Fuzzy AHP. In the second step, eligible alternatives are ranked comparatively by using TOPSIS and Fuzzy TOPSIS. A demonstration of the application of these methodologies to a real life problem is presented.
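
    The ranking step can be sketched with plain (crisp) TOPSIS. The paper applies fuzzy TOPSIS to linguistic assessments, but the closeness-coefficient computation has the same shape; the example weights and scores below are invented.

```python
def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: returns one closeness coefficient in [0, 1] per
    alternative (higher is better).  matrix[i][j] scores alternative i
    on criterion j; benefit[j] is True if criterion j is maximized."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weight
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # ideal and anti-ideal points over the weighted, normalized scores
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    dist = lambda row, ref: sum((row[j] - ref[j]) ** 2 for j in range(n)) ** 0.5
    return [dist(v[i], anti) / (dist(v[i], anti) + dist(v[i], ideal))
            for i in range(m)]
```

    For warehouse selection, the rows would be candidate sites, the columns criteria such as cost and accessibility, and the weights would come from the Fuzzy AHP step.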

  8. Rule of five in 2015 and beyond: Target and ligand structural limitations, ligand chemistry structure and drug discovery project decisions.

    Science.gov (United States)

    Lipinski, Christopher A

    2016-06-01

    The rule of five (Ro5), based on physicochemical profiles of phase II drugs, is consistent with structural limitations in protein targets and the drug target ligands. Three of the four parameters in Ro5 are fundamental to the structure of both target and drug binding sites. The chemical structure of the drug ligand depends on the ligand chemistry and design philosophy. Two extremes of chemical structure and design philosophy exist: ligands constructed in the medicinal chemistry synthesis laboratory without input from natural selection, and natural product (NP) metabolites biosynthesized under evolutionary selection. Exceptions to Ro5 are found mostly among NPs. Chameleon-like chemical behavior of some NPs, due to intra-molecular hydrogen bonding as exemplified by cyclosporine A, is a strong contributor to NP Ro5 outliers. The fragment-derived drug Navitoclax is an example of the extensive expertise, resources, time and key decisions required for the rare discovery of a non-NP Ro5 outlier. Copyright © 2016 Elsevier B.V. All rights reserved.
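
    The rule itself is simple enough to state as code. A sketch of the classic Ro5 check using the standard cutoffs; the property values in the example are illustrative, not measured data for any real compound.

```python
def ro5_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's rule-of-five cutoffs."""
    return sum([mol_weight > 500,     # molecular weight <= 500 Da
                logp > 5,             # calculated logP <= 5
                h_donors > 5,         # hydrogen-bond donors <= 5
                h_acceptors > 10])    # hydrogen-bond acceptors <= 10

def passes_ro5(**props):
    # Conventionally, a compound is flagged only when it breaks
    # two or more of the four rules.
    return ro5_violations(**props) <= 1

print(passes_ro5(mol_weight=350, logp=2.1, h_donors=2, h_acceptors=5))   # True
print(passes_ro5(mol_weight=1200, logp=4.0, h_donors=5, h_acceptors=12)) # False
```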

  9. The role of situation assessment and flight experience in pilots' decisions to continue visual flight rules flight into adverse weather.

    Science.gov (United States)

    Wiegmann, Douglas A; Goh, Juliana; O'Hare, David

    2002-01-01

    Visual flight rules (VFR) flight into instrument meteorological conditions (IMC) is a major safety hazard in general aviation. In this study we examined pilots' decisions to continue or divert from a VFR flight into IMC during a dynamic simulation of a cross-country flight. Pilots encountered IMC either early or later into the flight, and the amount of time and distance pilots flew into the adverse weather prior to diverting was recorded. Results revealed that pilots who encountered the deteriorating weather earlier in the flight flew longer into the weather prior to diverting and had more optimistic estimates of weather conditions than did pilots who encountered the deteriorating weather later in the flight. Both the time and distance traveled into the weather prior to diverting were negatively correlated with pilots' previous flight experience. These findings suggest that VFR flight into IMC may be attributable, at least in part, to poor situation assessment and experience rather than to motivational judgment that induces risk-taking behavior as more time and effort are invested in a flight. Actual or potential applications of this research include the design of interventions that focus on improving weather evaluation skills in addition to addressing risk-taking attitudes.

  10. Using Decision-Analytic Modeling to Isolate Interventions That Are Feasible, Efficient and Optimal: An Application from the Norwegian Cervical Cancer Screening Program.

    Science.gov (United States)

    Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø

    2015-12-01

    Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Before an efficient and feasible strategy can be considered optimal, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness. Copyright © 2015. Published by Elsevier Inc.
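
    The dominance screening behind such efficiency frontiers can be sketched as follows. The strategy names, costs and CIN2+ counts here are invented for illustration, not taken from the study.

```python
def efficiency_frontier(strategies):
    """Drop strongly dominated strategies (another strategy is at least
    as cheap and detects at least as many CIN2+), then compute incremental
    cost-effectiveness ratios (ICERs) along the cost-sorted frontier."""
    frontier = [s for s in strategies
                if not any(o["cost"] <= s["cost"] and o["cin2"] >= s["cin2"]
                           and o != s for o in strategies)]
    frontier.sort(key=lambda s: s["cost"])
    icers = [(b["cost"] - a["cost"]) / (b["cin2"] - a["cin2"])
             for a, b in zip(frontier, frontier[1:])]
    return frontier, icers

strategies = [{"name": "A", "cost": 100, "cin2": 10},
              {"name": "B", "cost": 150, "cin2": 12},
              {"name": "C", "cost": 140, "cin2": 9}]   # dominated by A
frontier, icers = efficiency_frontier(strategies)
```

    A full analysis would also screen for extended dominance (a strategy beaten by a mix of two others) and repeat the exercise with colposcopies in place of monetary cost for the harm-benefit frontier.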

  11. Review of models and actors in energy mix optimization – can leader visions and decisions align with optimum model strategies for our future energy systems?

    NARCIS (Netherlands)

    Weijermars, R.; Taylor, P.; Bahn, O.; Das, S.R.; Wei, Y.M.

    2011-01-01

    Organizational behavior and stakeholder processes continually influence energy strategy choices and decisions. Although theoretical optimizations can provide guidance for energy mix decisions from a pure physical systems engineering point of view, these solutions might not be optimal from a

  12. Optimal selling rules for monetary invariant criteria: tracking the maximum of a portfolio with negative drift

    OpenAIRE

    Elie, Romuald; Espinosa, Gilles-Edouard

    2013-01-01

    Considering a positive portfolio diffusion $X$ with negative drift, we investigate optimal stopping problems of the form $$\inf_\theta \mathbb{E}\left[f\left(\frac{X_\theta}{\sup_{s\in[0,\tau]} X_s}\right)\right],$$ where $f$ is a non-increasing function, $\tau$ is the next random time at which the portfolio $X$ crosses zero, and $\theta$ is any stopping time smaller than $\tau$. Our motivation here is to obtain an optimal selling strategy minimizing the relative distance between the liquidation val...

  13. Trading river services: optimizing dam decisions at the basin scale to improve socio-ecological resilience

    Science.gov (United States)

    Roy, S. G.; Gold, A.; Uchida, E.; McGreavy, B.; Smith, S. M.; Wilson, K.; Blachly, B.; Newcomb, A.; Hart, D.; Gardner, K.

    2017-12-01

    Dam removal has become a cornerstone of environmental restoration practice in the United States. One outcome of dam removal that has received positive attention is restored access to historic habitat for sea-run fisheries, providing a crucial gain in ecosystem resilience. But dams also provide stakeholders with valuable services, and uncertain socio-ecological outcomes can arise without careful consideration of the basin-scale trade-offs caused by dam removal. In addition to fisheries, dam removals can significantly affect landscape nutrient flux, municipal water storage, recreational use of lakes and rivers, property values, hydroelectricity generation, the cultural meaning of dams, and many other river-based ecosystem services. We use a production possibility frontiers approach to explore dam decision scenarios and opportunities for trading between ecosystem services that are positively or negatively affected by dam removal in New England. Scenarios that provide efficient trade-off potentials are identified using a multiobjective genetic algorithm. Our results suggest that for many river systems, there is significant potential to increase the value of fisheries and other ecosystem services with minimal dam removals, and further increases are possible by including decisions related to dam operations and physical modifications. Run-of-river dams located near the head of tide are often found to be optimal for removal due to low hydroelectric capacity and high impact on fisheries. Conversely, dams with large impoundments near a river's headwaters can be less optimal for removal because their value as nitrogen sinks often outweighs their potential value for fisheries. Hydropower capacity is negatively impacted by dam removal, but there are opportunities to meet or exceed lost capacity by upgrading preserved hydropower dams. Improving fish passage facilities at dams that are critical for safety or water storage can also reduce impacts on fisheries. Our

  14. The decision optimization of product development by considering the customer demand saturation

    Directory of Open Access Journals (Sweden)

    Qing-song Xing

    2015-05-01

    Full Text Available Purpose: The purpose of this paper is to analyze the impact of over-satisfying customer demands on the product development process, on the basis of quantitative models of customer demands, development cost and time, and to propose a corresponding product development optimization decision. Design/methodology/approach: First, customer demand information is obtained through surveys, and the weights of customer demands are quantified using the variation coefficient method. Second, the relationship between customer demands and product development time and cost is analyzed based on quality function deployment, and a corresponding mathematical model is established. On this basis, the concept of customer demand saturation and an optimization decision method for product development are put forward and applied to the notebook development process of a company. Finally, when customer demand is saturated, the consistency between the order in which demands are satisfied and the demands customers rate most important is verified, along with the stability of customer demand saturation under different parameters. Findings: Development cost and time rise sharply when customer demands are over-satisfied. By taking customer demand saturation into account, the relationship between customer demand and development time and cost is quantified and balanced, and the sequence in which customer demands are met is largely consistent with the customer demand survey results. Originality/value: The paper proposes a model of customer demand saturation and demonstrates the correctness and effectiveness of the resulting product development decision method.
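
    The variation coefficient weighting mentioned in the abstract can be sketched directly: demands whose survey scores vary more across respondents discriminate more between customers and therefore receive larger weights. The demand names and survey scores below are made up for illustration.

```python
import statistics

def cv_weights(scores_by_demand):
    """Weight each customer demand by its coefficient of variation
    (population std / mean) across respondents, normalized to sum to one."""
    cvs = {d: statistics.pstdev(v) / statistics.mean(v)
           for d, v in scores_by_demand.items()}
    total = sum(cvs.values())
    return {d: c / total for d, c in cvs.items()}

weights = cv_weights({"battery life": [5, 5, 5, 5],   # unanimous -> weight 0
                      "weight": [1, 5, 1, 5]})        # divisive -> all weight
```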

  15. OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.

    Science.gov (United States)

    Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein

    2018-01-01

    Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may affect the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed-methods design was used, including a heuristic evaluation, a thinking-aloud approach, and a questionnaire testing and exploring the usability of the Return on Investment (ROI) tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire. For the heuristic evaluation, four experts evaluated the interface. In total, twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, of which twenty-six were categorized as most severe, indicating a high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended, as these were shown to identify unique usability problems. The evaluation provides input to optimize the usability of a decision-support tool, and may serve as a vantage point for other developers conducting usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.

  16. A set of rules for constructing an admissible set of D optimal exact ...

    African Journals Online (AJOL)

    In the search for a D-optimal exact design using the combinatorial iterative technique introduced by Onukogu and Iwundu, 2008, all the support points that make up the experimental region are grouped into H concentric balls according to their distances from the centre. Any selection of N support points from the balls defines ...

  17. Optimization of urban water supply portfolios combining infrastructure capacity expansion and water use decisions

    Science.gov (United States)

    Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.

    2015-12-01

    The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies require a strategic combination of supply sources for added reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources involve decisions on what to expand and when, and how much of each available source to use, accounting for interest rates, economies of scale and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining short-term and long-term optimization of water use with simulation of water allocation. A case study in Bahia Do Rio Dos Sinos in Southern Brazil is presented. The framework couples a quadratic programming optimization model in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of further system expansion. Results also highlight the potential of various water supply alternatives over time, including conservation, groundwater, and infrastructural enhancements. The framework proves useful for planning and is transferable to similarly urbanized systems.

  18. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    Science.gov (United States)

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.
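
    The multinomial logit at the heart of the paper's heuristic maps each operator's perceived disutility to a choice probability. A minimal sketch; the disutility values and the scale parameter are illustrative, not taken from the paper.

```python
import math

def logit_shares(disutilities, theta=1.0):
    """Multinomial-logit choice probabilities: lower perceived disutility
    yields a higher market share; theta scales users' sensitivity."""
    expo = [math.exp(-theta * d) for d in disutilities]
    total = sum(expo)
    return [e / total for e in expo]

# Two competing logistics operators; the one perceived as less costly
# attracts roughly 73% of users at theta = 1.
shares = logit_shares([1.0, 2.0])
```

    In the three-level model, these shares would feed back into operators' fare and frequency choices, which is what makes the equilibrium computation iterative.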

  19. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    Directory of Open Access Journals (Sweden)

    Dezhi Zhang

    2014-01-01

    Full Text Available This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users’ demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators’ service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.

  20. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    Science.gov (United States)

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
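
    The threshold-dependent speed-accuracy tradeoff, and the reward rate it implies, can be illustrated with a small simulation. All parameter values are arbitrary; this is a sketch of the general DDM mechanism, not the fitted model from the study.

```python
import random

def ddm_trial(drift, threshold, dt=0.002, rng=random):
    """One drift-diffusion trial from a neutral starting point; returns
    (correct, decision_time). Correct responses hit the upper bound."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, t

def reward_rate(drift, threshold, rsi, n=500, seed=1):
    """Empirical correct responses per unit time, counting a fixed
    response-stimulus interval (RSI) after every trial."""
    rng = random.Random(seed)
    correct = total_time = 0.0
    for _ in range(n):
        c, t = ddm_trial(drift, threshold, rng=rng)
        correct += c
        total_time += t + rsi
    return correct / total_time

# A very high threshold buys accuracy but wastes time, so reward rate drops;
# the optimal threshold sits in between and shifts with the RSI.
low, high = reward_rate(1.0, 0.3, rsi=1.0), reward_rate(1.0, 3.0, rsi=1.0)
```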

  1. Electricity Purchase Optimization Decision Based on Data Mining and Bayesian Game

    Directory of Open Access Journals (Sweden)

    Yajing Gao

    2018-04-01

    Full Text Available The openness of the electricity retail market means that power retailers face fierce competition. This article analyzes the electricity purchase optimization decision-making of each power retailer against the background of the big data era. First, to guide the power retailer's electricity purchasing, this paper draws on users' historical electricity consumption data together with multiple influencing factors, and uses a wavelet neural network (WNN) model based on the “meteorological similarity day” (MSD) to forecast user load demand. Second, to guide the power retailer's quotation, this paper uses the multiple factors affecting the electricity price to cluster the sample set, and establishes a genetic algorithm-back propagation (GA-BP) neural network model based on fuzzy clustering (FC) to predict the short-term market clearing price (MCP). Third, based on the Sealed-bid Auction (SA) of game theory, a Bayesian Game Model (BGM) of the power retailer's bidding strategy is constructed, and the optimal bidding strategy is obtained from the Bayesian Nash Equilibrium (BNE) under different probability distributions. Finally, a practical example is presented to show that the model and method can provide an effective reference for the power retailer's decision-making optimization.

  2. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    Science.gov (United States)

    Zhang, Dezhi; Li, Shuangyan

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209

  3. Impact of the New Optional Rules for Arbitration of Disputes Relating to Space Debris Controversies

    Science.gov (United States)

    Force, Melissa K.

    2013-09-01

    The mechanisms and procedures for settlement of disputes arising from space debris collision damage, such as that suffered by the Russian Cosmos and US Iridium satellites in 2009, are highly political, nonbinding and unpredictable - all of which contributes to the uncertainty that increases the costs of financing and insuring those endeavors that take place in near-Earth space, especially in Low Earth Orbit. Dispute settlement mechanisms can be found in the 1967 Outer Space Treaty, which provides for consultations in cases involving potentially harmful interference with activities of States parties, and in the 1972 Liability Convention, which permits but does not require States - not non-governmental entities - to pursue claims in a resolution process that is nonbinding (unless otherwise agreed). There are soft-law mechanisms to control the growth of space debris, such as the voluntary 2008 United Nations Space Debris Mitigation Guidelines, and international law and the principles of equity and justice generally provide reparation to restore a person, State or organization to the condition which would have existed had the damage not occurred, but only if all agree to a specific tribunal or international court; even then, parties may be bound by the result only if agreed, and international enforcement of the award remains uncertain. In all, the dispute resolution process for inevitable future damage from space debris collisions is highly unsatisfactory. However, the Administrative Council of the Permanent Court of Arbitration's recently adopted Optional Rules for the Arbitration of Disputes Relating to Outer Space Activities are as yet untested, and this article provides an overview of the process, explores the ways in which the Rules fill gaps in the previous patchwork of systems, and analyzes the benefits and shortcomings of the new Outer Space Optional Rules.

  4. Optimization of Aeroengine Shop Visit Decisions Based on Remaining Useful Life and Stochastic Repair Time

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2016-01-01

    Full Text Available Considering the wide application of condition-based maintenance in aeroengine maintenance practice, it becomes possible for aeroengines to undergo preventive maintenance in a just-in-time (JIT) manner by reasonably planning their shop visits (SVs). In this study, an approach is proposed to make aeroengine SV decisions following the concept of JIT. Firstly, a state space model (SSM) for the aeroengine based on exhaust gas temperature margin is developed to predict the remaining useful life (RUL) of the aeroengine. Secondly, the effect of SV decisions on risk and service level (SL) is analyzed, and the aeroengine SV decisions are optimized based on RUL and stochastic repair time to achieve the JIT manner while meeting safety and SL requirements. Finally, a case study considering two CFM-56 aeroengines is presented to demonstrate the proposed approach. The results show that the predictive accuracy of RUL with the SSM is higher than with linear regression, and that the SV decision process is simple and feasible for airlines seeking to improve the inventory management level of their aeroengines.

  5. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.

  6. Study on optimized decision-making model of offshore wind power projects investment

    Science.gov (United States)

    Zhao, Tian; Yang, Shangdong; Gao, Guowei; Ma, Li

    2018-02-01

    China’s offshore wind energy has great potential and plays an important role in promoting the adjustment of China’s energy structure. However, the current development of offshore wind power in China is inadequate, lagging far behind that of onshore wind power. On the basis of considering the various risks faced by offshore wind power development, an optimized model of offshore wind power investment decision-making is established in this paper by proposing a risk-benefit assessment method. To prove the practicability of this method in improving the selection of wind power projects, Python programming is used to simulate the investment analysis of a large number of projects. The paper thereby provides decision-making support for the sound development of the offshore wind power industry.

  7. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered to reduce transport costs. The cost of transporting CPO from one area to a CPO factory located at the port of destination may affect the price of CPO and the quantity demanded. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategic decision support in selecting the kind of CPO ship by minimizing transport costs. Because the selection involves randomness, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
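
    A toy version of the two-stage structure, with made-up charter terms and demand scenarios (the paper's actual formulation, data, and solution method are not reproduced): the charter choice is the first-stage decision, and in each demand scenario any CPO beyond the ship's capacity is assumed to move at a spot rate in the second stage.

```python
def best_ship(ships, scenarios, spot_rate):
    """First stage: commit to one charter. Second stage: in each demand
    scenario, overflow beyond capacity ships at the spot rate.
    Choose the charter minimizing expected total cost."""
    def expected_cost(s):
        return s["charter"] + sum(
            prob * (s["unit"] * min(demand, s["capacity"])
                    + spot_rate * max(demand - s["capacity"], 0))
            for prob, demand in scenarios)
    return min(ships, key=expected_cost)

ships = [{"name": "barge", "charter": 100, "unit": 1.0, "capacity": 50},
         {"name": "chemical tanker", "charter": 180, "unit": 0.8,
          "capacity": 120}]
scenarios = [(0.5, 40), (0.5, 100)]     # (probability, CPO demand)
choice = best_ship(ships, scenarios, spot_rate=3.0)
```

    With these numbers the cheaper barge wins despite its spot-rate overflow in the high-demand scenario; shifting the scenario probabilities or the spot rate can flip the decision, which is exactly what the stochastic formulation captures.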

  8. A note on “An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems”

    OpenAIRE

    R. Venkata Rao

    2012-01-01

    A paper published by Maniya and Bhatt (2011) (An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems, Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method named as “Preference Selection Index (PSI) method” for selection of an optimal facility layout design. The authors had claimed that the method was logical and more appropriate and the method gives directly the o...

  9. Optimal decisions and comparison of VMI and CPFR under price-sensitive uncertain demand

    Directory of Open Access Journals (Sweden)

    Yasaman Kazemi

    2013-06-01

    Full Text Available Purpose: The purpose of this study is to compare the performance of two advanced supply chain coordination mechanisms, Vendor Managed Inventory (VMI) and Collaborative Planning, Forecasting and Replenishment (CPFR), in a price-sensitive uncertain demand environment, and to make the optimal decisions on retail price and order quantity for both mechanisms. Design/methodology/approach: Analytical models are first applied to formulate a profit maximization problem; simulation optimization procedures are then used to obtain the optimal decisions and performance comparisons. Findings: The results of the case study support the widely held view that more advanced coordination mechanisms yield greater supply chain profit than less advanced ones. Information sharing not only increases the supply chain profit, but is also required for the coordination mechanisms to achieve improved performance. Research limitations/implications: This study considers a single vendor and a single retailer in order to simplify the supply chain structure for modeling. Practical implications: Knowledge obtained from this study about the conditions appropriate for each specific coordination mechanism and the exact functions of coordination programs is critical to managerial decisions for industry practitioners who may apply the coordination mechanisms considered. Originality/value: This study includes the production cost in Economic Order Quantity (EOQ) equations and combines it with price-sensitive demand under stochastic settings while comparing the VMI and CPFR supply chain mechanisms and maximizing total profit. Although many studies have addressed information sharing within the supply chain, determining the performance measures when demand is price-sensitive and stochastic had not been reported in the literature.
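
    The interplay the abstract describes, EOQ-style inventory cost combined with price-sensitive demand, can be sketched with a deterministic toy model (linear demand, grid search over price). The paper's stochastic setting and the VMI/CPFR mechanics are not reproduced; all parameter values below are illustrative.

```python
import math

def optimize_price_and_quantity(a, b, c, K, h):
    """Linear price-sensitive demand D(p) = a - b*p. For each candidate
    retail price, order the classic EOQ quantity and keep the price that
    maximizes profit = margin*D - ordering cost - holding cost."""
    best = None
    p = c + 0.01
    while a - b * p > 0:                 # only prices with positive demand
        demand = a - b * p
        q = math.sqrt(2 * K * demand / h)            # EOQ order quantity
        profit = (p - c) * demand - K * demand / q - h * q / 2
        if best is None or profit > best["profit"]:
            best = {"price": round(p, 2), "quantity": round(q, 1),
                    "profit": profit}
        p += 0.01
    return best

best = optimize_price_and_quantity(a=100, b=2, c=10, K=50, h=1.0)
```

    Under a coordination mechanism such as VMI or CPFR, the vendor and retailer would jointly evaluate this profit function (with shared demand information) instead of each optimizing its own piece.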

  10. Decision-Making Approach to Selecting Optimal Platform of Service Variants

    Directory of Open Access Journals (Sweden)

    Vladimir Modrak

    2016-01-01

    Full Text Available Nowadays, it is anticipated that service sector companies will be inspired to follow the mass customization trends of the industrial sector. However, services are more abstract than products, and therefore concepts for mass customization in the manufacturing domain cannot be transferred without methodical change. This paper focuses on the development of a methodological framework to support decisions in the selection of an optimal platform of service variants when compatibility problems between service options occur. The approach is based on mutual relations between waste and constrained design space entropy. For this purpose, software for quantification of constrained and waste design space is developed. The practicability of the methodology is demonstrated on a realistic case.

  11. A Modified Bird-Mating Optimization with Hill-Climbing for Connection Decisions of Transformers

    Directory of Open Access Journals (Sweden)

    Ting-Chia Ou

    2016-08-01

    Full Text Available This paper endeavors to apply a hybrid bird-mating optimization approach to connection decisions of distribution transformers. It is expected that, with the aid of the hybrid bird-mating approach, voltage imbalance and deviation can be mitigated, hence ensuring a satisfactory power supply more effectively. To evaluate the effectiveness of this method, it has been tested on practical distribution systems with comparisons to other methods. Test results confirm the feasibility of the approach, serving as a beneficial reference for the improvement of electric power grid operations.

  12. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others:AVČR a CONACyT(CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  13. Optimal Financing Order Decisions of a Supply Chain under the Retailer's Delayed Payment

    Directory of Open Access Journals (Sweden)

    Honglin Yang

    2014-01-01

    Full Text Available In a real supply chain, a capital-constrained retailer has two typical payment choices: up-front payment to receive a high price discount, or delayed payment to reduce capital pressure. We compare the efficiency of the optimal decisions of the different participants, that is, supplier, retailer, and bank, under both types of payment, based on a game equilibrium analysis. It shows that, under the equilibrium, delayed payment leads to a greater optimal order quantity from the retailer than up-front payment and thus improves the overall benefit of the supply chain. A numerical simulation for random demand following a uniform distribution further verifies our findings. This study provides novel evidence that a dominant supplier who actively offers trade credit helps enhance the overall efficiency of a supply chain.

  14. Testing the Optimality of Consumption Decisions of the Representative Household: Evidence from Brazil

    Directory of Open Access Journals (Sweden)

    Marcos Gesteira Costa

    2015-09-01

    Full Text Available This paper investigates whether there is a fraction of consumers that do not behave as fully forward-looking optimal consumers in the Brazilian economy. The generalized method of moments technique was applied to nonlinear Euler equations of the consumption-based capital asset model, contemplating utility functions with time separability and non-separability. The results show that when the household utility function was modeled as constant relative risk aversion, external habits, and Kreps–Porteus, the estimated fraction of rule-of-thumb households was, respectively, 89%, 78% and 22%. Accordingly, a portion of disposable income goes to households who consume their current incomes, in violation of the permanent income hypothesis.

  15. A study on the optimal fuel loading pattern design in pressurized water reactor using the artificial neural network and the fuzzy rule based system

    International Nuclear Information System (INIS)

    Kim, Han Gon; Chang, Soon Heung; Lee, Byung

    2004-01-01

    The Optimal Fuel Shuffling System (OFSS) is developed for the optimal design of PWR fuel loading patterns. In this paper, an optimal loading pattern is defined as one in which the local power peaking factor stays below a predetermined value during one cycle and the effective multiplication factor is maximized in order to extract maximum energy. OFSS is a hybrid system in which a rule based system, fuzzy logic, and an artificial neural network are interconnected. The rule based system classifies loading patterns into two classes using several heuristic rules and a fuzzy rule. The fuzzy rule is introduced to achieve a more effective and faster search; its membership function is automatically updated in accordance with the prediction results. The artificial neural network predicts core parameters for the patterns generated by the rule based system; a back-propagation network is used for fast prediction of core parameters. The artificial neural network and the fuzzy logic can be used as tools for improving the existing algorithm's capabilities. OFSS was demonstrated and validated for cycle 1 of the Kori unit 1 PWR. (author)

  16. A study on the optimal fuel loading pattern design in pressurized water reactor using the artificial neural network and the fuzzy rule based system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Han Gon; Chang, Soon Heung; Lee, Byung [Department of Nuclear Engineering, Korea Advanced Institute of Science and Technology, Yusong-gu, Taejon (Korea, Republic of)

    2004-07-01

    The Optimal Fuel Shuffling System (OFSS) is developed for the optimal design of PWR fuel loading patterns. In this paper, an optimal loading pattern is defined as one in which the local power peaking factor stays below a predetermined value during one cycle and the effective multiplication factor is maximized in order to extract maximum energy. OFSS is a hybrid system in which a rule based system, fuzzy logic, and an artificial neural network are interconnected. The rule based system classifies loading patterns into two classes using several heuristic rules and a fuzzy rule. The fuzzy rule is introduced to achieve a more effective and faster search; its membership function is automatically updated in accordance with the prediction results. The artificial neural network predicts core parameters for the patterns generated by the rule based system; a back-propagation network is used for fast prediction of core parameters. The artificial neural network and the fuzzy logic can be used as tools for improving the existing algorithm's capabilities. OFSS was demonstrated and validated for cycle 1 of the Kori unit 1 PWR. (author)

  17. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    Science.gov (United States)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing do we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is the optimal positioning of sampling locations and the optimal choice of data types so as to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill

  18. Implementing of the multi-objective particle swarm optimizer and fuzzy decision-maker in exergetic, exergoeconomic and environmental optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza

    2011-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system, known as the CGAM cogeneration system, is performed. In the optimization approach, exergetic, exergoeconomic and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency, as the exergetic objective, is maximized, while the unit cost of the system product and the cost of the environmental impact, as the exergoeconomic and environmental objectives respectively, are minimized. The economic model used in the exergoeconomic analysis is built on both the simple model (used in the original studies of the CGAM system) and the comprehensive TTR (total revenue requirement) method (used in sophisticated exergoeconomic analyses). Finally, a final optimal solution from the optimal set of the Pareto frontier is selected using a fuzzy decision-making process based on the Bellman-Zadeh approach, and the results are compared with corresponding results obtained in a traditional decision-making process. Further, results are compared with the corresponding performance of the base case CGAM system and the optimal designs of previous works, and discussed. -- Highlights: → A multi-objective optimization approach has been implemented in optimization of a benchmark cogeneration system. → Objective functions based on the environmental impact evaluation, thermodynamic and economic analysis are obtained and optimized. → Particle swarm optimizer implemented and its robustness is compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results compared with previous works in the field.

  19. Driver's Behavior and Decision-Making Optimization Model in Mixed Traffic Environment

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2015-02-01

    Full Text Available The driving process is a continuously ongoing information-processing procedure, and studying drivers' information-processing patterns in a mixed traffic environment is very important for research on traffic flow theory. In this paper, the bicycle is regarded as a kind of information source for vehicle drivers; the “conflict point method” is brought forward to analyze the influence of bicycles on driving behavior. The “conflict” is translated into a special kind of car-following or lane-changing process. Furthermore, the computer's clocked scan step length is reduced to 0.1 s in order to scan and analyze, in a more exact way, the dynamic (and static) information that influences driving behavior. The driver's decision-making process is described through information fusion based on duality contrast and fuzzy optimization theory. The model test and verification show that the simulation results obtained with the “conflict point method” and the field data are basically consistent. It is feasible to imitate driving behavior and the driver information fusion process with the proposed methods. The decision-making optimization process can be described more accurately through a precise computer clocked scan strategy. The study in this paper can provide the foundation for further research on the multiresource information fusion process of driving behavior.

  20. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    Science.gov (United States)

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change transforms the conditions of existing agricultural practices, requiring farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns and the updating of beliefs among farmers in a developing country when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data are based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5), representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) holds a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows for simulating an adjustment of such beliefs when improved information becomes available. The framework can therefore help facilitate the optimal choice between agricultural systems considering the influence of climate change.
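    The discrete Bayesian updating at the heart of this framework can be sketched in a few lines. The states, likelihoods and revenue figures below are invented for illustration; the paper's actual net revenue functions, climate projections and belief model are not reproduced here.

```python
# Hypothetical sketch of the Bayesian belief-updating idea described above.
# States, probabilities and payoffs are illustrative, not taken from the paper.

def bayes_update(prior, likelihoods):
    """Return posterior P(state | observation) from prior and P(obs | state)."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hidden climate states: "wetter" vs "drier" future.
belief = [0.5, 0.5]

# P(observing a dry season | state), one entry per state (assumed values).
p_dry_given_state = [0.3, 0.7]

# The farmer observes three dry seasons in a row and updates beliefs each time.
for _ in range(3):
    belief = bayes_update(belief, p_dry_given_state)

# Expected net revenue of each agricultural system under each state
# (columns: wetter, drier). Numbers are made up for the sketch.
revenue = {
    "dryland crops":   [40, 25],
    "irrigated crops": [70, 30],
    "livestock":       [50, 45],
}
expected = {k: sum(b * r for b, r in zip(belief, v)) for k, v in revenue.items()}
best = max(expected, key=expected.get)
print(best, round(belief[1], 3))  # → livestock 0.927
```

    After three consecutive dry observations the belief mass shifts toward the "drier" state, and the system whose payoff is most robust to dryness becomes the optimal choice.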

  1. Optimal Decision Making Framework of an Electric Vehicle Aggregator in Future and Pool markets

    DEFF Research Database (Denmark)

    Rashidizadeh-Kermani, Homa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    An electric vehicle (EV) aggregator, as an agent between power producers and EV owners, participates in the future and pool markets to supply the EVs' requirements. Because of the uncertain nature of pool prices and EV behavior, this paper proposes a two-stage scenario-based model to obtain the optimal decision making of an EV aggregator. To deal with the mentioned uncertainties, the aggregator's risk aversion is modeled using the conditional value at risk (CVaR) method. The proposed two-stage risk-constrained decision-making problem is applied to maximize the EV aggregator's expected profit in an uncertain environment. The aggregator can participate in the future and pool markets to buy the required energy of the EVs and offer optimal charge/discharge prices to the EV owners. In this model, in order to assess the effects of the EV owners' reaction to the aggregator's offered prices on the purchases from the electricity markets, a sensitivity analysis over the risk factor is performed. The numerical results demonstrate that with the application of the proposed model, the aggregator can supply the EVs with lower purchases from the markets.

  2. Development of a fuzzy optimization model, supporting global warming decision-making

    International Nuclear Information System (INIS)

    Leimbach, M.

    1996-01-01

    An increasing number of models have been developed to support global warming response policies. The model constructors face many uncertainties which limit the conclusiveness of these models. Support of climate policy decision-making is only possible in a semi-quantitative way, as offered by a Fuzzy model. The model design is based on an optimization approach integrated into a bounded-risk decision-making framework. Given some regional emission-related and impact-related restrictions, optimal emission paths can be calculated. The focus is not only on carbon dioxide but also on other greenhouse gases. In this paper, the components of the model are described. Cost coefficients, emission boundaries and impact boundaries are represented as Fuzzy parameters. The Fuzzy model is transformed into a computational one using an approach of Rommelfanger. In the second part, some problems of applying the model to computations are discussed. This includes discussions of the data situation and the presentation, as well as the interpretation of results of sensitivity analyses. The advantage of the Fuzzy approach is that the requirements regarding data precision are not as strong. Hence, the effort for data acquisition can be reduced and computations can be started earlier. 9 figs., 3 tabs., 17 refs., 1 appendix

  3. Rules as institutional context for decision making in networks; the approach to postwar housing districts in two cities.

    NARCIS (Netherlands)

    E-H. Klijn (Erik-Hans)

    2001-01-01

    One of the issues within the network approach to policy concerns the influence of the network structure on the policy processes that are taking place within it. A central point is the concept of rules. The basic assumption is that actors in networks form communal rules during their

  4. Bees do not use nearest-neighbour rules for optimization of multi-location routes.

    Science.gov (United States)

    Lihoreau, Mathieu; Chittka, Lars; Le Comber, Steven C; Raine, Nigel E

    2012-02-23

    Animals collecting patchily distributed resources are faced with complex multi-location routing problems. Rather than comparing all possible routes, they often find reasonably short solutions by simply moving to the nearest unvisited resources when foraging. Here, we report the travel optimization performance of bumble-bees (Bombus terrestris) foraging in a flight cage containing six artificial flowers arranged such that movements between nearest-neighbour locations would lead to a long suboptimal route. After extensive training (80 foraging bouts and at least 640 flower visits), bees reduced their flight distances and prioritized shortest possible routes, while almost never following nearest-neighbour solutions. We discuss possible strategies used during the establishment of stable multi-location routes (or traplines), and how these could allow bees and other animals to solve complex routing problems through experience, without necessarily requiring a sophisticated cognitive representation of space.
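    The nearest-neighbour rule that the bees avoided can be contrasted with the optimal route on a toy layout. The nest position and flower coordinates below are made up; the paper's actual flight-cage array is not reproduced.

```python
# Illustrative comparison of a nearest-neighbour route against the optimal
# route for six locations, mirroring the experimental logic described above.
# Flower coordinates are invented, not the paper's array.
from itertools import permutations
from math import dist

nest = (0.0, 0.0)
flowers = [(1, 4), (2, 1), (3, 5), (5, 2), (6, 6), (8, 1)]

def route_length(order):
    """Total length of a round trip nest -> flowers in 'order' -> nest."""
    points = [nest] + list(order) + [nest]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

def nearest_neighbour(start, sites):
    """Heuristic: always fly to the closest unvisited flower."""
    remaining, here, order = list(sites), start, []
    while remaining:
        nxt = min(remaining, key=lambda p: dist(here, p))
        order.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return order

nn_len = route_length(nearest_neighbour(nest, flowers))
best_len = min(route_length(p) for p in permutations(flowers))
print(round(nn_len, 2), round(best_len, 2))  # nearest-neighbour is longer here
```

    With six flowers the optimum is found by brute force over all 720 visit orders; on layouts like this one the nearest-neighbour tour is measurably longer than the shortest possible route, which is the kind of gap the trained bees learned to close.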

  5. Fiber coupled diode laser beam parameter product calculation and rules for optimized design

    Science.gov (United States)

    Wang, Zuolan; Segref, Armin; Koenning, Tobias; Pandey, Rajiv

    2011-03-01

    The Beam Parameter Product (BPP) of a passive, lossless system is a constant and cannot be improved upon, but the beams may be reshaped for enhanced coupling performance. The task of the optical designer of fiber coupled diode lasers is to preserve the brightness of the diode sources while maximizing the coupling efficiency. In coupling diode laser power into a fiber output, the symmetrical geometry of the fiber core makes it highly desirable to have symmetrical BPPs at the fiber input surface, but this is not always practical. It is therefore desirable to be able to know the 'diagonal' (fiber) BPP, using the BPPs of the fast and slow axes, before detailed design and simulation processes. A commonly used expression for this purpose, i.e. the square root of the sum of the squares of the BPPs in the fast and slow axes, has been found to consistently under-predict the fiber BPP (i.e. better beam quality is predicted than is actually achievable in practice). In this paper, using a simplified model, we provide the proof of the proper calculation of the diagonal (i.e. the fiber) BPP using the BPPs of the fast and slow axes as input. Using the same simplified model, we also offer proof that the fiber BPP has a minimum (optimal) value for given diode BPPs, and that this optimized condition can be obtained before any detailed design and simulation are carried out. Measured and simulated data confirm satisfactory correlation between the BPPs of the diode and the predicted fiber BPP.
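    The commonly used root-sum-of-squares estimate criticized in the abstract is straightforward to state in code. The axis BPP values below are hypothetical; the paper's corrected diagonal formula is not reproduced here, only the estimate it shows to under-predict.

```python
# Numerical illustration of the root-sum-of-squares "diagonal" BPP estimate
# the text refers to; the axis values are hypothetical, not from the paper.
from math import hypot

bpp_fast = 5.0   # mm·mrad, fast-axis beam parameter product (assumed)
bpp_slow = 20.0  # mm·mrad, slow-axis beam parameter product (assumed)

# sqrt(BPP_fast^2 + BPP_slow^2). The paper finds this consistently
# under-predicts the fiber BPP actually achievable in practice.
bpp_fiber_rss = hypot(bpp_fast, bpp_slow)
print(round(bpp_fiber_rss, 2))  # → 20.62
```

    Any design margin based on this number alone would therefore be optimistic, which is the practical point of the paper's corrected calculation.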

  6. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  7. The Manchester Acute Coronary Syndromes (MACS) decision rule: validation with a new automated assay for heart-type fatty acid binding protein.

    Science.gov (United States)

    Body, Richard; Burrows, Gillian; Carley, Simon; Lewis, Philip S

    2015-10-01

    The Manchester Acute Coronary Syndromes (MACS) decision rule may enable acute coronary syndromes to be immediately 'ruled in' or 'ruled out' in the emergency department. The rule incorporates heart-type fatty acid binding protein (h-FABP) and high sensitivity troponin T levels. The rule was previously validated using a semiautomated h-FABP assay that was not practical for clinical implementation. We aimed to validate the rule with an automated h-FABP assay that could be used clinically. In this prospective diagnostic cohort study we included patients presenting to the emergency department with suspected cardiac chest pain. Serum drawn on arrival was tested for h-FABP using an automated immunoturbidimetric assay (Randox) and high sensitivity troponin T (Roche). The primary outcome, a diagnosis of acute myocardial infarction (AMI), was adjudicated based on 12 h troponin testing. A secondary outcome, major adverse cardiac events (MACE; death, AMI, revascularisation or new coronary stenosis), was determined at 30 days. Of the 456 patients included, 78 (17.1%) had AMI and 97 (21.3%) developed MACE. Using the automated h-FABP assay, the MACS rule had the same C-statistic for MACE as the original rule (0.91; 95% CI 0.88 to 0.92). 18.9% of patients were identified as 'very low risk' and thus eligible for immediate discharge with no missed AMIs and a 2.3% incidence of MACE (n=2, both coronary stenoses). 11.1% of patients were classed as 'high-risk' and had a 92.0% incidence of MACE. Our findings validate the performance of a refined MACS rule incorporating an automated h-FABP assay, facilitating use in clinical settings. The effectiveness of this refined rule should be verified in an interventional trial prior to implementation. UK CRN 8376.

  8. Transforming data into decisions to optimize the recovery of the Saih Rawl Field in Oman

    Energy Technology Data Exchange (ETDEWEB)

    Dozier, G C [Society of Petroleum Engineers, Dubai (United Arab Emirates); [Schlumberger Oilfield Services, Dubai (United Arab Emirates); Giacon, P [Society of Petroleum Engineers, Dubai (United Arab Emirates); [Petroleum Development of Oman (Oman)

    2006-07-01

    The Saih Rawl field of Oman has been producing for more than 5 years from the Barik and Miqrat Formations. Well productivity depends greatly on the effectiveness of hydraulic fracturing and other operating practices. Productivity is further complicated by the changing mechanical and reservoir properties related to depletion and intralayer communication. In this study, a systematic approach was used by a team of operators and service companies to optimize well production within a one-year period. The approach involved a dynamic integration of historical data with new information technologies and engineering diagnostics to identify the key parameters that influence productivity and to optimize performance according to current analyses. In particular, historical pressure trends by unit were incorporated with theoretical assumptions validated by indirect field evidence. Onsite decision-making resulted in effective placement of fracture treatments. The approach has produced some of the highest-producing wells in the field's history. It was concluded that optimization and maximization of well productivity require multidisciplinary inputs that should be managed through a structured workflow that includes not only the classical simulation design inputs but the entire process from design to execution, with particular emphasis on cleanup practices and induced fluid damage. 6 refs., 2 tabs., 25 figs.

  9. Optimal Modeling of Wireless LANs: A Decision-Making Multiobjective Approach

    Directory of Open Access Journals (Sweden)

    Tomás de Jesús Mateo Sanguino

    2018-01-01

    Full Text Available Communication infrastructure planning is a critical design task that typically requires handling complex networking concepts aimed at optimizing performance and resources, thus demanding high analytical and problem-solving skills from engineers. To reduce this gap, this paper describes an optimization algorithm, based on an evolutionary strategy, created as an aid for decision-making prior to the real deployment of wireless LANs. The developed algorithm allows automating the design process, traditionally done by hand by network technicians, in order to save time and cost by improving the WLAN arrangement. To this end, we implemented a multiobjective genetic algorithm (MOGA) with the purpose of meeting two simultaneous design objectives, namely, minimizing the number of APs while maximizing the coverage signal over the whole planning area. Such an approach provides efficient and scalable solutions close to the best network design, so we integrated the developed algorithm into an engineering tool, called WiFiSim, with the goal of modelling the behavior of WLANs in ICT infrastructures. It allows the investigation of various complex issues concerning the design of IEEE 802.11-based WLANs, thereby facilitating the study, design, and optimal deployment of wireless LANs through complete modelling software. As a result, we comparatively evaluated three target applications, considering small, medium, and large scenarios, against a previously developed mono-objective genetic algorithm.
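    The two-objective selection step behind such a MOGA can be sketched as a Pareto filter over candidate AP layouts. The grid size, AP range and population settings below are assumptions for illustration, not WiFiSim internals, and a full MOGA would iterate recombination and mutation over many generations.

```python
# Minimal sketch of the two-objective selection idea (minimize AP count,
# maximize coverage) behind a MOGA such as the one described above.
# Grid, radius and GA settings are assumptions, not WiFiSim internals.
import random
from math import dist

random.seed(1)
AREA = [(x, y) for x in range(10) for y in range(10)]  # planning-area grid
RADIUS = 3.0                                           # assumed AP range

def coverage(aps):
    """Fraction of grid points within RADIUS of at least one AP."""
    covered = sum(1 for p in AREA if any(dist(p, a) <= RADIUS for a in aps))
    return covered / len(AREA)

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    (na, ca), (nb, cb) = a, b
    return na <= nb and ca >= cb and (na < nb or ca > cb)

# Candidate solutions: random layouts with 1..6 APs.
population = [[(random.uniform(0, 9), random.uniform(0, 9))
               for _ in range(random.randint(1, 6))] for _ in range(40)]
scores = [(len(s), coverage(s)) for s in population]

# Keep the non-dominated (Pareto) front; a full MOGA would now recombine
# and mutate these layouts over many generations.
front = [s for s in scores if not any(dominates(t, s) for t in scores if t != s)]
for n_aps, cov in sorted(set(front)):
    print(n_aps, cov)
```

    The printed front is the trade-off curve a designer would choose from: each additional AP is justified only by a coverage gain no cheaper layout achieves.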

  10. Decision-making methodology of optimal shielding materials by using fuzzy linear programming

    International Nuclear Information System (INIS)

    Kanai, Y.; Miura, T.; Hirao, Y.

    2000-01-01

    The main purpose of our study is to select materials and determine the ratio of constituent materials as the first stage of optimum shielding design, to suit the individual requirements of nuclear reactors, reprocessing facilities, casks for shipping spent fuel, etc. The parameters of the shield optimization are cost, space, weight and several shielding properties, such as activation rates for individual irradiation and cooling times, and the total dose rate for neutrons (including secondary gamma rays) and for primary gamma rays. Using conventional two-valued (i.e. crisp) logic approaches, huge combinatorial calculations are needed to identify suitable materials for an optimum shielding design. Re-computation is also required for minor changes, as the approach does not react sensitively to the computation result. In the present approach, which uses a fuzzy linear programming method, much of the decision-making toward a satisfying solution can take place in a fuzzy environment, and it can quickly and easily provide a guiding principle for the optimal selection of shielding materials under the above-mentioned conditions. The possibility of reducing radiation effects by optimizing the ratio of constituent materials is investigated. (author)
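    The max-min fuzzy decision this record alludes to can be illustrated on a one-dimensional toy problem: choosing the mixing ratio of two hypothetical shield materials under fuzzy goals on cost and dose rate. All numbers and membership shapes below are invented; they are not the paper's data.

```python
# Toy illustration of a max-min fuzzy decision: pick the mixing ratio of two
# hypothetical shield materials so that fuzzy goals on cost and dose rate are
# jointly satisfied as well as possible. All numbers are invented.

def mu_linear(value, good, bad):
    """Membership 1 at 'good', falling linearly to 0 at 'bad'."""
    if good < bad:  # smaller is better
        t = (bad - value) / (bad - good)
    else:           # larger is better
        t = (value - bad) / (good - bad)
    return max(0.0, min(1.0, t))

def evaluate(r):
    """r = fraction of material A (0..1); the rest is material B."""
    cost = 100 * r + 40 * (1 - r)         # A is pricier (assumed)
    dose = 2.0 * r + 8.0 * (1 - r)        # A shields better (assumed)
    return min(mu_linear(cost, 40, 100),  # fuzzy goal: low cost
               mu_linear(dose, 2.0, 8.0)) # fuzzy goal: low dose rate

# Max-min decision: the ratio with the highest worst-case membership.
best_r = max((i / 100 for i in range(101)), key=evaluate)
print(round(best_r, 2), round(evaluate(best_r), 2))  # → 0.5 0.5
```

    With these symmetric membership functions the two goals pull equally hard, so the compromise lands at a 50/50 mixture; in a real shielding problem the memberships would come from the dose, activation and cost models the abstract lists.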

  11. Optimal sum-rule inequalities for spin 1/2 Compton scattering. III

    International Nuclear Information System (INIS)

    Filkov, L.V.

    1980-10-01

    The analyticity (optimal) bounds for proton Compton scattering presented in the preceding paper are herewith considered from the point of view of experimental tests. An essential function occurring in this new dispersion framework is constructed numerically, making use of existing cross-section data above the pion photoproduction threshold. To secure a safer construction, new measurements in the photon laboratory energy region 150 MeV - 240 MeV and at small momentum transfers are necessary. The bounds on the scattering amplitudes in the low-energy region below the pion photoproduction threshold are in general sufficiently restrictive to be useful in discriminating among variants of theoretical phenomenological analyses, but the subsequent extremizations needed in bounding only one combination of the amplitudes (the unpolarized differential cross-section) weaken the results considerably. The question of strengthening the bounds by means of the combined use of analyticity and unitarity is discussed within a very crude example, which nonetheless illustrates that the inclusion of the pion photoproduction data through more elaborate mathematical procedures would deserve the effort. (author)

  12. Optimal sum rules inequalities for spin 1/2 Compton scattering

    International Nuclear Information System (INIS)

    Guiasu, I.; Radescu, E.E.; Razillier, I.

    1979-08-01

    A formalism appropriate for model independent dispersion theoretic investigations of the (not necessarily forward) Compton scattering off spin 1/2 hadronic targets, which fully exploits the analyticity properties of the amplitudes (to lowest order in electromagnetism) in ν² at fixed t (ν = (s-u)/4; s, t, u = Mandelstam variables), is developed. It relies on methods which are specific to boundary value problems for analytic matrix-valued functions. An analytic factorization of the positive definite hermitian matrix associated with the bilinear expression of the unpolarized differential cross section (u.d.c.s.) in terms of the Bardeen-Tung (B.T.) invariant amplitudes is explicitly obtained. For t in a specified portion of the physical region, six new amplitudes describing the process are thereby constructed which have the same good analyticity structure in ν² as the (crossing symmetrized) B.T. amplitudes, while their connection with the usual helicity amplitudes is given by a matrix which is unitary on the unitarity cut. A bound on a certain integral over the u.d.c.s. above the first inelastic threshold, established in terms of the target's charge and anomalous magnetic moment, improves a previous weaker result, being now optimal under the information accepted as known. (author)

  13. A decision aid to rule out pneumonia and reduce unnecessary prescriptions of antibiotics in primary care patients with cough and fever

    Directory of Open Access Journals (Sweden)

    Hunziker Roger

    2011-05-01

    Background: Physicians fear missing cases of pneumonia and treat many patients with signs of respiratory infection unnecessarily with antibiotics. This is an avoidable cause of the increasing worldwide problem of antibiotic resistance. We developed a user-friendly decision aid to rule out pneumonia and thus reduce the rate of needless prescriptions of antibiotics. Methods: This was a prospective cohort study in which we enrolled patients older than 18 years with a new or worsened cough and fever without serious co-morbidities. Physicians recorded the results of a standardized medical history and physical examination. C-reactive protein was measured and chest radiographs were obtained. We used Classification and Regression Trees to derive the decision tool. Results: A total of 621 consenting eligible patients were studied; 598 were attending a primary care facility, the average age was 48 years, and 50% were male. Radiographic signs of pneumonia were present in 127 (20.5%) of the patients. Antibiotics were prescribed to 234 (48.3%) of the patients without pneumonia. In patients with C-reactive protein values below 10 μg/ml, or in patients with C-reactive protein between 11 and 50 μg/ml but without dyspnoea and daily fever, pneumonia can be ruled out. By applying this rule in clinical practice, antibiotic prescription could be reduced by 9.1% (95% confidence interval (CI): 6.4 to 11.8). Conclusions: Following validation and confirmation in new patient samples, this tool could help rule out pneumonia and be used to reduce unnecessary antibiotic prescriptions in patients presenting with cough and fever in primary care. The algorithm might be especially useful in those instances where taking a medical history and physical examination alone are inconclusive for ruling out pneumonia.
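
    The rule's logic is simple enough to sketch in code. The CRP thresholds below are taken from the abstract; the function name, interface, and the reading of "without dyspnoea and daily fever" as both symptoms being absent are illustrative assumptions:

```python
def pneumonia_ruled_out(crp, dyspnoea, daily_fever):
    """Apply the derived rule: pneumonia can be ruled out when CRP < 10
    ug/ml, or when CRP is between 11 and 50 ug/ml and neither dyspnoea nor
    daily fever is present. Thresholds are from the abstract; the function
    name and interface are illustrative assumptions."""
    if crp < 10:
        return True
    if 11 <= crp <= 50 and not dyspnoea and not daily_fever:
        return True
    return False
```

    Under this sketch, a CRP of 8 μg/ml rules pneumonia out regardless of symptoms, whereas a CRP of 30 μg/ml rules it out only when both dyspnoea and daily fever are absent.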

  14. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    Science.gov (United States)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives in order to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. It is also found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
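
    The core AHP computation such a model relies on, deriving priority weights from a pairwise comparison matrix, can be sketched as follows. The row geometric-mean method is used here as a common approximation to the principal-eigenvector calculation, and the matrix values are made up, not taken from the paper:

```python
import math

def ahp_weights(M):
    """Derive AHP priority weights from a pairwise comparison matrix using
    the row geometric-mean method, a common approximation to the principal
    eigenvector. M[i][j] says how much more important criterion i is than
    criterion j on Saaty's 1-9 scale (with M[j][i] = 1 / M[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative 3-criterion comparison matrix (values invented, not the paper's):
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 0.5, 1.0]]
w = ahp_weights(M)
```

    The resulting weights sum to one and preserve the order of importance expressed in the comparisons (here the first criterion dominates).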

  15. Optimizing patient treatment decisions in an era of rapid technological advances: the case of hepatitis C treatment.

    Science.gov (United States)

    Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D

    2017-03-01

    How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal decision is more likely to be acceptance of the currently available treatment, despite expectations of future treatment improvement, for patients who have a high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
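
    The backward-induction structure of such a finite-horizon stopping problem can be illustrated with a deliberately small model. All parameters below (treatment values, improvement probability, per-period reward while waiting) are invented for illustration and are not the paper's calibration:

```python
def optimal_stopping(T, v0, delta, p, wait_reward):
    """Backward induction for a toy treat-vs-wait problem: treating yields
    the currently available treatment value; waiting yields wait_reward this
    period and, with probability p, the available treatment improves by
    delta. V[t][k] is the value at period t after k improvements have
    occurred. All parameters are illustrative, not the paper's model."""
    V = [[0.0] * (T + 1) for _ in range(T + 1)]
    for k in range(T + 1):                      # at the horizon, treat now
        V[T][k] = v0 + k * delta
    for t in range(T - 1, -1, -1):
        for k in range(t + 1):                  # at period t, at most t improvements
            treat = v0 + k * delta
            wait = wait_reward + p * V[t + 1][k + 1] + (1 - p) * V[t + 1][k]
            V[t][k] = max(treat, wait)
    return V

# Deteriorating disease: small negative reward while waiting untreated.
V = optimal_stopping(T=3, v0=10.0, delta=2.0, p=0.5, wait_reward=-0.5)
```

    Comparing V[0][0] with v0 shows whether waiting at all beats immediate treatment for these (invented) numbers.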

  16. Decision models for use with criterion-referenced tests

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1980-01-01

    The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decision rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive

  17. Prediction of high-grade vesicoureteral reflux after pediatric urinary tract infection: external validation study of procalcitonin-based decision rule.

    Directory of Open Access Journals (Sweden)

    Sandrine Leroy

    Predicting vesico-ureteral reflux (VUR) ≥3 at the time of the first urinary tract infection (UTI) would make it possible to restrict cystography to high-risk children. We previously derived the following clinical decision rule for that purpose: cystography should be performed in cases with ureteral dilatation and a serum procalcitonin level ≥0.17 ng/mL, or without ureteral dilatation when the serum procalcitonin level is ≥0.63 ng/mL. The rule yielded 86% sensitivity with 46% specificity. We aimed to test its reproducibility in a secondary analysis of prospective series of children with a first UTI. The rule was applied, and its predictive ability was calculated. The study included 413 patients (157 boys; VUR ≥3 in 11%) from eight centers in five countries. The rule offered a 46% specificity (95% CI, 41-52), not different from that in the derivation study. However, the sensitivity significantly decreased to 64% (95% CI, 50-76), a difference of 20% (95% CI, 17-36). In all, 16 (34%) patients among the 47 with VUR ≥3 were misdiagnosed by the rule. This lack of reproducibility might result primarily from a difference between the derivation and validation populations regarding inflammatory parameters (CRP, PCT); the validation set samples may have been collected earlier than the derivation ones. The rule built to predict VUR ≥3 had a stable specificity (i.e., 46%) but a decreased sensitivity (i.e., 64%) because of the time variability of PCT measurement. Some refinement may be warranted.

  18. Engaging Gatekeepers, Optimizing Decision Making, and Mitigating Bias: Design Specifications for Systemic Diversity Interventions.

    Science.gov (United States)

    Vinkenburg, Claartje J

    2017-06-01

    In this contribution to the Journal of Applied Behavioral Science Special Issue on Understanding Diversity Dynamics in Systems: Social Equality as an Organization Change Issue, I develop and describe design specifications for systemic diversity interventions in upward mobility career systems, aimed at optimizing decision making through mitigating bias by engaging gatekeepers. These interventions address the paradox of meritocracy that underlies the surprising lack of diversity at the top of the career pyramid in these systems. I ground the design specifications in the limited empirical evidence on "what works" in systemic interventions. Specifically, I describe examples from interventions in academic settings, including a bias literacy program, participatory modeling, and participant observation. The design specifications, paired with inspirational examples of successful interventions, should assist diversity officers and consultants in designing and implementing interventions to promote the advancement to and representation of nondominant group members at the top of the organizational hierarchy.

  19. Optimization-based decision support to assist in logistics planning for hospital evacuations.

    Science.gov (United States)

    Glick, Roger; Bish, Douglas R; Agca, Esra

    2013-01-01

    The evacuation of a hospital is a very complex process, and evacuation planning is an important part of a hospital's emergency management plan. Numerous factors affect the evacuation plan, including the nature of the threat, the availability of resources and staff, the characteristics of the evacuee population, and the risk to patients and staff. The safety and health of patients is of fundamental importance, but safely moving patients to alternative care facilities while under threat is a very challenging task. This article describes the logistical issues and complexities involved in the planning and execution of hospital evacuations. Furthermore, this article provides examples of how optimization-based decision support tools can help evacuation planners to better plan for complex evacuations by providing real-world solutions to various evacuation scenarios.

  20. Management of redundancy in flight control systems using optimal decision theory

    Science.gov (United States)

    1981-01-01

    The problem of using redundancy that exists between dissimilar systems in aircraft flight control is addressed; that is, the redundancy between a rate gyro and an accelerometer, devices whose dissimilar outputs are related only through the dynamics of the aircraft motion. Management of this type of redundancy requires advanced logic so that the system can monitor failure status and can reconfigure itself in the event of one or more failures. An optimal decision theory for the management of sensor redundancy is developed in tutorial fashion and applied to two aircraft examples: the space shuttle, and a highly maneuverable high-performance aircraft, the F8-C. The examples illustrate the redundancy management design process and the performance of the presented algorithms in failure detection and control-law reconfiguration.

  1. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Due to high implementation rates of smart meter systems, a considerable amount of research is devoted to machine learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA), namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A comparison of the algorithms is employed. Next, a single test case on the selection of an algorithm is examined. User-specific weights are applied, and based on these weight values the optimal algorithm is drawn.
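
    A minimal TOPSIS computation of the kind the paper applies might look like the following sketch; the alternatives, criteria scores, and weights are hypothetical, not the paper's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS: rank alternatives by relative closeness to the ideal
    solution. matrix[i][j] is the score of alternative i on criterion j;
    benefit[j] is True when larger values are better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in V:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, worst)   # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical clustering algorithms scored on a validity index
# (higher is better) and runtime in seconds (lower is better):
scores = topsis([[0.8, 30.0], [0.6, 10.0], [0.7, 20.0]],
                weights=[0.5, 0.5], benefit=[True, False])
best = max(range(len(scores)), key=scores.__getitem__)
```

    Each closeness score lies between 0 and 1, and the alternative with the highest score is preferred.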

  2. Safety Lead Optimization and Candidate Identification: Integrating New Technologies into Decision-Making.

    Science.gov (United States)

    Dambach, Donna M; Misner, Dinah; Brock, Mathew; Fullerton, Aaron; Proctor, William; Maher, Jonathan; Lee, Dong; Ford, Kevin; Diaz, Dolores

    2016-04-18

    Discovery toxicology focuses on the identification of the most promising drug candidates through the development and implementation of lead optimization strategies and hypothesis-driven investigation of issues that enable rational and informed decision-making. The major goals are to [a] identify and progress the drug candidate with the best overall drug safety profile for a therapeutic area, [b] remove the most toxic drugs from the portfolio prior to entry into humans to reduce clinical attrition due to toxicity, and [c] establish a well-characterized hazard and translational risk profile to enable clinical trial designs. This is accomplished through a framework that balances the multiple considerations to identify a drug candidate with the overall best drug characteristics and provides a cogent understanding of mechanisms of toxicity. The framework components include establishing a target candidate profile for each program that defines the qualities of a successful candidate based on the intended therapeutic area, including the risk tolerance for liabilities; evaluating potential liabilities that may result from engaging the therapeutic target (pharmacology-mediated or on-target) and that are chemical structure-mediated (off-target); and characterizing identified liabilities. Lead optimization and investigation relies upon the integrated use of a variety of technologies and models (in silico, in vitro, and in vivo) that have achieved a sufficient level of qualification or validation to provide confidence in their use. We describe the strategic applications of various nonclinical models (established and new) for a holistic and integrated risk assessment that is used for rational decision-making. While this review focuses on strategies for small molecules, the overall concepts, approaches, and technologies are generally applicable to biotherapeutics.

  3. Review of tri-generation technologies: Design evaluation, optimization, decision-making, and selection approach

    International Nuclear Information System (INIS)

    Al Moussawi, Houssein; Fardoun, Farouk; Louahlia-Gualous, Hasna

    2016-01-01

    Highlights: • Trigeneration technologies classified and reviewed according to prime movers. • Relevant heat recovery equipment discussed with thermal energy storage. • Trigeneration evaluated based on energy, exergy, economy, environment criteria. • Design, optimization, and decision-making methods classified and presented. • System selection suggested according to user preferences. - Abstract: Electricity, heating, and cooling are the three main components constituting the tripod of energy consumption in residential, commercial, and public buildings all around the world. Their separate generation causes higher fuel consumption, at a time where energy demands and fuel costs are continuously rising. Combined cooling, heating, and power (CCHP) or trigeneration could be a solution for such challenge yielding an efficient, reliable, flexible, competitive, and less pollutant alternative. A variety of trigeneration technologies are available and their proper choice is influenced by the employed energy system conditions and preferences. In this paper, different types of trigeneration systems are classified according to the prime mover, size and energy sequence usage. A leveled selection procedure is subsequently listed in the consecutive sections. The first level contains the applied prime mover technologies which are considered to be the heart of any CCHP system. The second level comprises the heat recovery equipment (heating and cooling) of which suitable selection should be compatible with the used prime mover. The third level includes the thermal energy storage system and heat transfer fluid to be employed. For each section of the paper, a survey of conducted studies with CHP/CCHP implementation is presented. A comprehensive table of evaluation criteria for such systems based on energy, exergy, economy, and environment measures is performed, along with a survey of the methods used in their design, optimization, and decision-making. Moreover, a classification

  4. Optimal site selection for siting a solar park using multi-criteria decision analysis and geographical information systems

    Science.gov (United States)

    Georgiou, Andreas; Skarlatos, Dimitrios

    2016-07-01

    Among renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient as the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in the years to come, and considerable investments are to be expected. Whatever the size of a solar plant, the siting procedure is a crucial factor for its efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial, to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework to the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, which is used to simplify problems as well as to solve them and to promote the use of multiple criteria. A set of environmental, economic, social, and technical constraints, based on recent Cypriot legislation, European Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method in the context of the analytic hierarchy process (AHP) is applied to estimate the criteria weights in order to establish their relative importance in site evaluation. In addition, four different methods to combine information layers and check their sensitivity were used. The first considered all the criteria as being equally important and assigned them equal weight, whereas the others grouped the criteria and graded them according to their perceived importance. The overall suitability of the study

  5. Risk based economic optimization of investment decisions of regulated power distribution system operators; Risikobasierte wirtschaftliche Optimierung von Investitionsentscheidungen regulierter Stromnetzbetreiber

    Energy Technology Data Exchange (ETDEWEB)

    John, Oliver

    2012-07-01

    The author of the contribution under consideration reports on the risk-based economic optimization of investment decisions of regulated power distribution system operators. The focus is the economically rational decision behavior of operators under certain regulatory requirements, with investments in power distribution systems forming the decisions under consideration. Starting from a description of theoretical and practical regulatory approaches, their financial implications are quantified first. On this basis, optimization strategies with respect to investment behavior are derived. For this purpose, an optimization algorithm is developed and applied to exemplary companies. Finally, the effects of uncertainties in regulatory systems are investigated. In this context, Monte Carlo simulations are used in conjunction with real options analysis.

  6. Barriers and facilitators to the dissemination of DECISION+, a continuing medical education program for optimizing decisions about antibiotics for acute respiratory infections in primary care: A study protocol

    Directory of Open Access Journals (Sweden)

    Gagnon Marie-Pierre

    2011-01-01

    decision making regarding the use of antibiotics in acute respiratory infections, to facilitate its dissemination in primary care on a large scale. Our results should help continuing medical educators develop a continuing medical education program in shared decision making for other clinically relevant topics. This will help optimize clinical decisions in primary care.

  7. Development & optimization of a rule-based energy management strategy for fuel economy improvement in hybrid electric vehicles

    Science.gov (United States)

    Asfoor, Mostafa

    The gradual decline of oil reserves and the increasing demand for energy over the past decades have led automotive manufacturers to seek alternative solutions to reduce the dependency on fossil-based fuels for transportation. A viable technology that enables significant improvements in overall energy conversion efficiency is the hybridization of conventional vehicle drive systems. This dissertation builds on prior hybrid powertrain development at the University of Idaho. Advanced vehicle models of a passenger car with a conventional powertrain and three different hybrid powertrain layouts were created using GT-Suite. These different powertrain models were validated against a variety of standard driving cycles. The overall fuel economy, energy consumption, and losses were monitored, and a comprehensive energy analysis was performed to compare energy sources and sinks. The GT-Suite model was then used to predict the formula hybrid SAE vehicle performance. Inputs to this model were a numerically predicted engine performance map, an electric motor torque curve, vehicle geometry, and road load parameters derived from a roll-down test. In this case study, the vehicle had a supervisory controller that followed a rule-based energy management strategy to ensure a proper power split during hybrid mode operation. The supervisory controller parameters were optimized using a discrete grid optimization method that minimized the total amount of fuel consumed during a specific urban driving cycle with an average speed of approximately 30 [mph]. More than a 15% increase in fuel economy was achieved by adding supervisory control and managing the power split: the vehicle configuration without the supervisory controller displayed a fuel economy of 25 [mpg], which rose to 29 [mpg] with the supervisory controller. Wider applications of this research include hybrid vehicle controller designs that can extend the range and survivability of military combat platforms.
Furthermore, the

  8. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  9. Optimal Channel Selection Based on Online Decision and Offline Learning in Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mu Qiao

    2017-01-01

    We propose a channel selection strategy with a hybrid architecture, which combines the centralized method and the distributed method to alleviate the overhead of the access point and at the same time provide more flexibility in network deployment. Within this architecture, we make use of game theory and reinforcement learning to achieve optimal channel selection under different communication scenarios. In particular, when the network can satisfy the requirements of energy and computational costs, the online decision algorithm based on a noncooperative game can help each individual sensor node immediately select the optimal channel. Alternatively, when the network cannot satisfy these requirements, the offline learning algorithm based on reinforcement learning can help each individual sensor node learn from its experience and iteratively adjust its behavior toward the expected target. Extensive simulation results validate the effectiveness of our proposal and also show that higher system throughput can be achieved by our channel selection strategy than by conventional off-policy channel selection approaches.
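
    The offline-learning branch can be illustrated with a stateless Q-learning toy: a single node estimates per-channel quality from simulated transmission outcomes under an epsilon-greedy policy. This is a simplification for illustration, not the paper's algorithm, and the channel success rates are invented:

```python
import random

def learn_channel(success_prob, episodes=5000, eps=0.1, alpha=0.1, seed=0):
    """Stateless Q-learning toy: a single sensor node estimates per-channel
    quality from simulated transmission outcomes under an epsilon-greedy
    policy. success_prob[c] is the hypothetical success rate of channel c."""
    rng = random.Random(seed)
    q = [0.0] * len(success_prob)
    for _ in range(episodes):
        if rng.random() < eps:
            c = rng.randrange(len(q))                   # explore
        else:
            c = max(range(len(q)), key=q.__getitem__)   # exploit
        reward = 1.0 if rng.random() < success_prob[c] else 0.0
        q[c] += alpha * (reward - q[c])                 # running estimate
    return q

# Three hypothetical channels; channel 1 has the best success rate.
q = learn_channel([0.2, 0.9, 0.5])
```

    After enough episodes the learned Q-values separate the good channel from the poor ones, so the greedy choice converges to the best channel.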

  10. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo; Bajic, Vladimir B.

    2017-01-01

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  11. A web-based Decision Support System for the optimal management of construction and demolition waste.

    Science.gov (United States)

    Banias, G; Achillas, Ch; Vlachokostas, Ch; Moussiopoulos, N; Papaioannou, I

    2011-12-01

    Waste from construction activities nowadays constitutes the largest fraction by quantity of solid waste in urban areas. In addition, it is widely accepted that this particular waste stream contains hazardous materials, such as insulating materials and the plastic frames of doors and windows. Its uncontrolled disposal results in long-term pollution costs, resource overuse, and wasted energy. Within the framework of the DEWAM project, a web-based Decision Support System (DSS) application, namely DeconRCM, has been developed, aimed at identifying the optimal construction and demolition waste (CDW) management strategy that minimises end-of-life costs and maximises the recovery of salvaged building materials. This paper addresses both the technical and the functional structure of the developed web-based application. The web-based DSS provides an accurate estimation of the generated CDW quantities for twenty-one different waste streams (e.g., concrete, bricks, glass) and four different types of buildings (residential, office, commercial, and industrial). With the use of mathematical programming, DeconRCM also provides the user with the optimal end-of-life management alternative, taking into consideration both economic and environmental criteria. The DSS's capabilities are illustrated through a real-world case study of a typical five-floor apartment building in Thessaloniki, Greece.

  12. A markov decision process model for the optimal dispatch of military medical evacuation assets.

    Science.gov (United States)

    Keneally, Sean K; Robbins, Matthew J; Lunday, Brian J

    2016-06-01

    We develop a Markov decision process (MDP) model to examine aerial military medical evacuation (MEDEVAC) dispatch policies in a combat environment. The problem of deciding which aeromedical asset to dispatch to each service request is complicated by the threat conditions at the service locations and the priority class of each casualty event. We assume requests for MEDEVAC support arrive sequentially, with the location and the priority of each casualty known upon initiation of the request. The United States military uses a 9-line MEDEVAC request system to classify casualties as being one of three priority levels: urgent, priority, and routine. Multiple casualties can be present at a single casualty event, with the highest priority casualty determining the priority level for the casualty event. Moreover, an armed escort may be required depending on the threat level indicated by the 9-line MEDEVAC request. The proposed MDP model indicates how to optimally dispatch MEDEVAC helicopters to casualty events in order to maximize steady-state system utility. The utility gained from servicing a specific request depends on the number of casualties, the priority class for each of the casualties, and the locations of both the servicing ambulatory helicopter and casualty event. Instances of the dispatching problem are solved using a relative value iteration dynamic programming algorithm. Computational examples are used to investigate optimal dispatch policies under different threat situations and armed escort delays; the examples are based on combat scenarios in which United States Army MEDEVAC units support ground operations in Afghanistan.
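
    Relative value iteration, the solution technique named in the abstract, can be sketched generically for an average-reward MDP. The two-state, two-action "dispatch vs. hold" instance below is invented for illustration and bears no relation to the paper's actual model:

```python
def relative_value_iteration(P, r, iters=500):
    """Relative value iteration for an average-reward MDP. P[a][s][s2] is
    the transition probability under action a; r[a][s] is the one-step
    reward. Returns the gain g, the relative value function h (with h[0]
    fixed at 0), and a greedy policy."""
    n = len(r[0])
    h = [0.0] * n
    g = 0.0
    for _ in range(iters):
        Th = [max(r[a][s] + sum(P[a][s][s2] * h[s2] for s2 in range(n))
                  for a in range(len(r))) for s in range(n)]
        g = Th[0] - h[0]                 # gain estimate at reference state 0
        h = [x - Th[0] for x in Th]      # renormalize so that h[0] == 0
    policy = [max(range(len(r)),
                  key=lambda a: r[a][s] + sum(P[a][s][s2] * h[s2]
                                              for s2 in range(n)))
              for s in range(n)]
    return g, h, policy

# Toy instance. States: 0 = asset idle, 1 = asset busy.
# Actions: 0 = dispatch, 1 = hold. All numbers invented.
P = [[[0.3, 0.7], [0.6, 0.4]],           # transitions if dispatching
     [[0.9, 0.1], [0.5, 0.5]]]           # transitions if holding
r = [[4.0, 2.0],                         # rewards if dispatching
     [1.0, 3.0]]                         # rewards if holding
g, h, policy = relative_value_iteration(P, r)
```

    For this toy instance the iteration converges to the policy that dispatches when idle and holds when busy, with the corresponding long-run average reward as the gain.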

  13. Decision Optimization for Power Grid Operating Conditions with High- and Low-Voltage Parallel Loops

    Directory of Open Access Journals (Sweden)

    Dong Yang

    2017-05-01

    With the development of higher-voltage power grids, high- and low-voltage parallel loops are emerging, which lead to energy losses and even threaten the security and stability of power systems. The multi-infeed high-voltage direct current (HVDC) configurations widely appearing in AC/DC interconnected power systems make this situation even worse. Aimed at energy saving and system security, a decision optimization method for power grid operating conditions with high- and low-voltage parallel loops is proposed in this paper. Firstly, considering hub substation distribution and power grid structure, parallel loop opening schemes are generated with the GN (Girvan-Newman) algorithm. Then, candidate opening schemes are preliminarily selected from all the generated schemes based on a filtering index. Finally, taking into consideration the influence on power system security, stability, and operation economy, an evaluation model for candidate opening schemes is established based on the analytic hierarchy process (AHP), and a fuzzy evaluation algorithm is used to find the optimal scheme. Simulation results for a New England 39-bus system and an actual power system validate the effectiveness and superiority of the proposed method.

  14. Application of Bayesian statistical decision theory for a maintenance optimization problem

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1997-01-01

    Reliability-centered maintenance (RCM) is a rational approach that can be used to identify the equipment of facilities that may turn out to be critical with respect to safety, to availability, or to maintenance costs. It is for these critical pieces of equipment alone that a corrective (one waits for a failure) or preventive (the type and frequency are specified) maintenance policy is established. But this approach has limitations: when there is little operating feedback and it concerns rare events affecting a piece of equipment judged critical on a priori grounds (how is it possible, in this case, to decide whether or not it is critical, given the conflict between the gravity of the potential failure and its frequency?); and when the aim is to propose an optimal maintenance frequency for a critical piece of equipment, since changing the maintenance frequency hitherto applied may cause a significant drift in the observed reliability of the equipment, an aspect not generally taken into account in the RCM approach. In these two situations, expert judgments can be combined with the available operating feedback (Bayesian approach), and the combination of the risk of failure and its economic consequences can be taken into account (statistical decision theory), to achieve a true optimization of maintenance policy choices. This paper presents an application to the maintenance of diesel generator components.
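
    The Bayesian ingredient, combining an expert-judgment prior with sparse operating feedback and then minimizing an expected cost, can be sketched as follows. The conjugate Gamma-Poisson update is standard; the age-replacement cost criterion and all numbers are illustrative assumptions, not the paper's model:

```python
import math

def posterior_mean_rate(a0, b0, failures, hours):
    """Conjugate Gamma-Poisson update: a Gamma(a0, b0) prior on the failure
    rate encodes expert judgment; operating feedback (failure count and
    exposure hours) updates it. Returns the posterior mean rate per hour."""
    return (a0 + failures) / (b0 + hours)

def cost_per_hour(T, eta, beta, c_prev, c_fail):
    """Crude age-replacement criterion: one preventive action every T hours
    plus the failure cost weighted by the Weibull probability of failing
    before T. Ignores renewal-process details; purely illustrative."""
    F = 1.0 - math.exp(-((T / eta) ** beta))
    return (c_prev + c_fail * F) / T

# Hypothetical numbers: a prior worth "2 failures in 10000 h" of expert
# judgment, updated with feedback of 1 failure in 30000 h of operation.
rate = posterior_mean_rate(a0=2.0, b0=10000.0, failures=1, hours=30000.0)
eta = 1.0 / rate                      # characteristic life from posterior mean
candidates = [2000.0, 5000.0, 10000.0, 20000.0]
best = min(candidates, key=lambda T: cost_per_hour(T, eta, 2.0, 50.0, 5000.0))
```

    The chosen interval balances the fixed cost of frequent preventive actions against the growing failure risk of long intervals.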

  15. Optimal management of adults with pharyngitis – a multi-criteria decision analysis

    Directory of Open Access Journals (Sweden)

    Dolan James G

    2006-03-01

    Full Text Available Abstract Background Current practice guidelines offer different management recommendations for adults presenting with a sore throat. The key issue is the extent to which the clinical likelihood of a Group A streptococcal infection should affect patient management decisions. To help resolve this issue, we conducted a multi-criteria decision analysis using the Analytic Hierarchy Process. Methods We defined optimal patient management using four criteria: 1) reduce symptom duration; 2) prevent infectious complications, local and systemic; 3) minimize antibiotic side effects, minor and anaphylaxis; and 4) achieve prudent use of antibiotics, avoiding both over-use and under-use. In our baseline analysis we assumed that all criteria and sub-criteria were equally important except minimizing anaphylactic side effects, which was judged very strongly more important than minimizing minor side effects. Management strategies included: a) no test, no treatment; b) perform a rapid strep test and treat if positive; c) perform a throat culture and treat if positive; d) perform a rapid strep test and treat if positive, and if negative obtain a throat culture and treat if positive; and e) treat without further tests. We defined four scenarios based on the likelihood of group A streptococcal infection using the Centor score, a well-validated clinical index. Published data were used to estimate the likelihoods of clinical outcomes and the test operating characteristics of the rapid strep test and throat culture for identifying group A streptococcal infections. Results Using the baseline assumptions, no testing and no treatment is preferred for patients with Centor scores of 1; two strategies – culture and treat if positive, and rapid strep with culture of negative results – are equally preferable for patients with Centor scores of 2; and rapid strep with culture of negative results is the best management strategy for patients with Centor scores of 3 or 4. These results are

  16. Modeling Ignition of a Heptane Isomer: Improved Thermodynamics, Reaction Pathways, Kinetic, and Rate Rule Optimizations for 2-Methylhexane

    KAUST Repository

    Mohamed, Samah; Cai, Liming; Khaled, Fathi; Banyon, Colin; Wang, Zhandong; Rachidi, Mariam El; Pitsch, Heinz; Curran, Henry J.; Farooq, Aamir; Sarathy, Mani

    2016-01-01

    Accurate chemical kinetic combustion models of lightly branched alkanes (e.g., 2-methylalkanes) are important to investigate the combustion behavior of real fuels. Improving the fidelity of existing kinetic models is a necessity, as new experiments and advanced theories show inaccuracies in certain portions of the models. This study focuses on updating thermodynamic data and the kinetic reaction mechanism for a gasoline surrogate component, 2-methylhexane, based on recently published thermodynamic group values and rate rules derived from quantum calculations and experiments. Alternative pathways for the isomerization of peroxy-alkylhydroperoxide (OOQOOH) radicals are also investigated. The effects of these updates are compared against new high-pressure shock tube and rapid compression machine ignition delay measurements. It is shown that rate constant modifications are required to improve agreement between kinetic modeling simulations and experimental data. We further demonstrate the ability to optimize the kinetic model using both manual and automated techniques for rate parameter tunings to improve agreement with the measured ignition delay time data. Finally, additional low temperature chain branching reaction pathways are shown to improve the model’s performance. The present approach to model development provides better performance across extended operating conditions while also strengthening the fundamental basis of the model.
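Rate-rule updates of the kind described above are usually expressed in modified Arrhenius form; the sketch below evaluates k(T) and shows the effect of tuning the pre-exponential factor. All parameter values are placeholders, not values from the 2-methylhexane model:

```python
# Modified Arrhenius evaluation, the form in which rate-rule
# updates are typically expressed. Parameters here are generic
# placeholders, not values from the 2-methylhexane mechanism.
from math import exp

R_CAL = 1.987204  # gas constant, cal/(mol*K)

def arrhenius(T, A, n, Ea):
    """k(T) = A * T^n * exp(-Ea / (R*T)), with Ea in cal/mol."""
    return A * T ** n * exp(-Ea / (R_CAL * T))

k_base = arrhenius(800.0, A=1.0e12, n=0.0, Ea=30000.0)
k_tuned = arrhenius(800.0, A=2.0e12, n=0.0, Ea=30000.0)  # A doubled
```

Manual or automated tuning within rate-constant uncertainty bounds amounts to scaling parameters like A this way and re-simulating the ignition delay targets.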

  17. Modeling Ignition of a Heptane Isomer: Improved Thermodynamics, Reaction Pathways, Kinetic, and Rate Rule Optimizations for 2-Methylhexane

    KAUST Repository

    Mohamed, Samah

    2016-03-21

    Accurate chemical kinetic combustion models of lightly branched alkanes (e.g., 2-methylalkanes) are important to investigate the combustion behavior of real fuels. Improving the fidelity of existing kinetic models is a necessity, as new experiments and advanced theories show inaccuracies in certain portions of the models. This study focuses on updating thermodynamic data and the kinetic reaction mechanism for a gasoline surrogate component, 2-methylhexane, based on recently published thermodynamic group values and rate rules derived from quantum calculations and experiments. Alternative pathways for the isomerization of peroxy-alkylhydroperoxide (OOQOOH) radicals are also investigated. The effects of these updates are compared against new high-pressure shock tube and rapid compression machine ignition delay measurements. It is shown that rate constant modifications are required to improve agreement between kinetic modeling simulations and experimental data. We further demonstrate the ability to optimize the kinetic model using both manual and automated techniques for rate parameter tunings to improve agreement with the measured ignition delay time data. Finally, additional low temperature chain branching reaction pathways are shown to improve the model’s performance. The present approach to model development provides better performance across extended operating conditions while also strengthening the fundamental basis of the model.

  18. Judicious management of uncertain risks : II. Simple rules and more intricate models for precautionary decision-making

    NARCIS (Netherlands)

    Vlek, Charles

    2010-01-01

    Rational decision theory could be more fully exploited for the prudent management of uncertain-risk situations. After an integrative circumscription of the precautionary principle (PP), 10 key issues are discussed covering assessment, decision and control. In view of this, a variety of

  19. Can decision rules simulate carbon allocation for years with contrasting and extreme weather conditions? A case study for three temperate beech forests

    DEFF Research Database (Denmark)

    Campioli, Matteo; Verbeeck, Hans; Van den Bossche, Joris

    2013-01-01

    The allocation of carbohydrates to different tree processes and organs is crucial to understand the overall carbon (C) cycling rate in forest ecosystems. Decision rules (DR) (e.g. functional balances and source-sink relationships) are widely used to model C allocation in forests. However, standard...... allocation and wood growth at beech stands with contrasting climate and standing stock. However, the allocation model required high quality GPP input and errors (even modest) in GPP resulted in large errors in the growth of the tree organs lowest in the modelled sink hierarchy (woody organs). The ability...

  20. The participative construction of the criminal decision in democratic states ruled by the law: the guaranty of participation of the parties, through confrontation, in the composition of a fair and legitimate decision

    Directory of Open Access Journals (Sweden)

    Flávio da Silva Andrade

    2017-10-01

    Full Text Available This article concerns a topic that is not new but remains current: the participatory construction of the criminal decision in a democratic state ruled by law. Starting from the concepts of the rule of law, of guarantism and of democracy, it seeks to renew the importance of the equal and dialectical participation of the parties, through the adversarial system, in the composition of a fair and legitimate criminal judicial decision. It is argued, from this perspective, that the parties should take the role of protagonists in the procedural scenario, since the decision should be built in a participatory way, i.e., based on the arguments and evidence presented, thus reducing the gaps that favor judicial discretion and decisionism. It is proposed, therefore, that the solution to the concrete case (acceptance or dismissal of the information or indictment, grant or rejection of a criminal precautionary measure, conviction or acquittal) should be elaborated with support from the contribution of the litigants, from the contrast of their arguments and of the evidence produced, in adversarial proceedings, in the regular course of the process.

  1. 20 CFR 655.845 - What rules apply to appeal of the decision of the administrative law judge?

    Science.gov (United States)

    2010-04-01

    ... ADMINISTRATION, DEPARTMENT OF LABOR TEMPORARY EMPLOYMENT OF FOREIGN WORKERS IN THE UNITED STATES Enforcement of H... appeal of the decision of the administrative law judge? (a) The Administrator or any interested party...

  2. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of tree) and decision trees (which allow arbitrary level of accuracy). We study

  3. The Preliminary Ruling Decision in the Case of Google vs. Louis Vuitton Concerning the AdWord Service and its Impact on the Community Law

    Directory of Open Access Journals (Sweden)

    Tomáš Gongol

    2013-02-01

    Full Text Available After entering keywords, the internet user obtains two kinds of search results – natural and sponsored ones. The following paper deals with the issue of using keywords which correspond to trademarks registered by a third party for advertising purposes through internet search portals such as Google, Yahoo, Bing, Seznam, Centrum, etc. (in principle, web search portals). The objective of this article is to analyze decided cases dealing with the AdWords service issued by the Court of Justice of the European Union and to compare them with the approach taken in similar disputes in the U.S. Within this knowledge it is necessary to determine the impact of these decisions on further national court decisions of European Union member states. Moreover, the legal impact on copyright law and the responsibility of internet search engines is also deduced. The method of analysis of court decisions is used, and the method of legal comparison is applied to the different approaches in similar cases. Where a third party uses a sign which is identical with the trademark in relation to goods or services identical with those for which the mark is registered, the trademark proprietor is allowed to prohibit such use if it is liable to affect one of the functions of the mark (particularly the function of indicating origin). Regarding the liability of the internet search engine itself, decisions of the courts in matters of internet search engines in the European Union vary from state to state. Whereas the German courts currently tend to assess the responsibility for the outcome of the search engines more liberally, the French courts are often more stringent. Different, indeed much more liberal, is the approach of the U.S. courts to this issue. The preliminary ruling decision in the case of Louis Vuitton Malletier SA vs. Google, Inc. and Community practice in further cases follow the similarly liberal decisions of the courts of the U.S.

  4. Simple Decision-Analytic Functions of the AUC for Ruling Out a Risk Prediction Model and an Added Predictor.

    Science.gov (United States)

    Baker, Stuart G

    2018-02-01

    When using risk prediction models, an important consideration is weighing performance against the cost (monetary and harms) of ascertaining predictors. The minimum test tradeoff (MTT) for ruling out a model is the minimum number of all-predictor ascertainments per correct prediction needed to yield a positive overall expected utility. The MTT for ruling out an added predictor is the minimum number of added-predictor ascertainments per correct prediction needed to yield a positive overall expected utility. An approximation to the MTT for ruling out a model is 1 / [P · H(AUC_Model)], where H(AUC) = AUC − {½ (1 − AUC)}^½, AUC is the area under the receiver operating characteristic (ROC) curve, and P is the probability of the predicted event in the target population. An approximation to the MTT for ruling out an added predictor is 1 / [P · {H(AUC_Model2) − H(AUC_Model1)}], where Model 2 includes an added predictor relative to Model 1. The latter approximation requires the Tangent Condition: the true positive rate at the point on the ROC curve with a slope of 1 must be larger for Model 2 than for Model 1. These approximations are suitable for back-of-the-envelope calculations. For example, in a study predicting the risk of invasive breast cancer, Model 2 adds to the predictors in Model 1 a set of 7 single nucleotide polymorphisms (SNPs). Based on the AUCs and the Tangent Condition, an MTT of 7200 was computed, which indicates that 7200 sets of SNPs would need to be ascertained for every correct prediction of breast cancer to yield a positive overall expected utility. If ascertaining the SNPs costs $500, this MTT suggests that SNP ascertainment is not likely worthwhile for this risk prediction.
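The two approximations can be checked with a quick back-of-the-envelope computation; the event probability and AUC values below are illustrative, not the study's:

```python
# Back-of-the-envelope minimum test tradeoff (MTT) from the AUC,
# following the approximations quoted above. The event probability
# and AUC values are illustrative, not taken from the study.
from math import sqrt

def H(auc):
    """H(AUC) = AUC - {1/2 * (1 - AUC)}^(1/2)."""
    return auc - sqrt(0.5 * (1.0 - auc))

def mtt_rule_out_model(p_event, auc):
    """Approximate MTT for ruling out a whole model."""
    return 1.0 / (p_event * H(auc))

def mtt_rule_out_added(p_event, auc1, auc2):
    """Approximate MTT for an added predictor (Model 2 vs Model 1);
    valid only when the Tangent Condition holds."""
    return 1.0 / (p_event * (H(auc2) - H(auc1)))

mtt = mtt_rule_out_model(p_event=0.05, auc=0.80)
mtt_added = mtt_rule_out_added(p_event=0.05, auc1=0.80, auc2=0.85)
```

Both quantities grow quickly as the event probability or the AUC gain shrinks, which is what drives conclusions like the SNP example above.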

  5. A Nonlinear Programming and Artificial Neural Network Approach for Optimizing the Performance of a Job Dispatching Rule in a Wafer Fabrication Factory

    Directory of Open Access Journals (Sweden)

    Toly Chen

    2012-01-01

    Full Text Available A nonlinear programming and artificial neural network approach is presented in this study to optimize the performance of a job dispatching rule in a wafer fabrication factory. The proposed methodology fuses two existing rules and constructs a nonlinear programming model to choose the best values of the parameters in the two rules by dynamically maximizing the standard deviation of the slack, which several studies have shown to benefit scheduling performance. In addition, a more effective approach is applied to estimate the remaining cycle time of a job, which is empirically shown to be conducive to scheduling performance. The efficacy of the proposed methodology was validated with a simulated case, and evidence was found to support its effectiveness. We also suggest several directions in which it can be exploited in the future.

  6. Optimal Detection under the Restricted Bayesian Criterion

    Directory of Open Access Journals (Sweden)

    Shujun Liu

    2017-07-01

    Full Text Available This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is imposed: the maximum conditional risk cannot be greater than a predefined value. The objective of this paper is therefore to find the optimal decision rule minimizing the Bayes risk under this constraint. By applying Lagrange duality, the constrained optimization problem is transformed into an unconstrained one. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined constraint value is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, numerical results, including a detection example, are presented and agree with the theoretical results.
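The "modified prior" characterization of the restricted Bayes rule can be sketched numerically for a simple binary problem: the Bayes rule under any prior is a likelihood-ratio threshold test, so one can search over modified priors for the one that minimizes the nominal Bayes risk while keeping the maximum conditional risk below the constraint. The Gaussian likelihoods and all numbers here are illustrative assumptions, not from the paper:

```python
# Restricted Bayes rule via a modified prior, for N(0,1) vs N(2,1)
# under 0-1 loss. The Bayes rule under prior w = P(H1) reduces to a
# threshold test on x; we grid-search for the modified prior whose
# conditional risks satisfy max(R0, R1) <= alpha while minimizing
# the Bayes risk under the nominal prior. Illustrative sketch only.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def risks(w, mu0=0.0, mu1=2.0):
    """Conditional error probabilities of the Bayes test under
    prior w = P(H1): decide H1 when x exceeds threshold t."""
    t = (mu0 + mu1) / 2.0 + math.log((1.0 - w) / w) / (mu1 - mu0)
    r0 = 1.0 - norm_cdf(t - mu0)   # false alarm: decide H1 under H0
    r1 = norm_cdf(t - mu1)         # miss: decide H0 under H1
    return r0, r1

def restricted_bayes(prior1, alpha, grid=2000):
    """Modified prior w minimizing the nominal Bayes risk subject
    to max conditional risk <= alpha."""
    best = None
    for i in range(1, grid):
        w = i / grid
        r0, r1 = risks(w)
        if max(r0, r1) <= alpha:
            bayes_risk = (1.0 - prior1) * r0 + prior1 * r1
            if best is None or bayes_risk < best[0]:
                best = (bayes_risk, w)
    return best

risk, w_star = restricted_bayes(prior1=0.2, alpha=0.2)
```

With a nominal prior of 0.2, the unconstrained Bayes rule violates the 0.2 cap on the miss probability, so the search shifts the modified prior toward 0.5, exactly the mechanism the abstract describes.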

  7. A multi attribute decision making method for selection of optimal assembly line

    Directory of Open Access Journals (Sweden)

    B. Vijaya Ramnath

    2011-01-01

    Full Text Available With globalization, sweeping technological development, and increasing competition, customers are placing greater demands on manufacturers for higher quality, more flexibility, on-time delivery of products, and lower cost. Therefore, manufacturers must develop and maintain a high degree of coherence among competitive priorities, order winning criteria and improvement activities. Thus, production managers are attempting to transform their organizations by adopting familiar and beneficial management philosophies such as cellular manufacturing (CM), lean manufacturing (LM), green manufacturing (GM), total quality management (TQM), agile manufacturing (AM), and just-in-time manufacturing (JIT). The main objective of this paper is to propose an optimal assembly method for an engine manufacturer's assembly line in India. Currently, the Indian manufacturer follows a traditional assembly method where the raw materials for assembly are kept along the sides of the conveyor line. This consumes more floor space, more work-in-process inventory, and more operator walking time and walking distance per day. In order to reduce the above-mentioned wastes, lean kitting assembly is suggested by some managers. Another group of managers suggests JIT assembly, as it incurs very low inventory cost compared to other types of assembly processes. Hence, a multi-attribute decision-making model, namely the analytic hierarchy process (AHP), is applied to analyse the alternative assembly methods based on various important factors.
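The core AHP computation is the priority-weight vector, i.e., the principal eigenvector of a pairwise-comparison matrix. A minimal power-iteration sketch follows, with hypothetical Saaty-scale judgments for the three assembly methods (not the paper's actual comparison data):

```python
# Minimal AHP priority weights: the principal eigenvector of a
# pairwise-comparison matrix, computed by power iteration.
# The 3x3 matrix compares three assembly methods (traditional,
# lean kitting, JIT) on one criterion; judgments are hypothetical.

def ahp_weights(M, iters=100):
    """Return the normalized principal eigenvector of M."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalize each iteration
    return w

# Saaty-scale reciprocal judgments (hypothetical): lean kitting
# moderately preferred (3) to traditional, JIT strongly (5), etc.
M = [
    [1.0, 1 / 3.0, 1 / 5.0],   # traditional
    [3.0, 1.0,     1 / 2.0],   # lean kitting
    [5.0, 2.0,     1.0],       # JIT
]
weights = ahp_weights(M)
```

In a full AHP, such weight vectors are computed per criterion and combined with criterion weights; a consistency-ratio check on each matrix is also standard.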

  8. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making an optimal slot purchase and fleet planning decision throughout the long-term planning horizon.

  9. Research on the robust optimization of the enterprise's decision on the investment to the collaborative innovation: Under the risk constraints

    International Nuclear Information System (INIS)

    Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei

    2016-01-01

    Abstract: The robust optimization model is applied to analyze the enterprise's decision on the investment portfolio for collaborative innovation under risk constraints. Through mathematical model deduction and simulation analysis, the research results show that the enterprise's investment in collaborative innovation has a relatively obvious robust effect. As for collaborative innovation, the return from the investment coexists with its risk. Under risk constraints, the robust optimization method can solve for the minimum risk as well as the proportion of each investment scheme in the portfolio for different target returns on investment. On the basis of this result, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.

  10. Decision analysis to define the optimal management of athletes with anomalous aortic origin of a coronary artery.

    Science.gov (United States)

    Mery, Carlos M; Lopez, Keila N; Molossi, Silvana; Sexson-Tejtel, S Kristen; Krishnamurthy, Rajesh; McKenzie, E Dean; Fraser, Charles D; Cantor, Scott B

    2016-11-01

    The goal of this study was to use decision analysis to evaluate the impact of varying uncertainties on the outcomes of patients with anomalous aortic origin of a coronary artery. Two separate decision analysis models were created: one for anomalous left coronary artery (ALCA) and one for anomalous right coronary artery (ARCA). Three strategies were compared: observation, exercise restriction, and surgery. Probabilities and health utilities were estimated on the basis of the existing literature. Deterministic and probabilistic sensitivity analyses were performed. Surgery was the optimal management strategy for patients; management in anomalous aortic origin of a coronary artery depends on multiple factors, including individual patient characteristics. Decision analysis provides a tool to understand how these characteristics affect the outcomes with each management strategy and thus may aid in the decision-making process for a particular patient. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  11. A Rational Decision Maker with Ordinal Utility under Uncertainty: Optimism and Pessimism

    OpenAIRE

    Han, Ji

    2009-01-01

    In game theory and artificial intelligence, decision making models often involve maximizing expected utility, which does not respect ordinal invariance. In this paper, the author discusses the possibility of preserving ordinal invariance and still making a rational decision under uncertainty.

  12. Optimization of multi-reservoir operation with a new hedging rule: application of fuzzy set theory and NSGA-II

    Science.gov (United States)

    Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad

    2017-10-01

    Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factor changes immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, so that the rationing factor changes gradually within this zone. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm (NSGA-II) for calculation of the modified shortage index of two objective functions involving the water supply of minimum flow and agricultural demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran was considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent in comparison with the simple hedging rule; these results demonstrate that the fuzzification of hedging factors increases the applicability and efficiency of the new hedging rule in comparison with the conventional rule curve for mitigating the water shortage problem.
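The fuzzy transition band around a rule curve can be sketched as a linear interpolation of the rationing factor; all numbers below (rule curve level, band width, rationing factors) are illustrative, not from the Zohre system:

```python
# Zone-based hedging with a fuzzy (linear) transition band around a
# rule curve: instead of jumping at the zone boundary, the rationing
# factor varies gradually inside the band. Numbers are illustrative.

def hedged_release(storage, demand, curve, band, k_low, k_high=1.0):
    """Return the hedged release.
    Below (curve - band): full rationing factor k_low applies.
    Above (curve + band): full demand is released (factor k_high).
    Inside the band: linear interpolation between the two factors."""
    lo, hi = curve - band, curve + band
    if storage <= lo:
        k = k_low
    elif storage >= hi:
        k = k_high
    else:
        k = k_low + (k_high - k_low) * (storage - lo) / (hi - lo)
    return k * demand

# On the rule curve itself, the factor is midway between k_low and 1.
r_mid = hedged_release(storage=100.0, demand=10.0, curve=100.0,
                       band=20.0, k_low=0.6)
```

A conventional (crisp) rule is recovered by setting the band width to zero, which makes the factor jump at the curve.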

  13. Driver's various information process and multi-ruled decision-making mechanism: a fundamental of intelligent driving shaping model

    Directory of Open Access Journals (Sweden)

    Wuhong Wang

    2011-05-01

    Full Text Available The most difficult but important problem in advanced driver assistance system development is how to measure and model the behavioral response of drivers with a focus on the cognition process. This paper describes drivers' deceleration and acceleration behavior based on driving situation awareness in the car-following process, and then presents several driving models for the analysis of drivers' safe approaching behavior in traffic operation. The emphasis of our work is placed on research into drivers' various information processing and multi-ruled decision-making mechanisms, considering the complicated control process of driving; the results will provide a theoretical basis for an intelligent driving shaping model.

  14. Building uncertainty into cost-effectiveness rankings: portfolio risk-return tradeoffs and implications for decision rules.

    Science.gov (United States)

    O'Brien, B J; Sculpher, M J

    2000-05-01

    Current principles of cost-effectiveness analysis emphasize the rank ordering of programs by expected economic return (eg, quality-adjusted life-years gained per dollar expended). This criterion ignores the variance associated with the cost-effectiveness of a program, yet variance is a common measure of risk when financial investment options are appraised. Variation in health care program return is likely to be a criterion of program selection for health care managers with fixed budgets and outcome performance targets. Characterizing health care resource allocation as a risky investment problem, we show how concepts of portfolio analysis from financial economics can be adopted as a conceptual framework for presenting cost-effectiveness data from multiple programs as mean-variance data. Two specific propositions emerge: (1) the current convention of ranking programs by expected return is a special case of the portfolio selection problem in which the decision maker is assumed to be indifferent to risk, and (2) for risk-averse decision makers, the degree of joint risk or covariation in cost-effectiveness between programs will create incentives to diversify an investment portfolio. The conventional normative assumption of risk neutrality for social-level public investment decisions does not apply to a large number of health care resource allocation decisions in which health care managers seek to maximize returns subject to budget constraints and performance targets. Portfolio theory offers a useful framework for studying mean-variance tradeoffs in cost-effectiveness and offers some positive predictions (and explanations) of actual decision making in the health care sector.
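The portfolio mean-variance framing above can be made concrete: for an allocation across two programs, the portfolio variance depends on the covariance term, which is what creates the diversification incentive for risk-averse decision makers. The returns and covariances below are hypothetical:

```python
# Mean-variance view of a two-program health "portfolio": portfolio
# mean is the weighted mean return, and the covariance term drives
# the diversification effect described above. Returns (e.g., QALYs
# gained per $1000) and covariances are hypothetical.

def portfolio_mean_var(w, mu, cov):
    """Mean and variance of portfolio return for weights w."""
    n = len(w)
    mean = sum(w[i] * mu[i] for i in range(n))
    var = sum(w[i] * w[j] * cov[i][j]
              for i in range(n) for j in range(n))
    return mean, var

mu = [0.05, 0.04]                 # expected returns of programs A, B
cov = [[0.010, -0.002],           # negative covariance between the
       [-0.002, 0.008]]           # programs rewards diversification
m_all_a, v_all_a = portfolio_mean_var([1.0, 0.0], mu, cov)
m_mix, v_mix = portfolio_mean_var([0.5, 0.5], mu, cov)
```

The 50/50 split gives up a little expected return relative to funding program A alone but cuts the variance substantially, which is exactly the tradeoff a risk-averse budget holder would weigh.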

  15. A Joint Optimal Decision on Shipment Size and Carbon Reduction under Direct Shipment and Peddling Distribution Strategies

    Directory of Open Access Journals (Sweden)

    Daiki Min

    2017-11-01

    Full Text Available Recently, much research has focused on lowering carbon emissions in logistics. This paper contributes to the literature on joint shipment size and carbon reduction decisions by developing novel models for distribution systems under direct shipment and peddling distribution strategies. Unlike the literature, which has simply investigated the effects of carbon costs on operational decisions, we address how to reduce carbon emissions and logistics costs by adjusting shipment size and making an optimal decision on carbon reduction investment. The optimal decision is made by analyzing the distribution cost, including not only logistics and carbon trading costs but also the cost of adjusting carbon emission factors. No previous research has explicitly considered the two sources of carbon emissions; we develop a model covering the difference in managing carbon emissions from transportation and storage. Structural analysis shows how to determine the optimal shipment size and emission factors in closed form. Moreover, we analytically prove the possibility of reducing the distribution cost and carbon emissions at the same time. Numerical analysis validates the results and demonstrates some interesting findings on carbon and distribution cost reduction.
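The shipment-size tradeoff with carbon costs can be illustrated with a textbook EOQ-style square-root rule in which the carbon price enters both the per-shipment (transportation) and per-unit-held (storage) cost components; this is a generic sketch, not the authors' model, and all numbers are hypothetical:

```python
# EOQ-style sketch of the shipment-size tradeoff with carbon costs:
# a fixed (dispatch + carbon) cost per shipment versus a (holding +
# carbon) cost per unit held. Textbook-style illustration of the
# tradeoff analyzed in the paper, not the authors' actual model.
from math import sqrt

def optimal_shipment_size(demand, fixed_cost, carbon_per_trip,
                          holding_cost, carbon_per_unit_held,
                          carbon_price):
    """Square-root rule with carbon emissions priced into both the
    per-shipment and per-unit-held cost components."""
    per_trip = fixed_cost + carbon_price * carbon_per_trip
    per_unit = holding_cost + carbon_price * carbon_per_unit_held
    return sqrt(2.0 * demand * per_trip / per_unit)

q = optimal_shipment_size(demand=1000.0, fixed_cost=50.0,
                          carbon_per_trip=10.0, holding_cost=2.0,
                          carbon_per_unit_held=0.5, carbon_price=1.0)
```

Raising the carbon price on transportation pushes the optimal shipment size up (fewer trips), while pricing storage emissions pushes it down, which is the kind of interaction the closed-form analysis in the paper resolves.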

  16. A Fuzzy Max–Min Decision Bi-Level Fuzzy Programming Model for Water Resources Optimization Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Chongfeng Ren

    2018-04-01

    Full Text Available Water competing conflicts among water-competing sectors from different levels should be taken into consideration during the optimal allocation of water resources. Furthermore, uncertainties are inevitable in the optimal allocation of water resources. In order to deal with the above problems, this study developed a fuzzy max–min decision bi-level fuzzy programming model. The developed model was then applied to a case study in Wuwei, Gansu Province, China. In this study, the net benefit and yield were regarded as the upper-level and lower-level objectives, respectively. Optimal water resource plans were obtained under different possibility levels of the fuzzy parameters, which could deal effectively with the water competing conflict between the upper and lower levels. The obtained results are expected to help local decision-makers resolve the water competing conflict between the upper and lower levels and achieve the optimal use of water resources under uncertainty.

  17. Using a decision support system to optimize production of agricultural crop residue Biofeedstock

    International Nuclear Information System (INIS)

    Hoskinson, Reed L.; Rope, Ronald C.; Fink, Raymond K.

    2007-01-01

    For several years the Idaho National Laboratory (INL) has been developing a Decision Support System for Agriculture (DSS4Ag) which determines the economically optimum recipe of various fertilizers to apply at each site in a field to produce a crop, based on the existing soil fertility at each site, as well as historic production information and current prices of fertilizers and the forecast market price of the crop at harvest. In support of the growing interest in agricultural crop residues as a bioenergy feedstock, we have extended the capability of the DSS4Ag to develop a variable-rate fertilizer recipe for the simultaneous economically optimum production of both grain and straw. In this paper we report the results of 2 yr of field research testing and enhancing the DSS4Ag's ability to economically optimize the fertilization for the simultaneous production of both grain and its straw, where the straw is an agricultural crop residue that can be used as a biofeedstock. For both years, the DSS4Ag reduced the cost and amount of fertilizers used and increased grower profit, while reducing the biomass produced. The DSS4Ag results show that when a biorefinery infrastructure is in place and growers have a strong market for their straw it is not economically advantageous to increase fertilization in order to try to produce more straw. This suggests that other solutions, such as single-pass selective harvest, must be implemented to meet national goals for the amount of biomass that will be available for collection and use for bioenergy. (author)

  18. RaCon - decision maker's support for RAdiation CONsequences prediction and for crisis management optimization

    International Nuclear Information System (INIS)

    Svanda, J.; Tschiesche, J.; Fiser, V.

    2003-01-01

    The following urgent countermeasures are displayed in a table for every affected settlement: sheltering; issue of potassium iodine tablets; evacuation. The most important results - the list of affected settlements presented in table form - can be ordered by: the number of inhabitants for which the urgent countermeasures should be taken; the values of received effective doses or thyroid equivalent doses; the distance from the nuclear facility at which the accident has happened. Such a clear arrangement of the resulting data can help in fast decision-making. Averted doses are calculated for every applied countermeasure, and their values serve as decision-making support based on the recommendations for application of countermeasures given by Czech regulatory recommendations. Evaluation of proposed countermeasures and a new revised proposal can be carried out based on the results obtained from the previous calculation run, and a recalculation of averted doses can be performed. Simple selection of the route for mobile monitoring and emergency teams on a map presentation of the affected area gives a tool for fast evaluation of dose rates in a specified area and of radiation doses during actions, which can be carried out in different types of vehicles. An open database of shielding coefficients for different types of vehicles makes it possible to enlarge the list of vehicles and the list of different types of protective masks. Automatic data acquisition is allowed - where available - to speed up the inquiry process. Optimization of interventions and countermeasures is in compliance with national regulations issued by the regulatory authority. (author)

  19. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar

    2016-06-01

    Full Text Available A Brayton heat engine model is developed in the MATLAB Simulink environment, and thermodynamic optimization based on finite-time thermodynamic analysis with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto optimal frontier between triple and dual objectives is obtained, and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision-making methods. The triple-objective evolutionary approach applied to the proposed model gives power output, thermal efficiency and ecological function of (53.89 kW, 0.1611, −142 kW), which are 29.78%, 25.86% and 21.13% lower than for the reversible system. Furthermore, the present study shows the effect of various heat capacitance rates and component efficiencies on the three objectives in graphical form. Finally, for error investigation, the average and maximum errors of the obtained results are computed.

  20. A prediction rule for the development of delirium among patients in medical wards: Chi-Square Automatic Interaction Detector (CHAID) decision tree analysis model.

    Science.gov (United States)

    Kobayashi, Daiki; Takahashi, Osamu; Arioka, Hiroko; Koga, Shinichiro; Fukui, Tsuguya

    2013-10-01

    To predict the development of delirium among patients in medical wards using a Chi-Square Automatic Interaction Detector (CHAID) decision tree model. This was a retrospective cohort study of all adult patients admitted to the medical wards of a large community hospital. Patients were randomly assigned to either a derivation or a validation group (2:1) by computed random number generation. Baseline data and clinically relevant factors were collected from the electronic chart. The primary outcome was the development of delirium during hospitalization. All potential predictors were included in a forward stepwise logistic regression model. CHAID decision tree analysis was also performed to build a second prediction model with the same group of patients. Receiver operating characteristic curves were drawn, and the areas under the curves (AUCs) were calculated for both models. In the validation group, these curves and AUCs were calculated based on the rules from the derivation. A total of 3,570 patients were admitted: 2,400 assigned to the derivation group and 1,170 to the validation group, of whom 91 and 51 patients, respectively, developed delirium. Statistically significant predictors in the CHAID decision tree model were delirium history, age, underlying malignancy, and impairment of activities of daily living, resulting in six distinct groups by level of risk. The AUC was 0.82 in derivation and 0.82 in validation with the CHAID model, and 0.78 in derivation and 0.79 in validation with the logistic model. We propose a validated CHAID decision tree prediction model to predict the development of delirium among medical patients. Copyright © 2013 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
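The shape of such a tree is easy to mimic in code. The sketch below hand-codes an illustrative decision-tree-style stratifier over the four significant predictors reported above (delirium history, age, malignancy, ADL impairment); the split points and risk labels are invented for illustration and are not the paper's fitted CHAID tree.

```python
# A hand-coded decision-tree-style risk stratifier using the four CHAID
# predictors reported in the abstract. The thresholds (80, 65) and the
# risk tiers are illustrative only, not the study's fitted splits.

def delirium_risk(history, age, malignancy, adl_impaired):
    """Return an illustrative risk tier from the four predictors."""
    if history:                       # prior delirium dominates the tree
        return "high"
    if age >= 80:
        return "high" if adl_impaired else "moderate"
    if age >= 65:
        if malignancy or adl_impaired:
            return "moderate"
        return "low"
    return "low"
```

CHAID grows such trees automatically by choosing, at each node, the predictor whose chi-square test against the outcome is most significant, merging categories that do not differ significantly.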

  1. A study on the optimal fuel loading pattern design in pressurized water reactors using the artificial neural network and the fuzzy rule based system

    International Nuclear Information System (INIS)

    Kim, Han Gon

    1993-02-01

    In pressurized water reactors, the fuel reloading problem is significant from both safety and economic standpoints. The general problem of in-core fuel management for a PWR therefore consists of determining the fuel reloading policy for each cycle that minimizes the unit energy cost under constraints imposed on various core parameters, e.g., the local power peaking factor and assembly burnup. Equivalently, the cycle length is maximized for a given energy cost under the various constraints. Existing optimization methods do not ensure a globally optimal solution because of essential limitations of their search algorithms; they find only near-optimal solutions. To overcome this limitation, a hybrid artificial neural network system is developed for optimal fuel loading pattern design using a fuzzy rule-based system and artificial neural networks. This system finds patterns for which P_max is lower than a predetermined value and k_eff is larger than a reference value. Back-propagation networks are developed to predict PWR core parameters. The reference PWR is a typical 121-assembly PWR. The local power peaking factor and the effective multiplication factor at the BOC condition are predicted. To obtain target values of these two parameters, the QCC code is used; with this code, 1000 training patterns are generated randomly. Two networks are constructed, one for P_max and another for k_eff. Both networks have 21 input-layer neurons and 18 output-layer neurons, with 120 and 393 hidden-layer neurons, respectively. A new learning algorithm, called the advanced adaptive learning algorithm, is proposed: its weight-change step size is varied inversely proportional to the average difference between the actual output value and the ideal target value. This algorithm greatly enhances the convergence speed of a BPN. In the case of P_max prediction, 98% of the untrained patterns are predicted within 6% error, and in case

  2. End-of-life decisions for extremely low-gestational-age infants: why simple rules for complicated decisions should be avoided.

    Science.gov (United States)

    Dupont-Thibodeau, Amélie; Barrington, Keith J; Farlow, Barbara; Janvier, Annie

    2014-02-01

    Interventions for extremely preterm infants bring up many ethical questions. Guidelines for intervention in the "periviable" period generally divide infants using predefined categories, such as "futile," "beneficial," and "gray zone" based on completed 7-day periods of gestation; however, such definitions often differ among countries. The ethical justification for using gestational age as the determination of the category boundaries is rarely discussed. Rational criteria used to make decisions regarding life-sustaining interventions must incorporate other important prognostic information. Precise guidelines based on imprecise data are not rational. Gestational age-based guidelines include an implicit judgment of what is deemed to be an unacceptably poor chance of "intact" survival but fail to explore the determination of acceptability. Furthermore, unclear definitions of severe disability, the difficulty, or impossibility, of accurately predicting outcome in the prenatal or immediate postnatal period make such simplistic formulae inappropriate. Similarly, if guidelines for intervention for the newborn are based on the "qualitative futility" of survival, it should be explicitly stated and justified according to established ethical guidelines. They should discuss whether newborn infants are morally different to older individuals or explain why thresholds recommended for intervention are different to recommendations for those in older persons. The aim should be to establish individualized goals of care with families while recognizing uncertainty, rather than acting on labels derived from gestational age categories alone. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Determination of optimal pollution levels through multiple-criteria decision making: an application to the Spanish electricity sector

    International Nuclear Information System (INIS)

    Linares, P.

    1999-01-01

    An efficient pollution management requires the harmonisation of often conflicting economic and environmental aspects. A compromise has to be found in which social welfare is maximised. The determination of this social optimum has been attempted with different tools, of which the most correct according to neo-classical economics may be the one based on the economic valuation of the externalities of pollution. However, this approach is still controversial, and few decision makers trust the results enough to apply them. A very powerful alternative exists, which avoids the problem of monetising physical impacts: multiple-criteria decision making provides methodologies for dealing with impacts in different units and for incorporating the preferences of decision makers or society as a whole, thus allowing the determination of social optima under heterogeneous criteria, which is usually the case in pollution management decisions. In this paper, a compromise programming model is presented for the determination of the optimal pollution levels of the electricity industry in Spain for carbon dioxide, sulphur dioxide, nitrogen oxides, and radioactive waste. The preferences of several sectors of society are incorporated explicitly into the model, so that the solution obtained represents the optimal pollution level from a social point of view. Results show that cost minimisation is still the main objective for society, but the simultaneous consideration of the remaining criteria achieves large pollution reductions at a small cost increment. (Author)
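Compromise programming itself is compact: score each alternative by a weighted L_p distance to the ideal point and pick the closest. The sketch below is generic pure Python, with every criterion treated as smaller-is-better (e.g. cost, emissions); the data in the usage are invented, not the Spanish electricity model.

```python
# Minimal compromise-programming sketch: choose the alternative closest
# to the ideal point under a weighted, range-normalized L_p distance.
# Criteria are assumed 'smaller is better'; all data are illustrative.

def lp_distance(values, ideal, anti_ideal, weights, p=2):
    """Weighted L_p distance of one alternative to the ideal point,
    each criterion normalized by its ideal/anti-ideal range."""
    total = 0.0
    for v, f_star, f_worst, w in zip(values, ideal, anti_ideal, weights):
        d = abs(f_star - v) / abs(f_star - f_worst)   # normalized regret
        total += (w * d) ** p
    return total ** (1.0 / p)

def best_compromise(alternatives, weights, p=2):
    """Pick the alternative (dict name -> criteria list) minimizing the
    distance to the ideal point built from per-criterion minima."""
    cols = list(zip(*alternatives.values()))
    ideal = [min(c) for c in cols]     # best attainable per criterion
    anti = [max(c) for c in cols]      # worst attainable per criterion
    return min(alternatives,
               key=lambda k: lp_distance(alternatives[k], ideal,
                                         anti, weights, p))
```

With p = 1 the criteria trade off linearly; larger p penalizes the worst single criterion more heavily, approaching a min-max rule as p grows.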

  4. Using multi-criteria decision making for selection of the optimal strategy for municipal solid waste management.

    Science.gov (United States)

    Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica

    2016-09-01

    Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who must weigh numerous, often contradictory factors. This paper presents a procedure, based on MCDM methods, for choosing the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia). Two multiple-attribute decision-making methods, SAW (simple additive weighting) and TOPSIS (technique for order preference by similarity to ideal solution), were used to compare the proposed waste management strategies (WMS). Each strategy was simulated using the software package IWM2, and total values of eight chosen parameters were calculated for all strategies, with the contribution of each of the six waste treatment options evaluated. The SAW analysis was used to obtain aggregate characteristics for all the waste management strategies, which were ranked accordingly. The TOPSIS method was used to calculate each alternative's relative closeness to the ideal solution. The proposed strategies were then ranked in tables and diagrams obtained from both MCDM methods. As shown in this paper, the results were in good agreement, which further confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
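TOPSIS, the second of the two methods, fits in a few lines: vector-normalize and weight the decision matrix, locate the ideal and anti-ideal points, and score each alternative by its relative closeness to the ideal. The sketch below is a generic pure-Python implementation; the matrix used in the checks is invented, not the Kragujevac data.

```python
# Minimal TOPSIS sketch. Rows of `matrix` are alternatives, columns are
# criteria; `benefit[j]` is True if criterion j should be maximized.

def topsis(matrix, weights, benefit):
    """Return one relative-closeness score in [0, 1] per alternative."""
    cols = list(zip(*matrix))
    # 1. vector-normalize each criterion column, then apply weights
    norms = [sum(v * v for v in c) ** 0.5 for c in cols]
    weighted = [[w * v / n for v, n, w in zip(row, norms, weights)]
                for row in matrix]
    wcols = list(zip(*weighted))
    # 2. ideal / anti-ideal points depend on criterion direction
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    # 3. closeness = distance-to-worst / (distance-to-best + distance-to-worst)
    scores = []
    for row in weighted:
        d_pos = sum((v - i) ** 2 for v, i in zip(row, ideal)) ** 0.5
        d_neg = sum((v - a) ** 2 for v, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

SAW is simpler still: normalize each column and rank alternatives by the weighted sum of their normalized scores, which is why the two rankings often agree, as they did in this study.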

  5. Optimization as investment decision support in a Swedish medium-sized iron foundry - A move beyond traditional energy auditing

    International Nuclear Information System (INIS)

    Thollander, Patrik; Mardan, Nawzad; Karlsson, Magnus

    2009-01-01

    Due to increased globalisation, industries face greater competition, pressing companies to decrease their expenses in order to increase their profits. Swedish industry, in particular, has faced substantial increases in energy prices in recent years. Barriers to energy efficiency, such as imperfect information, inhibit investments in energy efficiency measures; energy audits are one means of reducing these barriers and overcoming imperfect information. However, an evaluation of such energy audits in Sweden reveals that it is chiefly low-cost measures that are undertaken as a result of an audit. Moreover, these audits often tend to focus on support processes such as ventilation, lighting, air compressors, etc., while measures affecting production processes are often not as extensively covered, which underlines the need for further support in addition to energy audits. Decision support is practised in a variety of disciplines, such as optimization and simulation, and the aim of this paper is to explore whether investment decision support may be used successfully by small and medium-sized manufacturers in Sweden when complex production-related investment decisions are taken. The optimization results from the different cases, involving a foundry's investment in a new melting unit, indicate that with no electricity price fluctuations over the day, the investment seems sound, as it lowers overall energy costs. However, with fluctuating electricity prices, there are no large differences in energy costs between retaining the foundry's existing five melting furnaces and investing in a twin furnace and removing the holding furnaces - which was the foundry's initial investment plan in the study. It would not have been possible to reach this conclusion without investment decision support such as MIND. One of the main conclusions in this paper is that investment decision support, when strategic

  6. COUNCIL DECISIONS ON THE 5-YEARLY REMUNERATION REVIEW, ADJUSTMENTS FOR 2001 AND CHANGES TO THE STAFF RULES AND REGULATIONS

    CERN Document Server

    Human Resources Division

    2001-01-01

    As announced by the Director-General in December last year, Council approved the package of measures concerning the 5-yearly remuneration review recommended by the TREF Restricted Group, as well as the adjustments for 2001 related to salaries and pensions. These measures, summarised below, enter into force on 1 January 2001, subject to later implementation of some items. Related changes to the Staff Rules and Regulations will be published as soon as possible; in the meantime, the changes annexed to the Council Resolution can be viewed on the HR Division Web site. 1. Scale of basic salaries (Annex R A 1 of the Staff Regulations): increased by 4.32% resulting from the 5-yearly Review, and by 0.6% corresponding to the salary adjustment for 2001. This includes the increases in social insurance contributions indicated below. 2. Scale of stipends of Fellows (Annex R A 2 of the Staff Regulations): increased by 1.52% resulting from the 5-yearly Review, and by 0.6% corresponding to the adjustment ...

  7. Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation

    Science.gov (United States)

    Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise

    2011-01-01

    Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.

  8. Optimal Decision-Making in Fuzzy Economic Order Quantity (EOQ Model under Restricted Space: A Non-Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    M. Pattnaik

    2013-08-01

    Full Text Available In this paper the concept of a fuzzy non-linear programming technique is applied to solve an economic order quantity (EOQ) model under restricted space. Since various types of uncertainty and imprecision are inherent in real inventory problems, they are classically modeled using approaches from probability theory. However, there are uncertainties that cannot be appropriately treated by the usual probabilistic models, which raises the questions of how to define inventory optimization tasks in such an environment and how to interpret the optimal solutions. This paper modifies the single-item EOQ model in the presence of a fuzzy decision-making process, where demand is related to the unit price and the setup cost varies with the quantity produced or purchased, and considers the modification of the objective function and storage area in the presence of imprecisely estimated parameters. The model is developed by employing different modeling approaches over an infinite planning horizon. It incorporates the concepts of a fuzzy arithmetic approach to the quantity ordered and the demand per unit, and compares the fuzzy non-linear model with other models. Investigation of the properties of an optimal solution allows an algorithm to be developed, whose validity is illustrated through an example problem; two- and three-dimensional diagrams of the application are produced with MATLAB (R2009a) software. Sensitivity of the optimal solution to changes in different parameter values is also studied to draw managerial insights from the decision problem.
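For orientation, the crisp model that the paper fuzzifies is the textbook EOQ: with annual demand D, setup cost K per order and holding cost h per unit per year, the cost-minimizing order quantity is Q* = sqrt(2DK/h). The sketch below implements only this classical special case; the paper's contribution is replacing the crisp parameters with fuzzy numbers and adding a storage-space constraint.

```python
# Classic crisp EOQ, the special case the fuzzy model generalizes.

import math

def eoq(demand, setup_cost, holding_cost):
    """Order quantity Q* = sqrt(2*D*K/h) minimizing annual cost."""
    return math.sqrt(2.0 * demand * setup_cost / holding_cost)

def annual_cost(q, demand, setup_cost, holding_cost):
    """Annual ordering cost D/q * K plus holding cost h * q / 2."""
    return demand / q * setup_cost + holding_cost * q / 2.0
```

At Q* the two cost components are equal, which is why any deviation from Q* in either direction raises the total annual cost.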

  9. Decision-Related Activity in Macaque V2 for Fine Disparity Discrimination Is Not Compatible with Optimal Linear Readout.

    Science.gov (United States)

    Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje

    2017-01-18

    Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple

  10. Building of Reusable Reverse Logistics Model and its Optimization Considering the Decision of Backorder or Next Arrival of Goods

    Science.gov (United States)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol

    This paper deals with building a reusable reverse-logistics model that considers the decision between a backorder and the next arrival of goods. An optimization method is proposed that minimizes the transportation cost and the volume of backorders or next arrivals of goods arising from just-in-time delivery in the final delivery stage between the manufacturer and the processing center. Sub-optimal delivery routes are determined using a priority-based genetic algorithm and a hybrid genetic algorithm. Based on a case study of a distilling and sales company in Busan, Korea, a new reusable reverse-logistics model for empty bottles is built, and the effectiveness of the proposed method is verified.

  11. Performance improvement of 64-QAM coherent optical communication system by optimizing symbol decision boundary based on support vector machine

    Science.gov (United States)

    Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang

    2018-03-01

    High-order modulation signals are suited to high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For degraded signals whose symbol points overlap on the constellation diagram, the original linear decision boundary can no longer classify symbols correctly, so it is advantageous to create an optimum symbol decision boundary for such signals. In this work, we experimentally demonstrated a 64-quadrature-amplitude-modulation (64-QAM) coherent optical communication system that uses a support vector machine (SVM) decision-boundary algorithm to create the optimum symbol decision boundary and improve system performance. We investigated the influence of various impairments on 64-QAM coherent optical communication systems, such as those caused by modulator nonlinearity, phase skew between the in-phase (I) and quadrature-phase (Q) arms of the modulator, fiber Kerr nonlinearity, and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in back-to-back and 50-km transmission configurations. By using the SVM to optimize the symbol decision boundary, the impairments caused by I/Q phase skew of the modulator, fiber Kerr nonlinearity and ASE noise are greatly mitigated.
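The effect the SVM exploits can be reproduced with a much smaller stand-in: below, an RBF-kernel perceptron (a simpler cousin of a kernel SVM, chosen here because it fits in a few self-contained lines) learns a nonlinear boundary between two noisy "symbol" clusters. The cluster positions, noise level and kernel width gamma are all illustrative; a real receiver would train on the 64 measured constellation points.

```python
# Learning a nonlinear decision boundary between two noisy symbol
# clusters with an RBF-kernel perceptron (a stand-in for the SVM).

import math, random

def rbf(x, y, gamma=2.0):
    """Gaussian (RBF) kernel between two points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(xs, ys, epochs=20, gamma=2.0):
    """ys in {-1, +1}; returns dual coefficients alpha."""
    alpha = [0.0] * len(xs)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(xs, ys)):
            f = sum(a * yy * rbf(xx, x, gamma)
                    for a, xx, yy in zip(alpha, xs, ys))
            if y * f <= 0:            # misclassified -> strengthen point i
                alpha[i] += 1.0
    return alpha

def predict(x, alpha, xs, ys, gamma=2.0):
    f = sum(a * yy * rbf(xx, x, gamma) for a, xx, yy in zip(alpha, xs, ys))
    return 1 if f >= 0 else -1

# two noisy 'symbol' clusters around (0,0) and (1,1)
random.seed(0)
xs = [(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(30)] + \
     [(random.gauss(1, 0.2), random.gauss(1, 0.2)) for _ in range(30)]
ys = [-1] * 30 + [1] * 30
alpha = train_kernel_perceptron(xs, ys)
```

A true kernel SVM would additionally maximize the margin and regularize the dual coefficients; the perceptron shown here only seeks some separating boundary, which is enough to illustrate why a curved boundary outperforms a fixed linear grid on degraded constellations.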

  12. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    Science.gov (United States)

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

    Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of Particle Swarm Optimization (PSO) combined with a C4.5 decision tree (PSO+C4.5) by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on 1 microarray dataset and 5 different medical datasets obtained from the UCI machine learning repository. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a radial basis function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and weighted K-nearest neighbor). Repeated five-fold cross-validation was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves on the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
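The PSO half of the hybrid is standard and easy to sketch. Below is a minimal global-best PSO; a toy sphere function stands in for the paper's real fitness (the cross-validated accuracy of a Boosted C5.0 tree on the candidate solution), and all hyperparameters are illustrative.

```python
# Minimal global-best particle swarm optimization (minimization).
# The sphere function replaces the paper's Boosted C5.0 fitness.

import random

def pso(fitness, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                       # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda p: sum(x * x for x in p))
```

In the hybrid method, each particle encodes candidate parameters, and evaluating `fitness` means training and cross-validating the boosted tree, which is why the choice of fitness function dominates the overall cost.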

  13. Multiple three-decision rules for 2/sup k/-factorial simple effects. Technical report No. 8

    Energy Technology Data Exchange (ETDEWEB)

    Bohrer, R.; Chow, W.; Faith, R.; Joshi, V.M.; Wu, C.F.

    1977-03-22

    One of the problems confronted in the investigation of environmental health, and in biological research more generally, is that organisms often respond to combinations of treatments in ways that are not predictable from the way they respond to each treatment taken separately. One air pollutant may, for example, disable the respiratory tract's self-cleaning mechanisms and thereby increase its sensitivity to the effects of other pollutants. It is important that laboratory experiments be able to simultaneously detect the effects of the various possible combinations of treatments in a way that uses the data as effectively as possible. This paper deals with the optimal use of data from such experiments when the response variable can be regarded (possibly after a suitable transformation) as normally distributed, with mean determined by the combination of treatments undergone.

  14. 2D Decision-Making for Multi-Criteria Design Optimization

    National Research Council Canada - National Science Library

    Engau, A; Wiecek, M. M

    2006-01-01

    The high dimensionality encountered in engineering design optimization due to large numbers of performance criteria and specifications leads to cumbersome and sometimes unachievable tradeoff analyses...

  15. 2D Decision-Making for Multi-Criteria Design Optimization

    National Research Council Canada - National Science Library

    Engau, A; Wiecek, M. M

    2006-01-01

    .... To facilitate those analyses and enhance decision-making and design selection, we propose to decompose the original problem by considering only pairs of criteria at a time, thereby making tradeoff...

  16. Nested algorithms for optimal reservoir operation and their embedding in a decision support platform

    NARCIS (Netherlands)

    Delipetrev, B.

    2016-01-01

    Reservoir operation is a multi-objective optimization problem traditionally solved with dynamic programming (DP) and stochastic dynamic programming (SDP) algorithms. The thesis presents novel algorithms for optimal reservoir operation, named nested DP (nDP), nested SDP (nSDP), nested reinforcement
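The conventional DP that these nested algorithms build on can be sketched in a few lines: discretize storage, step backwards in time, and for each state pick the release minimizing the immediate penalty plus the cost-to-go. The toy instance below (integer storage units, quadratic penalty for deviating from a fixed demand) illustrates plain backward DP only, not the thesis's nDP/nSDP refinements.

```python
# Backward dynamic programming for a toy single-reservoir problem.
# Storage and releases are integers; penalty = (demand - release)^2.

def reservoir_dp(inflows, demand, s_max, max_release):
    """Return (policy, cost): policy[t][s] is the optimal release at
    time t in storage state s; cost[s] is the optimal cost-to-go
    from the first stage."""
    T = len(inflows)
    states = range(s_max + 1)
    cost = [0.0] * (s_max + 1)                     # terminal cost = 0
    policy = [[0] * (s_max + 1) for _ in range(T)]
    for t in reversed(range(T)):
        new_cost = [float("inf")] * (s_max + 1)
        for s in states:
            for r in range(max_release + 1):
                s_next = s + inflows[t] - r        # mass balance
                if not 0 <= s_next <= s_max:
                    continue                       # infeasible transition
                c = (demand - r) ** 2 + cost[s_next]
                if c < new_cost[s]:
                    new_cost[s] = c
                    policy[t][s] = r
        cost = new_cost
    return policy, cost
```

Stochastic DP replaces the single inflow per stage with an expectation over an inflow distribution; the nesting in the thesis concerns how multiple objectives and downstream allocation are handled inside each transition.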

  17. Simple decision rules can reduce reinjury risk by 84% after ACL reconstruction: the Delaware-Oslo ACL cohort study.

    Science.gov (United States)

    Grindem, Hege; Snyder-Mackler, Lynn; Moksnes, Håvard; Engebretsen, Lars; Risberg, May Arna

    2016-07-01

    Knee reinjury after ACL reconstruction is common and increases the risk of osteoarthritis, yet there is sparse evidence to guide return-to-sport (RTS) decisions in this population. The aims were to assess the relationship between knee reinjury after ACL reconstruction and (1) return to level I sports, (2) timing of RTS and (3) knee function prior to return. 106 patients involved in pivoting sports took part in this prospective 2-year cohort study. Sports participation and knee reinjury were recorded monthly. Knee function was assessed with the Knee Outcome Survey-Activities of Daily Living Scale, a global rating scale of function, and quadriceps strength and hop test symmetry. Passing the RTS criteria was defined as scores >90 on all tests, and failure as failing any. Patients who returned to level I sports had a 4.32 times higher reinjury rate (p=0.048) than those who did not. The reinjury rate was reduced by 51% for each month RTS was delayed until 9 months after surgery, after which no further risk reduction was observed. 38.2% of those who failed the RTS criteria suffered reinjuries, versus 5.6% of those who passed (HR 0.16, p=0.075). More symmetrical quadriceps strength prior to return significantly reduced the knee reinjury rate. In summary, returning to level I sports after ACL reconstruction leads to a more than 4-fold increase in reinjury rates over 2 years; RTS 9 months or later after surgery and more symmetrical quadriceps strength prior to return substantially reduce the reinjury rate. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
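The pass/fail rule above is simple enough to state directly in code. In the sketch below, the threshold of 90 and the fail-any-test logic follow the study's definition, while folding the cohort's 9-month timing finding into the same function is our own packaging for illustration, not the authors' stated criterion.

```python
# Illustrative RTS decision rule: pass requires every functional test
# score > 90; the 9-month minimum reflects the cohort's timing finding.

def pass_rts_criteria(scores, months_since_surgery,
                      threshold=90, min_months=9):
    """scores: test name -> score; failing any single test fails overall."""
    return months_since_surgery >= min_months and \
        all(s > threshold for s in scores.values())
```

Such all-or-nothing rules are deliberately conservative: a patient with one asymmetric test is held back even if every other measure is excellent, which matches the large pass/fail reinjury gap (5.6% vs 38.2%) reported above.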

  18. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    Science.gov (United States)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern-matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. DFM scores are then computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to the metal 2 layer of two 20nm products with a chip area of 11 mm2. All the hotspots were resolved, and the number of DFM spacing violations decreased by 7-15%.

  19. Logistics systems optimization under competition

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Ma, Lijun

    2015-01-01

    environment, decision making for all these critical areas requires more sophisticated mathematical modeling and analysis. Since finding the optimal solution of the MCVRP is computationally expensive, they design a few guiding rules, which employ the search history, to enhance the search. They conduct...

  20. Life Cycle Assessment and Optimization-Based Decision Analysis of Construction Waste Recycling for a LEED-Certified University Building

    Directory of Open Access Journals (Sweden)

    Murat Kucukvar

    2016-01-01

    Full Text Available The current waste management literature lacks a comprehensive LCA of construction-material recycling that considers both process and supply-chain-related impacts as a whole. Furthermore, no previous work has addressed an optimization-based decision support framework that provides a quantifiable understanding of the potential savings and implications associated with recycling construction materials from a life-cycle perspective. The aim of this research is to present a multi-criteria optimization model developed to propose economically sound and environmentally benign construction waste management strategies for a LEED-certified university building. First, an economic input-output-based hybrid life cycle assessment model is built to quantify the total environmental impacts of various waste management options: recycling, conventional landfilling and incineration. After quantifying the net environmental pressures associated with these waste treatment alternatives, a compromise programming model is used to determine the optimal recycling strategy, considering environmental and economic impacts simultaneously. The analysis results show that recycling of ferrous and non-ferrous metals contributed significantly to reductions in the total carbon footprint of waste management. On the other hand, recycling of asphalt and concrete increased the overall carbon footprint due to high fuel consumption and emissions during the crushing process. Based on the multi-criteria optimization results, 100% recycling of ferrous and non-ferrous metals, cardboard, plastic and glass is suggested to maximize the environmental and economic savings simultaneously. We believe that the results of this research will facilitate better decision making in treating construction and demolition waste for LEED-certified green buildings by combining the results of environmental LCA with multi-objective optimization modeling.