Sensitivity Analysis of List Scheduling Heuristics
A.W.J. Kolen; A.H.G. Rinnooy Kan (Alexander); C.P.M. van Hoesel; A.P.M. Wagelmans (Albert)
1994-01-01
When jobs have to be processed on a set of identical parallel machines so as to minimize the makespan of the schedule, list scheduling rules form a popular class of heuristics. The order in which jobs appear on the list is assumed here to be determined by the relative size of their
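A minimal sketch of this kind of list scheduling rule, using the longest-processing-time (LPT) list order on identical machines; the function name and the greedy assign-to-least-loaded-machine rule are illustrative assumptions, not taken from the paper:

```python
# Sketch of LPT list scheduling on identical parallel machines.
import heapq

def lpt_makespan(jobs, m):
    """Assign jobs (processing times) to m machines, largest job first,
    always onto the currently least-loaded machine; return the makespan."""
    loads = [0.0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + p)
    return max(loads)
```

For jobs [3, 3, 2, 2, 2] on two machines the rule yields a makespan of 7, while the optimum is 6 ([3, 3] against [2, 2, 2]); gaps of this kind are what sensitivity analysis of the list order probes.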
Sensitivity analysis of a greedy heuristic for knapsack problems
Ghosh, D; Chakravarti, N; Sierksma, G
2006-01-01
In this paper, we carry out parametric analysis as well as a tolerance limit based sensitivity analysis of a greedy heuristic for two knapsack problems-the 0-1 knapsack problem and the subset sum problem. We carry out the parametric analysis based on all problem parameters. In the tolerance limit
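The density-ordered greedy heuristic for the 0-1 knapsack problem, plus a crude perturbation probe in the spirit of tolerance-based sensitivity analysis, can be sketched as follows; both functions are illustrative assumptions, not the paper's exact procedures:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy 0-1 knapsack: take items in decreasing value/weight ratio
    while they fit; return (total value, sorted chosen indices)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining, chosen = 0, capacity, []
    for i in order:
        if weights[i] <= remaining:
            chosen.append(i)
            remaining -= weights[i]
            total += values[i]
    return total, sorted(chosen)

def selection_is_stable(values, weights, capacity, item, delta):
    """Crude tolerance probe: does perturbing values[item] by delta
    change the greedy selection? (Illustrative, not the paper's method.)"""
    base = greedy_knapsack(values, weights, capacity)[1]
    v = list(values)
    v[item] += delta
    return greedy_knapsack(v, weights, capacity)[1] == base
```

A tolerance limit for a parameter is then the largest perturbation for which a probe like this still reports a stable selection.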
Formative Research on the Heuristic Task Analysis.
Reigeluth, Charles M.; Lee, Ji-Yeon; Peterson, Bruce; Chavez, Michael
Corporate and educational settings increasingly require decision-making, problem-solving and other complex cognitive skills to handle ill-structured, or heuristic, tasks, but the growing need for heuristic task expertise has outpaced the refinement of task analysis methods for heuristic expertise. The Heuristic Task Analysis (HTA) Method was…
Mood and heuristics: the influence of happy and sad states on sensitivity and bias in stereotyping.
Park, J; Banaji, M R
2000-06-01
The influence of mood states on the propensity to use heuristics as expressed in stereotypes was examined using signal detection statistics. Participants experienced happy, neutral, or sad moods and "remembered" whether names connoting race (African American, European American) belonged to social categories (criminal, politician, basketball player). Positive mood increased reliance on heuristics, indexed by higher false identification of members of stereotyped groups. Positive mood lowered sensitivity (d'), even among relative experts, and shifted bias (beta) or criterion to be more lenient for stereotypical names. In contrast, sad mood did not disrupt sensitivity and, in fact, revealed the use of a stricter criterion compared with baseline mood. Results support theories that characterize happy mood as a mental state that predisposes reliance on heuristics and sad mood as dampening such reliance.
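The signal detection indices reported here (sensitivity d' and response bias) can be computed from hit and false-alarm rates with the standard equal-variance SDT formulas; this is a textbook sketch, not code from the study, reporting the criterion c alongside beta = exp(d' · c):

```python
# Textbook signal detection computation of d', criterion c, and beta.
import math
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Return sensitivity d', criterion c, and likelihood ratio beta
    from hit and false-alarm rates (equal-variance Gaussian SDT)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    beta = math.exp(d_prime * c)
    return d_prime, c, beta
```

A lower d' at the same hit rate corresponds to more false identifications, and a negative c marks the more lenient criterion the abstract describes for stereotypical names.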
Pieterse, Arwen H; de Vries, Marieke
2013-09-01
Increasingly, patient decision aids and values clarification methods (VCMs) are being developed to support patients in making preference-sensitive health-care decisions. Many VCMs encourage extensive deliberation about options, without solid theoretical or empirical evidence showing that deliberation is advantageous. Research suggests that simple, fast and frugal heuristic decision strategies sometimes result in better judgments and decisions. Durand et al. have developed two fast and frugal heuristic-based VCMs. To critically analyse the suitability of the 'take the best' (TTB) and 'tallying' fast and frugal heuristics in the context of patient decision making. Analysis of the structural similarities between the environments in which the TTB and tallying heuristics have been proven successful and the context of patient decision making and of the potential of these heuristic decision processes to support patient decision making. The specific nature of patient preference-sensitive decision making does not seem to resemble environments in which the TTB and tallying heuristics have proven successful. Encouraging patients to consider less rather than more relevant information potentially even deteriorates their values clarification process. Values clarification methods promoting the use of more intuitive decision strategies may sometimes be more effective. Nevertheless, we strongly recommend further theoretical thinking about the expected value of such heuristics and of other more intuitive decision strategies in this context, as well as empirical assessments of the mechanisms by which inducing such decision strategies may impact the quality and outcome of values clarification. © 2011 John Wiley & Sons Ltd.
Using Heuristics for Supportability Analysis of Adaptive Weapon Systems in Combat
2017-01-01
Samuel H. Amber. The new U.S. Army vision contends that heuristics ... weapon systems in a constrained data environment involves heuristics. This modeling effort is achieved by modifying a decision matrix to include ... heuristics as an alternative field data source. DOI: http://dx.doi.org/10.22594/dau.16-743.24.01 Keywords: innovation, logistics, decision matrix
Maniscalco, Brian; Peters, Megan A K; Lau, Hakwan
2016-04-01
Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.
Goltz, Sonia M.
2013-01-01
In the present analysis the author utilizes the groups as patches model (Goltz, 2009, 2010) to extend fairness heuristic theory (Lind, 2001) in which the concept of fairness is thought to be a heuristic that allows individuals to match responses to consequences they receive from groups. In this model, individuals who are reviewing possible groups…
Using Heuristic Task Analysis to Create Web-Based Instructional Design Theory
Fiester, Herbert R.
2010-01-01
The first purpose of this study was to identify procedural and heuristic knowledge used when creating web-based instruction. The second purpose of this study was to develop suggestions for improving the Heuristic Task Analysis process, a technique for eliciting, analyzing, and representing expertise in cognitively complex tasks. Three expert…
Triple Modular Redundancy verification via heuristic netlist analysis
Directory of Open Access Journals (Sweden)
Giovanni Beltrame
2015-08-01
Triple Modular Redundancy (TMR) is a common technique to protect memory elements for digital processing systems subject to radiation effects (such as in space, at high altitude, or near nuclear sources). This paper presents an approach to verify the correct implementation of TMR for the memory elements of a given netlist (i.e., a digital circuit specification) using heuristic analysis. The purpose is to detect any issues that might arise during the use of automatic tools for TMR insertion, optimization, place and route, etc. Our analysis does not require a testbench and can perform full, exhaustive coverage within less than an hour even for large designs. This is achieved by applying a divide et impera approach, splitting the circuit into smaller submodules without loss of generality, instead of applying formal verification to the whole netlist at once. The methodology has been applied to a production netlist of the LEON2-FT processor that had reported errors during radiation testing, successfully revealing a number of unprotected memory elements, namely 351 flip-flops.
Heuristic Evaluation of E-Learning Courses: A Comparative Analysis of Two E-Learning Heuristic Sets
Zaharias, Panagiotis; Koutsabasis, Panayiotis
2012-01-01
Purpose: The purpose of this paper is to discuss heuristic evaluation as a method for evaluating e-learning courses and applications and more specifically to investigate the applicability and empirical use of two customized e-learning heuristic protocols. Design/methodology/approach: Two representative e-learning heuristic protocols were chosen…
MacGillivray, Brian H
2017-08-01
In many environmental and public health domains, heuristic methods of risk and decision analysis must be relied upon, either because problem structures are ambiguous, reliable data is lacking, or decisions are urgent. This introduces an additional source of uncertainty beyond model and measurement error - uncertainty stemming from relying on inexact inference rules. Here we identify and analyse heuristics used to prioritise risk objects, to discriminate between signal and noise, to weight evidence, to construct models, to extrapolate beyond datasets, and to make policy. Some of these heuristics are based on causal generalisations, yet can misfire when these relationships are presumed rather than tested (e.g. surrogates in clinical trials). Others are conventions designed to confer stability to decision analysis, yet which may introduce serious error when applied ritualistically (e.g. significance testing). Some heuristics can be traced back to formal justifications, but only subject to strong assumptions that are often violated in practical applications. Heuristic decision rules (e.g. feasibility rules) in principle act as surrogates for utility maximisation or distributional concerns, yet in practice may neglect costs and benefits, be based on arbitrary thresholds, and be prone to gaming. We highlight the problem of rule-entrenchment, where analytical choices that are in principle contestable are arbitrarily fixed in practice, masking uncertainty and potentially introducing bias. Strategies for making risk and decision analysis more rigorous include: formalising the assumptions and scope conditions under which heuristics should be applied; testing rather than presuming their underlying empirical or theoretical justifications; using sensitivity analysis, simulations, multiple bias analysis, and deductive systems of inference (e.g. directed acyclic graphs) to characterise rule uncertainty and refine heuristics; adopting "recovery schemes" to correct for known biases
Analysis of complex network performance and heuristic node removal strategies
Jahanpour, Ehsan; Chen, Xin
2013-12-01
Removing important nodes from complex networks is a great challenge in fighting against criminal organizations and preventing disease outbreaks. Six network performance metrics, including four new metrics, are applied to quantify networks' diffusion speed, diffusion scale, homogeneity, and diameter. In order to efficiently identify nodes whose removal maximally destroys a network, i.e., minimizes network performance, ten structured heuristic node removal strategies are designed using different node centrality metrics including degree, betweenness, reciprocal closeness, complement-derived closeness, and eigenvector centrality. These strategies are applied to remove nodes from the September 11, 2001 hijackers' network, and their performance is compared to that of a random strategy, which removes randomly selected nodes, and the locally optimal solution (LOS), which removes nodes to minimize network performance at each step. The computational complexity of the 11 strategies and LOS is also analyzed. Results show that the node removal strategies using degree and betweenness centralities are more efficient than other strategies.
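A toy version of one structured strategy of this kind: repeatedly delete the highest-degree node (recomputing degrees after each removal) and track the size of the largest connected component as a stand-in for network performance. Purely illustrative; the paper's metrics and networks are richer than this.

```python
# Degree-centrality node removal on an adjacency-set graph.
from collections import deque

def largest_component(adj):
    """Size of the largest connected component of {node: set(neighbors)}."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def degree_attack(adj, k):
    """Remove k nodes by recomputed degree; return component sizes after
    each removal (works on a copy, so the input graph is untouched)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    sizes = []
    for _ in range(k):
        target = max(adj, key=lambda u: len(adj[u]))
        for v in adj.pop(target):
            adj[v].discard(target)
        sizes.append(largest_component(adj))
    return sizes
```

On a star graph, removing the single hub immediately shatters the network into isolated nodes, the extreme case of what degree-based strategies exploit.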
Heuristic Task Analysis on E-Learning Course Development: A Formative Research Study
Lee, Ji-Yeon; Reigeluth, Charles M.
2009-01-01
Utilizing heuristic task analysis (HTA), a method developed for eliciting, analyzing, and representing expertise in complex cognitive tasks, a formative research study was conducted on the task of e-learning course development to further improve the HTA process. Three instructional designers from three different post-secondary institutions in the…
Fitness levels with tail bounds for the analysis of randomized search heuristics
DEFF Research Database (Denmark)
Witt, Carsten
2014-01-01
The fitness-level method, also called the method of f-based partitions, is an intuitive and widely used technique for the running time analysis of randomized search heuristics. It was originally defined to prove upper and lower bounds on the expected running time. Recently, upper tail bounds were...
Analysis of utility-theoretic heuristics for intelligent adaptive network routing
Energy Technology Data Exchange (ETDEWEB)
Mikler, A.R.; Honavar, V.; Wong, J.S.K. [Iowa State Univ., Ames, IA (United States)
1996-12-31
Utility theory offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. In this paper, we incrementally develop a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. We present an analysis of properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identify the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research.
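The minimum-delay path that such heuristic decision functions try to approximate can be computed exactly when global link delays are known; the sketch below is plain Dijkstra over per-link delay estimates, with made-up names and delays, shown only as the baseline the heuristics are measured against.

```python
# Dijkstra shortest (minimum estimated delay) path as a routing baseline.
import heapq

def min_delay_path(links, src, dst):
    """links: {node: [(neighbor, estimated_delay), ...]}; returns
    (path as node list, total estimated delay)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

In the adaptive setting studied in the paper, delays are uncertain and time-varying, which is exactly why heuristic decision functions stand in for this global computation.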
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Directory of Open Access Journals (Sweden)
Jinggang Chu
2015-05-01
River basin simulation and multi-reservoir optimal operation have been critical for river basin management. Due to the intense interaction between human activities and river basin systems, the river basin model and multi-reservoir operation model are complicated, with a large number of parameters. Therefore, fast and stable optimization algorithms are required for river basin management under the changing conditions of climate and current human activities. This study presents a new global optimization algorithm, named heuristic dynamically dimensioned search with sensitivity information (HDDS-S), to effectively perform river basin simulation and multi-reservoir optimal operation during river basin management. The HDDS-S algorithm is built on the dynamically dimensioned search (DDS) algorithm and has improved computational efficiency while maintaining the search capacity of the original DDS algorithm. This is mainly due to the non-uniform probability assigned to each decision variable on the basis of its changing sensitivity to the optimization objectives during the adaptive change from global to local search with reduced dimensionality. This study evaluates the new algorithm by comparing its performance with the DDS algorithm on a river basin model calibration problem and a multi-reservoir optimal operation problem. The results obtained indicate that the HDDS-S algorithm outperforms the DDS algorithm in terms of search ability and computational efficiency in the two specific problems. In addition, like the DDS algorithm, the HDDS-S algorithm is easy to use, as it does not require any parameter tuning and automatically adjusts its search to find good solutions given an available computational budget.
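A minimal DDS-style greedy search loop with sensitivity-weighted inclusion probabilities, loosely in the spirit of the HDDS-S idea described above; the weighting formula, noise scale, and function names are illustrative assumptions, not the published algorithm.

```python
# DDS-like search: perturb a shrinking, sensitivity-weighted subset of
# decision variables each iteration and keep the candidate if it improves.
import math
import random

def hdds_like(objective, x0, bounds, sens, iters=300, r=0.2, seed=1):
    rng = random.Random(seed)
    best, fbest = list(x0), objective(x0)
    wsum = sum(sens)
    for i in range(1, iters + 1):
        p = 1.0 - math.log(i) / math.log(iters)   # global -> local search
        cand = list(best)
        picked = [j for j in range(len(best))
                  if rng.random() < p * len(best) * sens[j] / wsum]
        if not picked:                             # always perturb something
            picked = [rng.randrange(len(best))]
        for j in picked:
            lo, hi = bounds[j]
            cand[j] += rng.gauss(0, r * (hi - lo))
            cand[j] = min(hi, max(lo, cand[j]))    # reflectionless clamp
        f = objective(cand)
        if f < fbest:                              # greedy acceptance
            best, fbest = cand, f
    return best, fbest
```

The sensitivity weights `sens` bias which variables get perturbed, which is the mechanism the abstract credits for the efficiency gain over uniform DDS.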
Multicriteria diversity analysis. A novel heuristic framework for appraising energy portfolios
Energy Technology Data Exchange (ETDEWEB)
Stirling, Andy [SPRU - Science and Technology Policy Research, Freeman Centre, University of Sussex, Sussex BN1 9QE (United Kingdom)
2010-04-15
This paper outlines a novel general framework for analysing energy diversity. A critical review of different reasons for policy interest reveals that diversity is more than a supply security strategy. There are particular synergies with strategies for transitions to sustainability. Yet - despite much important work - policy analysis tends to address only a subset of the properties of diversity and remains subject to ambiguity, neglect and special pleading. Developing earlier work, the paper proposes a more comprehensive heuristic framework, accommodating a wide range of different disciplinary and socio-political perspectives. It is argued that the associated multicriteria diversity analysis method provides a more systematic, complete and transparent way to articulate disparate perspectives and approaches and so help to inform more robust and accountable policymaking. (author)
LISA Telescope Sensitivity Analysis
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2002-01-01
The Laser Interferometer Space Antenna (LISA) for the detection of Gravitational Waves is a very long baseline interferometer which will measure the changes in the distance of a five million kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far field phase varies when the telescope parameters change as a result of small temperature changes.
Usable guidelines for usable websites? an analysis of five e-government heuristics
Welle Donker-Kuijer, M.C.J.; de Jong, Menno D.T.; Lentz, Leo
2010-01-01
Many government organizations use web heuristics for the quality assurance of their websites. Heuristics may be used by web designers to guide the decisions about a website in development, or by web evaluators to optimize or assess the quality of an existing website. Despite their popularity, very
Minimizing impacts of land use change on ecosystem services using multi-criteria heuristic analysis.
Keller, Arturo A; Fournier, Eric; Fox, Jessica
2015-06-01
Development of natural landscapes to support human activities impacts the capacity of the landscape to provide ecosystem services. Typically, several ecosystem services are impacted at a single development site and various footprint scenarios are possible, thus a multi-criteria analysis is needed. Restoration potential should also be considered for the area surrounding the permanent impact site. The primary objective of this research was to develop a heuristic approach to analyze multiple criteria (e.g. impacts to various ecosystem services) in a spatial configuration with many potential development sites. The approach was to: (1) quantify the magnitude of terrestrial ecosystem service (biodiversity, carbon sequestration, nutrient and sediment retention, and pollination) impacts associated with a suite of land use change scenarios using the InVEST model; (2) normalize results across categories of ecosystem services to allow cross-service comparison; (3) apply the multi-criteria heuristic algorithm to select sites with the least impact to ecosystem services, including a spatial criterion (separation between sites). As a case study, the multi-criteria impact minimization algorithm was applied to InVEST output to select 25 potential development sites out of 204 possible locations (selected by other criteria) within a 24,000 ha property. This study advanced a generally applicable spatial multi-criteria approach for (1) considering many land use footprint scenarios, (2) balancing impact decisions across a suite of ecosystem services, and (3) determining the restoration potential of ecosystem services after impacts. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
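Steps (2) and (3) of the approach can be sketched as min-max normalisation across services followed by a greedy least-impact selection with a minimum-separation constraint; the scoring (summing normalised impacts) and the spacing rule here are simplified assumptions, not the paper's exact algorithm.

```python
# Cross-service normalisation plus greedy least-impact site selection.
def normalize(impacts):
    """Min-max normalise each service's impact scores across sites.
    impacts: list of {service_name: raw_impact} dicts, one per site."""
    services = impacts[0].keys()
    out = [dict(site) for site in impacts]
    for s in services:
        vals = [site[s] for site in impacts]
        lo, hi = min(vals), max(vals)
        for site in out:
            site[s] = 0.0 if hi == lo else (site[s] - lo) / (hi - lo)
    return out

def pick_sites(impacts, coords, k, min_sep):
    """Greedily pick k sites with the lowest summed normalised impact,
    skipping any site closer than min_sep to an already chosen one."""
    norm = normalize(impacts)
    order = sorted(range(len(norm)), key=lambda i: sum(norm[i].values()))
    chosen = []
    for i in order:
        x, y = coords[i]
        if all((x - coords[j][0]) ** 2 + (y - coords[j][1]) ** 2
               >= min_sep ** 2 for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return chosen
```

Normalising first is what makes impacts to, say, carbon sequestration and pollination commensurable before they are summed into a single site score.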
Sunstein, Cass R
2005-08-01
With respect to questions of fact, people use heuristics--mental short-cuts, or rules of thumb, that generally work well, but that also lead to systematic errors. People use moral heuristics too--moral short-cuts, or rules of thumb, that lead to mistaken and even absurd moral judgments. These judgments are highly relevant not only to morality, but to law and politics as well. Examples are given from a number of domains, including risk regulation, punishment, reproduction and sexuality, and the act/omission distinction. In all of these contexts, rapid, intuitive judgments make a great deal of sense, but sometimes produce moral mistakes that are replicated in law and policy. One implication is that moral assessments ought not to be made by appealing to intuitions about exotic cases and problems; those intuitions are particularly unlikely to be reliable. Another implication is that some deeply held moral judgments are unsound if they are products of moral heuristics. The idea of error-prone heuristics is especially controversial in the moral domain, where agreement on the correct answer may be hard to elicit; but in many contexts, heuristics are at work and they do real damage. Moral framing effects, including those in the context of obligations to future generations, are also discussed.
MOVES regional level sensitivity analysis
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...
Heuristic Analysis In Architecture Of Aqa-Bozorg Mosque-School In Qajar Dynasty
Directory of Open Access Journals (Sweden)
Azarafrooz Hosseini
2016-12-01
Architecture during the Qajar dynasty witnessed significant developments. A change that became particularly prevalent in its advanced form was the combination of two functions, mosque and school, in one building. An important issue in the mosque-school typology is the spatial layout, arranged so that the two functions maintain their independence and do not interfere with each other. The aim of this study is to understand how religious and educational functions are combined in one building. In this research the Aqa-Bozorg mosque-school is analyzed by the heuristic analysis method in order to recognize different factors such as space and the quality of human cognition. The results show that this place, despite its religious function, is not limited to religious ceremonies or vast assemblies with social or political motivation; rather, it can be understood as a set of beliefs, explicit or hidden, that exist in the deeper layers of the thinking and culture of the society. Thus not only formal speech or sermon, but also customs, architectural features, artistic sights and even the arrangement of the main features of a religious building can convey meaning to an audience who are consciously or unconsciously affected and form their ideology accordingly.
Yoav Ganzach
2009-01-01
Numerical predictions are of central interest both to coherence-based approaches to judgment and decision making (the Heuristics and Biases (HB) program in particular) and to correspondence-based approaches (Social Judgment Theory (SJT)). In this paper I examine the way these two approaches study numerical predictions by reviewing papers that use Cue Probability Learning (CPL), the central experimental paradigm for studying numerical predictions in the SJT tradition, while attempting to loo...
Complexity, Heuristic, and Search Analysis for the Games of Crossings and Epaminondas
2014-03-27
research in Artificial Intelligence (Section 2.1) and why games are studied (Section 2.2). Section 2.3 discusses how games are played and solved. An... Acronyms used: UCT (Upper Confidence Bounds applied to Trees), HUCT (Heuristic Guided UCT), LOA (Lines of Action), UCB (Upper Confidence Bound), RAVE (Rapid ...)
Vibration Sensitive Keystroke Analysis
Lopatka, M.; Peetz, M.-H.; van Erp, M.; Stehouwer, H.; van Zaanen, M.
2009-01-01
We present a novel method for performing non-invasive biometric analysis on habitual keystroke patterns using a vibration-based feature space. With the increasing availability of 3-D accelerometer chips in laptop computers, conventional methods using time vectors may be augmented using a distinct
2015-01-01
How can we advance knowledge? Which methods do we need in order to make new discoveries? How can we rationally evaluate, reconstruct and offer discoveries as a means of improving the ‘method’ of discovery itself? And how can we use findings about scientific discovery to boost funding policies, thus fostering a deeper impact of scientific discovery itself? The respective chapters in this book provide readers with answers to these questions. They focus on a set of issues that are essential to the development of types of reasoning for advancing knowledge, such as models for both revolutionary findings and paradigm shifts; ways of rationally addressing scientific disagreement, e.g. when a revolutionary discovery sparks considerable disagreement inside the scientific community; frameworks for both discovery and inference methods; and heuristics for economics and the social sciences.
Directory of Open Access Journals (Sweden)
Milagros Loreto
2016-09-01
The Modified Spectral Projected Subgradient (MSPS) was proposed to solve Lagrangian dual problems, and its convergence was shown when the momentum term was zero. The MSPS uses a momentum term in order to speed up its convergence. The momentum term is built on the multiplication of a momentum parameter and the direction of the previous iterate. In this work, we show convergence when the momentum parameter is a non-zero constant. We also propose heuristics to choose the momentum parameter intended to avoid the Zigzagging Phenomenon of Kind I. This phenomenon is present in the MSPS when, at an iterate, the subgradient forms an obtuse angle with the previous direction. We identify and diminish the Zigzagging Phenomenon of Kind I on set-covering problems, and compare our numerical results to those of the original MSPS algorithm.
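A toy projected subgradient step with a momentum term, in the spirit of the update described above; the anti-zigzag rule used here (drop the momentum whenever the new subgradient makes an obtuse angle with the previous direction) is a simplified stand-in for the paper's heuristics, and all names and step sizes are illustrative.

```python
# Projected subgradient descent with a conditionally applied momentum term.
def msps_like(subgrad, project, x0, steps, alpha=0.5, mu=0.3):
    x, prev_dir = list(x0), [0.0] * len(x0)
    for _ in range(steps):
        g = subgrad(x)
        dot = sum(gi * di for gi, di in zip(g, prev_dir))
        m = 0.0 if dot < 0 else mu        # obtuse angle: skip momentum
        d = [-gi + m * di for gi, di in zip(g, prev_dir)]
        x = project([xi + alpha * di for xi, di in zip(x, d)])
        prev_dir = d
    return x
```

On a simple nonsmooth objective such as f(x) = |x|, the step sequence marches monotonically toward the minimizer instead of overshooting back and forth.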
Phantom pain : A sensitivity analysis
Borsje, Susanne; Bosmans, JC; Van der Schans, CP; Geertzen, JHB; Dijkstra, PU
2004-01-01
Purpose : To analyse how decisions to dichotomise the frequency and impediment of phantom pain into absent and present influence the outcome of studies by performing a sensitivity analysis on an existing database. Method : Five hundred and thirty-six subjects were recruited from the database of an
Gutin, Gregory; Goldengorin, Boris; Huang, Jing
Optimization heuristics are often compared with each other to determine which one performs best by means of the worst-case performance ratio, which reflects the quality of the returned solution in the worst case. The domination number is a complementary parameter indicating the quality of the heuristic at hand by
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to the formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves the same purpose for quantitative models of physical objects as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing, SA provides computer-efficient means to compute the Jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The Jacobians are used to solve corresponding inver...
de Jong, Menno D.T.; van der Geest, Thea
2000-01-01
This article is intended to make Web designers more aware of the qualities of heuristics by presenting a framework for analyzing the characteristics of heuristics. The framework is meant to support Web designers in choosing among alternative heuristics. We hope that better knowledge of the
Sensitivity Analysis for Activation Problems
Arter, Wayne; Morgan, Guy
2014-06-01
A study has been made as to how to develop further the techniques for sensitivity analysis used by Fispact-II. Fispact-II is a software suite for the analysis of nuclear activation and transmutation problems, developed for all nuclear applications. The software already permits sensitivity analysis to be performed by Monte Carlo sampling, and a faster uncertainty analysis is made possible by a powerful graph-based approach which generates a reduced set of nuclides on pathways leading to significant contributions to radiological quantities. The peculiar aspects of the sensitivity analysis problem for activation are the large number, typically thousands, of rate equation parameters (decay rates and reaction cross-sections), which all have some degree of associated error, and the fact that activity as a function of time varies as a sum of exponentials, so it appears discontinuous as rate parameters are varied unless the sampling frequency is impractically fast. Nevertheless, Monte Carlo sampling is a generic approach and it is therefore conceivable that techniques more targeted to the activation problem might be beneficial. Moreover, recent theoretical developments have highlighted the importance of a two-stage approach to mathematically similar problems, where in the first stage information is collected about the global behaviour of the problem, such as the identification of the rate parameters which cause the greatest variation in dose or nuclear activity, before a second stage examines a problem with its scope restricted by the information from the first. In the second stage, for example, Quasi-Monte Carlo sampling may be used in a restricted parameter space. The current work concentrates on the first stage and consists of a review of possible techniques with a detailed examination of the most promising pathways reduction approach, examined directly using Fispact-II. All the evidence obtained demonstrates the strong potential of this approach.
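The first-stage screening idea, identifying which rate parameters drive the most output variation, can be illustrated on the smallest possible case: a parent-daughter decay chain whose daughter activity is a sum of exponentials, with one-at-a-time Monte Carlo sampling of each decay constant. This is a toy sketch under those assumptions, not Fispact-II code.

```python
# One-at-a-time Monte Carlo screening of decay-rate sensitivities.
import math
import random
import statistics

def daughter_activity(l1, l2, n0, t):
    """Bateman solution: activity of the daughter nuclide at time t,
    for parent decay constant l1, daughter decay constant l2 (l1 != l2)."""
    return n0 * l1 * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

def screen(l1, l2, rel_err, t, n=2000, seed=0):
    """Sample each decay constant in turn with relative error rel_err and
    report the spread it induces in the output activity at time t."""
    rng = random.Random(seed)
    out = {}
    for name, base in (("l1", l1), ("l2", l2)):
        samples = []
        for _ in range(n):
            p = {"l1": l1, "l2": l2}
            p[name] = base * (1 + rng.gauss(0, rel_err))
            samples.append(daughter_activity(p["l1"], p["l2"], 1.0, t))
        out[name] = statistics.stdev(samples)
    return out
```

Ranking parameters by the induced spread is the kind of global information the first stage collects before a targeted second stage (e.g. Quasi-Monte Carlo in the reduced parameter space) takes over.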
Chanlen, Niphon
The purpose of this study was to examine the longitudinal impacts of the Science Writing Heuristic (SWH) approach on student science achievement measured by the Iowa Test of Basic Skills (ITBS). A number of studies have reported positive impacts of inquiry-based instruction on student achievement, critical thinking skills, reasoning skills, attitude toward science, etc. So far, studies have focused on exploring how an intervention affects student achievement using teacher- or researcher-generated measurements. Only a few studies have attempted to explore the long-term impacts of an intervention on student science achievement measured by standardized tests. The students' science and reading ITBS data were collected from 2000 to 2011 from a school district which had adopted the SWH approach as the main approach in science classrooms since 2002. The data consisted of 12,350 data points from 3,039 students. The multilevel model for change with discontinuity in elevation and slope was used to analyze changes in student science achievement growth trajectories before and after adoption of the SWH approach. The results showed that the SWH approach positively impacted students by initially raising science achievement scores. The initial impact was maintained and gradually increased when students were continuously exposed to the SWH approach. Disadvantaged students who were at risk of low science achievement benefited more from experience with the SWH approach, so existing problematic achievement gaps were narrowed. Moreover, students who began the SWH approach as early as elementary school showed better science achievement growth than students who first experienced the SWH approach in high school. The results found in this study not only confirm the positive impacts of the SWH approach on student achievement, but also demonstrate the additive impacts found when students had longitudinal experiences
Heuristic Search Theory and Applications
Edelkamp, Stefan
2011-01-01
Search has been vital to artificial intelligence from the very beginning as a core technique in problem solving. The authors present a thorough overview of heuristic search with a balance of discussion between theoretical analysis and efficient implementation and application to real-world problems. Current developments in search such as pattern databases and search with efficient use of external memory and parallel processing units on main boards and graphics cards are detailed. Heuristic search as a problem solving tool is demonstrated in applications for puzzle solving, game playing, constra
UMTS Common Channel Sensitivity Analysis
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António; Santos, Frederico
2006-01-01
The UMTS common transport channels forward access channel (FACH) and random access channel (RACH) are two of the three fundamental channels for a functional implementation of a UMTS network. Most signaling procedures, such as the registration procedure, make use of these channels, and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message-to-preamble offset, the initial SIR target and the preamble power step, while on FACH it is the transmission power offset.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Kumar, Sameer [White Plains, NY; Parker, Jeffrey J [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-06-07
Methods, compute nodes, and computer program products are provided for heuristic status polling of a component in a computing system. Embodiments include receiving, by a polling module from a requesting application, a status request requesting status of a component; determining, by the polling module, whether an activity history for the component satisfies heuristic polling criteria; polling, by the polling module, the component for status if the activity history for the component satisfies the heuristic polling criteria; and not polling, by the polling module, the component for status if the activity history for the component does not satisfy the heuristic polling criteria.
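The polling flow described in this abstract can be sketched roughly as follows; the `HeuristicPoller` class, its event-count threshold, and its look-back window are illustrative assumptions for the sketch, not details taken from the patent.

```python
import time

class HeuristicPoller:
    """Polls a component for status only when its recent activity history
    satisfies a heuristic criterion (illustrative, not the patent's)."""

    def __init__(self, min_events=3, window_seconds=60.0):
        self.min_events = min_events          # assumed heuristic threshold
        self.window_seconds = window_seconds  # assumed look-back window
        self.history = {}                     # component -> event timestamps

    def record_activity(self, component, timestamp):
        self.history.setdefault(component, []).append(timestamp)

    def satisfies_criteria(self, component, now):
        # Heuristic: poll only if the component produced enough recent events.
        events = self.history.get(component, [])
        recent = [t for t in events if now - t <= self.window_seconds]
        return len(recent) >= self.min_events

    def handle_status_request(self, component, poll_fn, now=None):
        # Returns the polled status, or None when the heuristic says skip.
        now = time.time() if now is None else now
        if self.satisfies_criteria(component, now):
            return poll_fn(component)
        return None
```

The point of the heuristic is the last branch: a status request for a quiet component returns without touching the component at all, saving a round trip.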
Morris, Graham P.
2013-12-17
Fully automated and computer assisted heuristic data analysis approaches have been applied to a series of AC voltammetric experiments undertaken on the [Fe(CN)6]3-/4- process at a glassy carbon electrode in 3 M KCl aqueous electrolyte. The recovered parameters in all forms of data analysis encompass E0 (reversible potential), k0 (heterogeneous charge transfer rate constant at E0), α (charge transfer coefficient), Ru (uncompensated resistance), and Cdl (double layer capacitance). The automated method of analysis employed time domain optimization and Bayesian statistics. This and all other methods assumed the Butler-Volmer model applies for electron transfer kinetics, planar diffusion for mass transport, Ohm's Law for Ru, and a potential-independent Cdl model. Heuristic approaches utilize combinations of Fourier Transform filtering, sensitivity analysis, and simplex-based forms of optimization applied to resolved AC harmonics and rely on experimenter experience to assist in experiment-theory comparisons. Remarkable consistency of parameter evaluation was achieved, although the fully automated time domain method provided consistently higher α values than those based on frequency domain data analysis. The origin of this difference is that the implemented fully automated method requires a perfect model for the double layer capacitance. In contrast, the importance of imperfections in the double layer model is minimized when analysis is performed in the frequency domain. Substantial variation in k0 values was found by analysis of the 10 data sets for this highly surface-sensitive pathologically variable [Fe(CN)6]3-/4- process, but remarkably, all fit the quasi-reversible model satisfactorily. © 2013 American Chemical Society.
Analysis of Heuristics in Business Recruitment based on a Portfolio of Real Derivatives
Directory of Open Access Journals (Sweden)
Silvia Bou Ysás
2014-01-01
Starting from the notion that labour legislation should be founded on the generation of job stability, a model of recruitment is presented in which the employer is likened to the holder of an investment portfolio containing two real derivatives: a swap or an option of sale. This model allows us, on the one hand, to analyze sensitivity to the variables at play in an employment contract and, on the other, to examine the effects that the most recent reforms of Spanish labour law have had on employers' contracting decisions. The results are clear: social security benefits are the variable to which the employment contract is most sensitive, and this effect is upheld when the changes proposed in the latest labour reforms are applied. The study concludes that reducing the costs of dismissal does not increase the likelihood of employers taking on new staff.
A New Heuristic Device for the Analysis of Israel Education: Observations from a Jewish Summer Camp
Sinclair, Alex
2009-01-01
In this article, I propose some new terminology and analytic tools that help us reflect on Israel educational activities with more sophistication. The terminology is developed from an analysis of data collected during a four-week observation of a Jewish summer camp. I argue that we may view Israel…
Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin
2016-10-01
Real-time detection of gait events can serve as a reliable input for controlling drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered preferable due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on acceleration signals, different algorithms have been proposed in previous studies to detect the toe off (TO) and heel strike (HS) gait events. While these algorithms achieve reasonable detection performance, they suffer from limitations such as poor real-time performance and reduced reliability on stair-ascent and stair-descent terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real time; it analyzes acceleration jerk signals with a time-frequency method to obtain gait parameters and then determines the peaks of the jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the algorithm is robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in applications such as drop foot correction devices and leg prostheses.
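A minimal sketch of the jerk-and-peak-picking step might look like this; the finite-difference jerk, the threshold, and the minimum peak spacing are illustrative assumptions rather than the authors' actual method or parameter values.

```python
def jerk(accel, dt):
    """Finite-difference jerk: the time derivative of an acceleration trace."""
    return [(accel[i + 1] - accel[i]) / dt for i in range(len(accel) - 1)]

def find_peaks(signal, threshold, min_gap):
    """Indices of local maxima at or above threshold, spaced at least
    min_gap samples apart (a simple peak heuristic)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] >= threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

In a real gait pipeline the threshold and spacing would be tuned per terrain, and the detected jerk peaks would then be labeled as TO or HS candidates.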
Probabilistic sensitivity analysis of biochemical reaction systems.
Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John
2009-09-07
Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
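The variance-based approach the authors build on can be illustrated with a generic pick-freeze Monte Carlo estimator of first-order Sobol indices; the toy linear model, uniform inputs, and sample size below are assumptions for illustration, not the paper's biochemical model.

```python
import random

def sobol_first_order(model, n_inputs, n_samples=20000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for independent Uniform(0,1) inputs."""
    rng = rng or random.Random(0)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_inputs):
        # C_i: input i taken from sample A, all other inputs from sample B
        yC = [model([a[j] if j == i else b[j] for j in range(n_inputs)])
              for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n_samples - mean ** 2
        indices.append(cov / var)
    return indices
```

For Y = 4*X1 + 2*X2 the analytic indices are 0.8 and 0.2, which the estimator recovers to Monte Carlo accuracy.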
Recursive heuristic classification
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
A comparison of heuristic and model-based clustering methods for dietary pattern analysis.
Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia
2016-02-01
Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
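For reference, the k-means baseline the authors compare against can be sketched as plain Lloyd iterations; the random seeding and fixed iteration count are simplifying assumptions of this sketch.

```python
import random

def kmeans(points, k, n_iter=50, rng=None):
    """Plain Lloyd's k-means on points given as tuples; returns centroids."""
    rng = rng or random.Random(0)
    centroids = rng.sample(points, k)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                    for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: each centroid moves to its cluster mean.
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl))
                     if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids
```

The GMM approach favored in the paper generalizes this: instead of hard assignments and spherical clusters, it fits per-cluster covariances, which is exactly the extra flexibility in volume, shape, and orientation the abstract mentions.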
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
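The complex-variable approach mentioned here is, in essence, the complex-step derivative: perturb an input by a tiny imaginary amount and read the sensitivity off the imaginary part of the output, with no subtractive cancellation. A one-dimensional sketch (not the actual DYMORE implementation):

```python
def complex_step_derivative(f, x, h=1e-30):
    """Sensitivity df/dx via the complex-step formula Im(f(x + i*h)) / h.
    No subtraction of nearly equal numbers occurs, so h can be made tiny
    without losing precision, unlike a finite-difference step."""
    return f(complex(x, h)).imag / h
```

This is why complex-variable sensitivities make a good cross-check for adjoint results: they are accurate to machine precision, at the cost of one complexified solve per input parameter.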
Heuristics Considering UX and Quality Criteria for Heuristics
Directory of Open Access Journals (Sweden)
Frederik Bader
2017-12-01
Heuristic evaluation is a cheap tool with which one can take qualitative measures of a product's usability. However, since the methodology was first presented, User Experience (UX) has become more popular, but the heuristics have remained the same. In this paper, we analyse the current state of heuristic evaluation in terms of heuristics for measuring UX. To do so, we carried out a literature review. In addition, we examined different heuristics and mapped them to the UX dimensions of the User Experience Questionnaire (UEQ). Moreover, we propose a quality model for heuristic evaluation and a list of quality criteria for heuristics.
Pitfalls in Teaching Judgment Heuristics
Shepperd, James A.; Koch, Erika J.
2005-01-01
Demonstrations of judgment heuristics typically focus on how heuristics can lead to poor judgments. However, exclusive focus on the negative consequences of heuristics can prove problematic. We illustrate the problem with the representativeness heuristic and present a study (N = 45) that examined how examples influence understanding of the…
Srinivas, B; Kulick, S N; Doran, Christine; Kulick, Seth
1995-01-01
There are currently two philosophies for building grammars and parsers: statistically induced grammars and wide-coverage grammars. One way to combine the strengths of both approaches is to have a wide-coverage grammar with a heuristic component which is domain independent but whose contribution is tuned to particular domains. In this paper, we discuss a three-stage approach to disambiguation in the context of a lexicalized grammar, using a variety of domain-independent heuristic techniques. We present a training algorithm which uses hand-bracketed treebank parses to set the weights of these heuristics. We compare the performance of our grammar against that of the IBM statistical grammar, using both untrained and trained weights for the heuristics.
Shape design sensitivity analysis using domain information
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Efficient sensitivity analysis in hidden markov models
National Research Council Canada - National Science Library
Renooij, Silja
2012-01-01
Sensitivity analysis in hidden Markov models (HMMs) is usually performed by means of a perturbation analysis where a small change is applied to the model parameters, upon which the output of interest is re-computed...
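A perturbation analysis of this kind can be sketched on a toy HMM: recompute an output of interest, here the likelihood of an observation sequence from the forward algorithm, after nudging one parameter. The two-state model in the test is an illustrative assumption, and a full analysis would also renormalize the perturbed row.

```python
def forward_likelihood(init, trans, emit, obs):
    """Likelihood of an observation sequence under an HMM (forward algorithm)."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return sum(alpha)

def perturbation_sensitivity(init, trans, emit, obs, state, symbol, eps=1e-6):
    """One-sided perturbation estimate of d(likelihood)/d(emit[state][symbol]).
    Note: this sketch does not re-normalize the perturbed emission row."""
    base = forward_likelihood(init, trans, emit, obs)
    bumped = [row[:] for row in emit]
    bumped[state][symbol] += eps
    return (forward_likelihood(init, trans, bumped, obs) - base) / eps
```

The efficient methods surveyed in the paper avoid this brute-force re-computation, but the perturbation version is the natural baseline to compare them against.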
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the
Heuristic thinking makes a chemist smart.
Graulich, Nicole; Hopf, Henning; Schreiner, Peter R
2010-05-01
We focus on the virtually neglected use of heuristic principles in the understanding and teaching of organic chemistry. As human thinking is not comparable to computer systems employing factual knowledge and algorithms--people rarely make decisions through careful consideration of every possible event and its probability, risks or usefulness--research in science and teaching must include psychological aspects of the human decision-making processes. Intuitive analogical and associative reasoning and the ability to categorize unexpected findings, typically demonstrated by experienced chemists, should be made accessible to young learners through heuristic concepts. The psychology of cognition defines heuristics as strategies that guide human problem-solving and deciding procedures, for example with patterns, analogies, or prototypes. Since research in the field of artificial intelligence and current studies in the psychology of cognition have provided evidence for the usefulness of heuristics in discovery, the status of heuristics has grown into something useful and teachable. In this tutorial review, we present a heuristic analysis of a familiar fundamental process in organic chemistry--the cyclic six-electron case--and we show that this approach leads to more conceptual insight in understanding, as well as in teaching and learning.
Heuristics and bias in rectal surgery.
MacDermid, Ewan; Young, Christopher J; Moug, Susan J; Anderson, Robert G; Shepherd, Heather L
2017-08-01
Deciding to defunction after anterior resection can be difficult, requiring cognitive tools or heuristics. In our previous work, increasing age and risk-taking propensity were identified as heuristic biases for surgeons in Australia and New Zealand (CSSANZ), and were inversely proportional to the likelihood of creating defunctioning stomas. We aimed to assess these factors for colorectal surgeons in the British Isles, and to identify other potential biases. The Association of Coloproctology of Great Britain and Ireland (ACPGBI) was invited to complete an online survey. Questions included demographics, risk-taking propensity, sensitivity to professional criticism, self-perception of anastomotic leak rate and propensity for creating defunctioning stomas. Chi-squared testing was used to assess differences between ACPGBI and CSSANZ respondents. Multiple regression analysis identified independent surgeon predictors of stoma formation. One hundred fifty (19.2%) eligible members of the ACPGBI replied. Demographics between the ACPGBI and CSSANZ groups were well matched. Significantly more ACPGBI surgeons admitted to an anastomotic leak in the last year (p < 0.001). ACPGBI surgeon age over 50 (p = 0.02), higher risk-taking propensity across several domains (p = 0.044), self-belief in a lower-than-average anastomotic leak rate (p = 0.02) and belief that the average risk of leak after anterior resection is 8% or lower (p = 0.007) were all independent predictors of less frequent stoma formation. Sensitivity to criticism from colleagues was not a predictor of stoma formation. Unrecognised surgeon factors, including age, everyday risk-taking, self-belief in surgical ability and lower probability bias of anastomotic leak, appear to exert an effect on decision-making in rectal surgery.
Sensitivity analysis in life cycle assessment
Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.
2014-01-01
Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
A new algorithm for numerical sensitivity analysis of ordinary differential equations (ODEs) is presented. The underlying ODE solver belongs to the Runge-Kutta family. The algorithm calculates sensitivities with respect to problem parameters and initial conditions, exploiting the special structure...
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties, and it is not efficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
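The forward sensitivity idea can be illustrated on a scalar decay equation dy/dt = -λy: augment the state with s = dy/dλ, which obeys ds/dt = -λs - y, and integrate both together. The explicit Euler stepping below is a deliberate simplification of the solvers discussed in these abstracts.

```python
import math

def forward_sensitivity_decay(y0, lam, t_end, n_steps):
    """Integrate dy/dt = -lam*y together with its forward sensitivity
    s = dy/dlam (ds/dt = -lam*s - y) using explicit Euler steps."""
    dt = t_end / n_steps
    y, s = y0, 0.0
    for _ in range(n_steps):
        # Both updates use the state from the start of the step.
        y, s = y + dt * (-lam * y), s + dt * (-lam * s - y)
    return y, s
```

For this model the exact sensitivity is dy/dλ = -t·y0·exp(-λt), so the numerical sensitivity can be checked directly; in the paper's setting the same augmentation is applied to the full system equations, with dt itself treated as an extra sensitivity parameter.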
A heuristic evaluation of the Facebook's advertising tool beacon
Jamal, A; Cole, M
2009-01-01
Interface usability is critical to the successful adoption of information systems. The aim of this study is to evaluate the interface of Facebook's advertising tool Beacon using privacy heuristics [4]. Beacon represents an interesting case study because of the negative media coverage and user backlash it received. The findings of the heuristic evaluation suggest violations of the privacy heuristics [4]; the analysis identified concerns about user choice and consent, integrity and security of data, and awarene...
Sensitivity analysis for solar plates
Aster, R. W.
1986-02-01
Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Multiple predictor smoothing methods for sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
Intelligent process mapping through systematic improvement of heuristics
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
The present system for the automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets onto a computer network is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
An extension for dynamic lot-sizing heuristics
Directory of Open Access Journals (Sweden)
Fabian G. Beck
2015-01-01
This paper presents an efficient procedure, so far overlooked by the inventory management literature and practice, for extending dynamic lot-sizing heuristics. Its intention is to show that the extension improves the results of basic heuristics significantly. We first present a comprehensive description of the extension procedure and then test its performance in an extensive numerical study. Our analysis shows that the extension is an efficient tool to improve basic dynamic lot-sizing heuristics. The results of the paper may be used in inventory management to assist researchers in selecting dynamic lot-sizing heuristics and may be of help to practitioners as decision support.
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a single first order ordinary differential equation. In this analysis, we will study the sensitivity analysis due to a variation of the initial condition ...
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning the methods of material analysis based on the automatic image recognition of the investigated metallographic sections. The objectives of the analyses of the materials for gas nitriding technology are described. The methods of the preparation of nitrided layers, the steps of the process and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using digital image processing methods in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed analysis model of the nitrided layers formation, covering image processing and analysis techniques, as well as selected methods of artificial intelligence, is presented. The model is divided into stages, which are formalized in order to better reproduce their actions. The validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed. PMID:28773389
Conspicuous Waste and Representativeness Heuristic
Directory of Open Access Journals (Sweden)
Tatiana M. Shishkina
2017-12-01
The article deals with the similarities between conspicuous waste and the representativeness heuristic. Conspicuous waste is analyzed according to the classic Veblenian interpretation as a strategy to increase social status through conspicuous consumption and conspicuous leisure. In "The Theory of the Leisure Class" Veblen introduced two different types of utility: conspicuous and functional. The article focuses on the possible benefits of analyzing conspicuous utility not only in terms of institutional economic theory, but also in terms of behavioral economics. To this end, the representativeness heuristic is considered, on the one hand, as a way to optimize the decision-making process, which allows it to be examined alongside Simon's procedural rationality. On the other hand, it is also analyzed as a cognitive bias within the approach of Kahneman and Tversky. The article analyzes the patterns in deviations from rational behavior strategies that can be observed in cases of conspicuous waste, both in modern market economies in the form of conspicuous consumption and in archaic economies in the form of gift exchange. The article also considers marketing strategies for advertising luxury consumption. It highlights the impact of symbolic capital (in Bourdieu's interpretation) on the social and symbolic payoffs that actors obtain from acts of conspicuous waste. This allows conspicuous consumption to be analyzed both as a rational way to obtain a particular kind of payoff and, at the same time, as a form of institutionalized cognitive bias.
Sensitivity analysis in quantitative microbial risk assessment.
Zwietering, M H; van Gerwen, S J
2000-07-15
The occurrence of foodborne disease remains a widespread problem in both the developing and the developed world. A systematic and quantitative evaluation of food safety is important to control the risk of foodborne diseases. World-wide, many initiatives are being taken to develop quantitative risk analysis. However, the quantitative evaluation of food safety in all its aspects is very complex, especially since in many cases specific parameter values are not available. Often many variables have large statistical variability while the quantitative effect of various phenomena is unknown. Therefore, sensitivity analysis can be a useful tool to determine the main risk-determining phenomena, as well as the aspects that mainly determine the inaccuracy in the risk estimate. This paper presents three stages of sensitivity analysis. First, deterministic analysis selects the most relevant determinants for risk. A second, worst-case analysis prevents exceptional but relevant cases from being overlooked; this analysis finds relevant process steps in worst-case situations and shows the relevance of variations in factors for risk. The third, stochastic analysis, studies the effects of variations in factors on the variability of risk estimates. Care must be taken that the assumptions made as well as the results are clearly communicated. Stochastic risk estimates are, like deterministic ones, just as good (or bad) as the available data, and the stochastic analysis must not be used to mask lack of information. Sensitivity analysis is a valuable tool in quantitative risk assessment by determining critical aspects and effects of variations.
Heuristic decision making in medicine.
Marewski, Julian N; Gigerenzer, Gerd
2012-03-01
Can less information be more helpful when it comes to making medical decisions? Contrary to the common intuition that more information is always better, the use of heuristics can help both physicians and patients to make sound decisions. Heuristics are simple decision strategies that ignore part of the available information, basing decisions on only a few relevant predictors. We discuss: (i) how doctors and patients use heuristics; and (ii) when heuristics outperform information-greedy methods, such as regressions in medical diagnosis. Furthermore, we outline those features of heuristics that make them useful in health care settings. These features include their surprising accuracy, transparency, and wide accessibility, as well as the low costs and little time required to employ them. We close by explaining one of the statistical reasons why heuristics are accurate, and by pointing to psychiatry as one area for future research on heuristics in health care.
An ESDIRK method with sensitivity analysis capabilities
Energy Technology Data Exchange (ETDEWEB)
Rode Kristensen, M. [2-Control ApS, Valby (Denmark); Bagterp Joergensen, J. [Institut for Kemiteknik - DTU, Kgs. Lyngby (Denmark); Grove Thomsen, P. [Informatics and Mathematical Modelling - DTU, Kgs. Lyngby (Denmark); Bay Joergensen, S. [Department of Chemical Engineering - DTU, Kgs. Lyngby (Denmark)
2006-07-01
A new algorithm for numerical sensitivity analysis of ordinary differential equations (ODEs) is presented. The underlying ODE solver belongs to the Runge-Kutta family. The algorithm calculates sensitivities with respect to problem parameters and initial conditions, exploiting the special structure of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF-based approaches. Several advantages of the new approach are demonstrated, especially when frequent discontinuities are present, which renders the new algorithm particularly suitable for dynamic optimization purposes. (au)
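The forward sensitivity idea described in the abstract can be sketched independently of the ESDIRK scheme: augment the state ODE with the sensitivity equation s' = (df/dy)s + df/dp and integrate both together. The toy below uses a classical RK4 step (an assumption for simplicity; the paper's solver is an ESDIRK method) on y' = -p*y, where the exact sensitivity is dy/dp = -t*y0*exp(-p*t).

```python
import math

def rk4_step(f, t, z, h):
    """One classical Runge-Kutta 4 step for z' = f(t, z), z a list."""
    k1 = f(t, z)
    k2 = f(t + h/2, [zi + h/2*ki for zi, ki in zip(z, k1)])
    k3 = f(t + h/2, [zi + h/2*ki for zi, ki in zip(z, k2)])
    k4 = f(t + h, [zi + h*ki for zi, ki in zip(z, k3)])
    return [zi + h/6*(a + 2*b + 2*c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

p = 0.7  # illustrative parameter value

def aug(t, z):
    y, s = z                 # state y and sensitivity s = dy/dp
    return [-p * y,          # y' = f(y, p)
            -y - p * s]      # s' = (df/dy)*s + df/dp

t, h = 0.0, 0.01
z = [1.0, 0.0]               # y(0) = 1, s(0) = 0
for _ in range(200):         # integrate to t = 2
    z = rk4_step(aug, t, z, h)
    t += h

y_exact = math.exp(-p * 2.0)          # y0 * exp(-p*t)
s_exact = -2.0 * math.exp(-p * 2.0)   # -t * y0 * exp(-p*t)
print(z, [y_exact, s_exact])
```

Note how the sensitivity equation reuses the state y at each stage, which is the structure the paper exploits to make sensitivity integration cheap.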
Directory of Open Access Journals (Sweden)
Viktor Ivanovich Petrov
2017-01-01
The article considers data safety issues for the onboard computers of civil aviation aircraft. In information security, undeclared capabilities are capabilities of technical equipment or software that are not mentioned in the documentation. Requirements on documentation and test content are imposed during software certification. Documentation requirements cover the composition and content of the controlled documents (specification, description and program code, the source code). Test requirements include: static analysis of program code (including monitoring the compliance of the sources with their load modules); dynamic analysis of source code (including monitoring of execution routes). Currently, there are no comprehensive measures for checking onboard computer software. There are no rules and regulations that allow controlling the software of foreign-produced aircraft, and actually obtaining the software is difficult. Consequently, the author suggests developing the basics of aviation rules and regulations that allow analyzing the programs of CA aircraft onboard computers. When software source codes are unavailable, two approaches to code analysis are used: structural static and dynamic analysis of the source code; and signature-heuristic analysis of potentially dangerous operations. Static analysis determines the behavior of the program by reading the program code (without running the program), which is represented in assembler language as a disassembly listing. Program tracing is performed by dynamic analysis. The article considers the ability to detect undeclared capabilities in aircraft software using an interactive disassembler.
Heuristic of radiodiagnostic systems
Energy Technology Data Exchange (ETDEWEB)
Wackenheim, A.
1986-12-01
In the practice of creating expert systems, the radiologist and his team are considered as the expert who leads the work of the cognitician. Different kinds of expert systems can be imagined. The author describes the main characteristics of heuristics in redefining semiology, semantics and the rules of picture reading. Finally, it is the experience of the expert-cognitician pair that will in the future determine the success of expert systems in radiology.
A heuristic for the inventory management of smart vending machine systems
Directory of Open Access Journals (Sweden)
Yang-Byung Park
2012-12-01
Purpose: The purpose of this paper is to propose a heuristic for the inventory management of smart vending machine systems with product substitution under the replenishment point, order-up-to level policy and to evaluate its performance. Design/methodology/approach: The heuristic is developed on the basis of the decoupled approach. An integer linear mathematical model is built to determine the number of product storage compartments and the replenishment threshold for each smart vending machine in the system, and the Clarke and Wright savings algorithm is applied to route vehicles for inventory replenishment of smart vending machines that share the same delivery days. Computational experiments are conducted on several small-size test problems to compare the proposed heuristic with the integrated optimization mathematical model with respect to system profit. Furthermore, a sensitivity analysis is carried out on a medium-size test problem to evaluate the effect of the customer service level on system profit using a computer simulation. Findings: The results show that the proposed heuristic yielded good solutions with a 5.7% average error rate compared to the optimal solutions. The proposed heuristic took about 3 CPU minutes on average on the test problems, which consisted of 10 five-product smart vending machines. It was confirmed that system profit is significantly affected by the customer service level. Originality/value: The inventory management of smart vending machine systems is newly treated. Product substitutions are explicitly considered in the model. The proposed heuristic is effective as well as efficient. It can be easily modified for application to various retail vending settings under a vendor-managed inventory scheme with a POS system.
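The Clarke and Wright savings step mentioned in the methodology can be sketched as follows. The depot location, customer coordinates, demands, and vehicle capacity are made-up illustrative data, and this basic parallel-savings version omits the paper's smart-vending specifics (substitution, delivery days).

```python
import math
from itertools import combinations

# Hypothetical instance: depot is index 0; customers 1..5 with demands.
pts = [(0, 0), (2, 3), (4, 1), (-3, 2), (-1, -4), (3, -2)]
demand = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}
capacity = 8

def dist(a, b):
    return math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])

# Start with one out-and-back route per customer.
routes = [[c] for c in demand]

def route_of(c):
    return next(r for r in routes if c in r)

# Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), processed largest first.
savings = sorted(((dist(0, i) + dist(0, j) - dist(i, j), i, j)
                  for i, j in combinations(demand, 2)), reverse=True)

for s, i, j in savings:
    ri, rj = route_of(i), route_of(j)
    if ri is rj:
        continue  # already on the same route
    if sum(demand[c] for c in ri) + sum(demand[c] for c in rj) > capacity:
        continue  # merged route would exceed vehicle capacity
    # merge only when i and j sit at joinable route ends
    if ri[-1] == i and rj[0] == j:
        merged = ri + rj
    elif rj[-1] == j and ri[0] == i:
        merged = rj + ri
    else:
        continue
    routes.remove(ri)
    routes.remove(rj)
    routes.append(merged)

print(routes)
```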
Automated Sensitivity Analysis of Interplanetary Trajectories
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software and simplify the process of performing sensitivity analysis, and it was ultimately found to outperform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
Sensitivity Analysis of a Physiochemical ...
African Journals Online (AJOL)
Michael Horsfall
we considered two different self-written Matlab programs. The first program concerns the calculation of the PR solution trajectory when none of the four .... Sensitivity analysis based methodology to estimate the best set of parameters for heterogeneous kinetic models,. Chemical Engineering Journal 128(2-3), 85-93. Amod S ...
Sensitivity Analysis for the EEG Forward Problem
Directory of Open Access Journals (Sweden)
Maria I Troparevsky
2010-09-01
Sensitivity analysis can provide useful information when one is interested in identifying the parameter $\theta$ of a system, since it measures the variations of the output $u$ when $\theta$ changes. In the literature two different sensitivity functions are frequently used: the Traditional Sensitivity Functions (TSF) and the Generalized Sensitivity Functions (GSF). They can help to determine the time instants where the output of a dynamical system carries the most information about the value of its parameters, in order to carry out an estimation process. Both functions were considered by authors who compared their results for different dynamical systems (see Banks 2008, Banks 2001, Kappel 2006). In this work we apply the TSF and the GSF to analyze the sensitivity of the 3D Poisson-type equation with interfaces of the Forward Problem of Electroencephalography (EEG). In a simple model where we consider the head as a volume consisting of nested homogeneous sets, we establish the differential equations that correspond to the TSF with respect to the value of the conductivity of the different tissues and deduce the corresponding integral equations. Afterwards we compute the GSF for the same model. We perform numerical experiments for both types of sensitivity functions and compare the results.
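A toy illustration of the Traditional Sensitivity Function idea: for the scalar model y(t, theta) = exp(-theta*t), the TSF is dy/dtheta = -t*exp(-theta*t), whose magnitude peaks at t = 1/theta, the most informative sampling instant. The finite-difference approximation and parameter value below are illustrative assumptions, unrelated to the EEG model.

```python
import math

theta = 2.0
eps = 1e-6   # finite-difference step for the parameter derivative

def y(t, th):
    return math.exp(-th * t)

ts = [k * 0.01 for k in range(1, 301)]   # sample times in (0, 3]
# central-difference approximation of TSF(t) = dy/dtheta
tsf = [(y(t, theta + eps) - y(t, theta - eps)) / (2 * eps) for t in ts]

# |TSF| = t * exp(-theta*t) is maximal at t = 1/theta
t_star = max(zip(ts, tsf), key=lambda pair: abs(pair[1]))[0]
print(t_star)  # ≈ 0.5 for theta = 2
```

The sampling instant with the largest |TSF| is where measurements constrain theta the most, which is exactly how the abstract proposes to use these functions.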
Investigating the Impacts of Design Heuristics on Idea Initiation and Development
Kramer, Julia; Daly, Shanna R.; Yilmaz, Seda; Seifert, Colleen M.; Gonzalez, Richard
2015-01-01
This paper presents an analysis of engineering students' use of Design Heuristics as part of a team project in an undergraduate engineering design course. Design Heuristics are an empirically derived set of cognitive "rules of thumb" for use in concept generation. We investigated heuristic use in the initial concept generation phase,…
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied, and which statistical techniques are then appropriate?
Heuristics for the inversion median problem
2010-01-01
Background The study of genome rearrangements has become a mainstay of phylogenetics and comparative genomics. Fundamental in such a study is the median problem: given three genomes find a fourth that minimizes the sum of the evolutionary distances between itself and the given three. Many exact algorithms and heuristics have been developed for the inversion median problem, of which the best known is MGR. Results We present a unifying framework for median heuristics, which enables us to clarify existing strategies and to place them in a partial ordering. Analysis of this framework leads to a new insight: the best strategies continue to refer to the input data rather than reducing the problem to smaller instances. Using this insight, we develop a new heuristic for inversion medians that uses input data to the end of its computation and leverages our previous work with DCJ medians. Finally, we present the results of extensive experimentation showing that our new heuristic outperforms all others in accuracy and, especially, in running time: the heuristic typically returns solutions within 1% of optimal and runs in seconds to minutes even on genomes with 25'000 genes--in contrast, MGR can take days on instances of 200 genes and cannot be used beyond 1'000 genes. Conclusion Finding good rearrangement medians, in particular inversion medians, had long been regarded as the computational bottleneck in whole-genome studies. Our new heuristic for inversion medians, ASM, which dominates all others in our framework, puts that issue to rest by providing near-optimal solutions within seconds to minutes on even the largest genomes. PMID:20122203
Benchmarking Heuristic Search and Optimisation Algorithms in Matlab
Luo, Wuqiao; Li, Yun
2016-01-01
With the proliferating development of heuristic methods, it has become challenging to choose the most suitable one for the application at hand. This paper evaluates the performance of the heuristic algorithms available in Matlab, which is problem dependent and parameter sensitive. Further, the paper attempts to address the challenge that no satisfactory benchmark exists to evaluate all the algorithms against the same standard. The paper tests five heuristic algorithms in Matlab, the Nelder-Mead simp...
A Tutorial on Heuristic Methods
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui; Werra, D. de; Silver, E.
1980-01-01
In this paper we define a heuristic method as a procedure for solving a well-defined mathematical problem by an intuitive approach in which the structure of the problem can be interpreted and exploited intelligently to obtain a reasonable solution. Issues discussed include: (i) the measurement of the quality of a heuristic method, (ii) different types of heuristic procedures, (iii) the interactive role of human beings and (iv) factors that may influence the choice or testing of heuristic methods. A large number of references are included.
Using Granular-Evidence-Based Adaptive Networks for Sensitivity Analysis
Vališevskis, A.
2002-01-01
This paper considers the possibility of using adaptive networks for sensitivity analysis. Adaptive network that processes fuzzy granules is described. The adaptive network training algorithm can be used for sensitivity analysis of decision making models. Furthermore, a case study concerning sensitivity analysis is described, which shows in what way the adaptive network can be used for sensitivity analysis.
THE HEURISTIC FUNCTION OF SPORT
Directory of Open Access Journals (Sweden)
Adam Petrović
2012-09-01
Being a significant area of human activity, sport has multiple functions. One of the more important functions of sport, especially top-level sport, is the inventive-heuristic function. Creative work, being a process of creating new values, represents a significant possibility for the advancement of sport. This paper aims at pointing out the various dimensions of human creative work, at the creative work which can be seen in sport in a narrow sense, and at the scientific and practical areas which border on sport. The method of theoretical analysis of different approaches to the phenomenon of creative work, both in general and in sport, was applied in this paper. This area can be systematized according to various criteria: the level of creative work, the different fields where it appears, the subjects of creative work (the creators), etc. Case analysis shows that the field of creative work in sport is widening and deepening constantly. There are different levels of creativity not only in the system of training and competition, but in the wider social context of sport as well. As a process of the human spirit and mind, creative work belongs not just to athletes and coaches, but also to all the people and social groups whose creative power manifests itself in sport. Classifying creative work in sport according to various criteria allows the heuristic function of sport to be explained comprehensively and shows how the sparks of the human spirit improve the microcosm of sport. A thorough classification of creative work in sport allows for a detailed analysis of all the elements of creative work and each of its areas in sport. In this way the progress in sport, as a consequence of innovations in both competition and athletes' training and of everything that goes with those activities, can be guided in the needed direction more easily, as well as studied and applied.
Reexamining Our Bias against Heuristics
McLaughlin, Kevin; Eva, Kevin W.; Norman, Geoff R.
2014-01-01
Using heuristics offers several cognitive advantages, such as increased speed and reduced effort when making decisions, in addition to allowing us to make decisions in situations where missing data do not allow for formal reasoning. But the traditional view of heuristics is that they trade accuracy for efficiency. Here the authors discuss sources…
Heuristics structure and pervade formal risk assessment.
MacGillivray, Brian H
2014-04-01
Lay perceptions of risk appear rooted more in heuristics than in reason. A major concern of the risk regulation literature is that such "error-strewn" perceptions may be replicated in policy, as governments respond to the (mis)fears of the citizenry. This has led many to advocate a relatively technocratic approach to regulating risk, characterized by high reliance on formal risk and cost-benefit analysis. However, through two studies of chemicals regulation, we show that the formal assessment of risk is pervaded by its own set of heuristics. These include rules to categorize potential threats, define what constitutes valid data, guide causal inference, and to select and apply formal models. Some of these heuristics lay claim to theoretical or empirical justifications, others are more back-of-the-envelope calculations, while still more purport not to reflect some truth but simply to constrain discretion or perform a desk-clearing function. These heuristics can be understood as a way of authenticating or formalizing risk assessment as a scientific practice, representing a series of rules for bounding problems, collecting data, and interpreting evidence (a methodology). Heuristics are indispensable elements of induction. And so they are not problematic per se, but they can become so when treated as laws rather than as contingent and provisional rules. Pitfalls include the potential for systematic error, masking uncertainties, strategic manipulation, and entrenchment. Our central claim is that by studying the rules of risk assessment qua rules, we develop a novel representation of the methods, conventions, and biases of the prior art. © 2013 Society for Risk Analysis.
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes it possible to apply vulnerability analysis to large actual road networks. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in the resource allocation for mitigation and recovery. PMID:28125706
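The capacity-degradation sensitivity described above can be illustrated with the standard BPR link travel-time function (an assumption; the paper defines its own traffic utility index). The link parameters below are hypothetical.

```python
def bpr_time(t0, volume, capacity, alpha=0.15, beta=4):
    """Standard BPR link travel-time function: t = t0 * (1 + alpha*(v/c)^beta)."""
    return t0 * (1 + alpha * (volume / capacity) ** beta)

t0, v, c = 10.0, 900.0, 1000.0   # hypothetical link: 10 min free-flow time
base = bpr_time(t0, v, c)

# Vulnerability proxy: travel-time ratio as link capacity degrades.
for loss in (0.1, 0.3, 0.5):
    degraded = bpr_time(t0, v, c * (1 - loss))
    print(loss, round(degraded / base, 3))
```

The steeply growing ratio shows why heavily loaded links are the critical infrastructures such an analysis flags.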
Heuristics to Evaluate Interactive Systems for Children with Autism Spectrum Disorder (ASD).
Khowaja, Kamran; Salim, Siti Salwah; Asemi, Adeleh
2015-01-01
In this paper, we adapted and expanded a set of guidelines, also known as heuristics, to evaluate the usability of software to now be appropriate for software aimed at children with autism spectrum disorder (ASD). We started from the heuristics developed by Nielsen in 1990 and developed a modified set of 15 heuristics. The first 5 heuristics of this set are the same as those of the original Nielsen set, the next 5 heuristics are improved versions of Nielsen's, whereas the last 5 heuristics are new. We present two evaluation studies of our new heuristics. In the first, two groups compared Nielsen's set with the modified set of heuristics, with each group evaluating two interactive systems. The Nielsen's heuristics were assigned to the control group while the experimental group was given the modified set of heuristics, and a statistical analysis was conducted to determine the effectiveness of the modified set, the contribution of 5 new heuristics and the impact of 5 improved heuristics. The results show that the modified set is significantly more effective than the original, and we found a significant difference between the five improved heuristics and their corresponding heuristics in the original set. The five new heuristics are effective in problem identification using the modified set. The second study was conducted using a system which was developed to ascertain if the modified set was effective at identifying usability problems that could be fixed before the release of software. The post-study analysis revealed that the majority of the usability problems identified by the experts were fixed in the updated version of the system.
Sensitivity Analysis of Selected DIVOPS Input Factors
1977-12-01
Report CAA-TD-77-9.
Theory of Randomized Search Heuristics in Combinatorial Optimization
DEFF Research Database (Denmark)
The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics includes well-known approaches such as Randomized Local Search (RLS). Such heuristics are often applied when there are not enough resources such as time, money, or knowledge to obtain good specific algorithms. It is widely acknowledged that a solid mathematical foundation for such heuristics is needed. Most designers of RSHs, however, focused on mimicking processes in nature (such as evolution) rather than making the heuristics amenable to a mathematical analysis. This is different from the classical design of (randomized) algorithms, which are developed with their theoretical analysis of runtime (and proof of correctness) in mind. Despite these obstacles, research from roughly the last 15 years has shown how to apply the methods of probabilistic analysis to such heuristics.
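Randomized Local Search, one of the heuristics named above, is simple enough to sketch: flip one uniformly random bit and keep the flip if fitness does not decrease. On the OneMax benchmark (a standard test function in this literature, not taken from this abstract) its expected runtime is Theta(n log n).

```python
import random

def rls_onemax(n, budget, seed=0):
    """Randomized Local Search on OneMax: repeatedly flip one uniformly
    random bit and accept the flip iff fitness does not decrease."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n and evals < budget:
        y = x[:]
        y[rng.randrange(n)] ^= 1
        if sum(y) >= sum(x):   # RLS acceptance: equal or better
            x = y
        evals += 1
    return evals, x

steps, best = rls_onemax(30, 20000)
print(steps, sum(best))
```

A coupon-collector argument gives the n*H_n ≈ n ln n expected runtime: once k bits are wrong, a helpful flip is found with probability k/n.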
Directory of Open Access Journals (Sweden)
M. Omidvari
2015-09-01
Full Text Available Introduction: Occupational accidents are among the main issues in industry, and identifying their root causes is necessary for their control. Several models have been proposed for determining the root causes of accidents. FTA is one of the most widely used models, as it can graphically establish the root causes of accidents. Non-linear functions are one of the main challenges in carrying out an FTA, and meta-heuristic algorithms can be used to obtain exact values. Material and Method: The present research was done in the power plant industry during the construction phase. In this study, a pattern for the analysis of human error in work-related accidents was provided by combining neural network algorithms with the FTA analytical model. Finally, using this pattern, the potential rate of each cause was determined. Result: The results showed that training, age, and non-compliance with safety principles in the workplace were the most important factors influencing human error in occupational accidents. Conclusion: According to the obtained results, it can be concluded that human errors can be greatly reduced by training, by appropriate matching of workers to the type of occupation, and by provision of appropriate safety conditions in the workplace.
Dynamic Resonance Sensitivity Analysis in Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2017-01-01
Unlike conventional power systems, where resonance frequencies are mainly determined by passive impedances, wind farms present a more complex situation, where the control systems of the power electronic converters also introduce active impedances. This paper presents an approach to find the resonance frequencies of wind farms by considering both active and passive impedances, and to identify which bus of the wind farm contributes most to the resonance frequencies. In the approach, a wind farm is modeled as a Multi-Input Multi-Output (MIMO) dynamic system, and the bus Participation Factors (PFs) are calculated by critical eigenvalue sensitivity analysis with respect to the entries of the MIMO matrix. The PF analysis locates the bus that most excites the resonances, which can be the best location to install passive or active filters to reduce harmonic resonance problems. Time...
Topological sensitivity analysis for systems biology.
Babtie, Ann C; Kirk, Paul; Stumpf, Michael P H
2014-12-30
Mathematical models of natural systems are abstractions of much more complicated processes. Developing informative and realistic models of such systems typically involves suitable statistical inference methods, domain expertise, and a modicum of luck. Except in cases where physical principles provide sufficient guidance, it will generally also be possible to come up with a large number of potential models that are compatible with a given natural system and any finite amount of data generated from experiments on that system. Here we develop a computational framework to systematically evaluate potentially vast sets of candidate differential equation models in light of experimental and prior knowledge about biological systems. This topological sensitivity analysis enables us to evaluate quantitatively the dependence of model inferences and predictions on the assumed model structures. Failure to consider the impact of structural uncertainty introduces biases into the analysis and potentially gives rise to misleading conclusions.
Simple Sensitivity Analysis for Orion GNC
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in the toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. The uncertainty analysis, combined with the sensitivity analysis carried out in this study, has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. Copyright © 2012 Elsevier B.V. All rights reserved.
A Sensitivity Analysis of SOLPS Plasma Detachment
Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration
2016-10-01
Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.
Updated Chemical Kinetics and Sensitivity Analysis Code
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
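The sensitivity coefficients described above can be illustrated on the simplest possible kinetics problem. The sketch below is a stand-alone illustration, not LSENS itself: it integrates first-order decay dy/dt = -k*y with a fourth-order Runge-Kutta scheme and estimates the sensitivity coefficient of y with respect to the rate coefficient k by central finite differences, checking it against the analytic value d/dk [y0*exp(-k*t)] = -t*y0*exp(-k*t). All numerical values are illustrative.

```python
import math

def integrate_decay(k, y0=1.0, t_end=2.0, dt=1e-3):
    """RK4 integration of dy/dt = -k*y; returns y(t_end)."""
    y, t = y0, 0.0
    f = lambda y: -k * y
    while t < t_end - 1e-12:
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return y

k, t_end, h = 0.7, 2.0, 1e-5
# Central-difference sensitivity coefficient dy/dk at t_end.
sens_fd = (integrate_decay(k + h, t_end=t_end)
           - integrate_decay(k - h, t_end=t_end)) / (2 * h)
sens_exact = -t_end * math.exp(-k * t_end)  # analytic dy/dk, with y0 = 1
print(sens_fd, sens_exact)
```

For real kinetics networks the same finite-difference idea applies per rate coefficient, although dedicated codes such as LSENS compute the coefficients far more efficiently from the governing equations.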
Is notch sensitivity a stress analysis problem?
Directory of Open Access Journals (Sweden)
Jaime Tupiassú Pinho de Castro
2013-07-01
Full Text Available Semi–empirical notch sensitivity factors q have been widely used to properly account for notch effects in fatigue design for a long time. However, the intrinsically empirical nature of this old concept can be avoided by modeling it using sound mechanical concepts that properly consider the influence of notch tip stress gradients on the growth behavior of mechanically short cracks. Moreover, this model requires only well-established mechanical properties, as it has no need for data-fitting or similar ill-defined empirical parameters. In this way, the q value can now be calculated considering the characteristics of the notch geometry and of the loading, as well as the basic mechanical properties of the material, such as its fatigue limit and crack propagation threshold, if the problem is fatigue, or its equivalent resistances to crack initiation and to crack propagation under corrosion conditions, if the problem is environmentally assisted or stress corrosion cracking. Predictions based on this purely mechanical model have been validated by proper tests both in the fatigue and in the SCC cases, indicating that notch sensitivity can indeed be treated as a stress analysis problem.
Four Data Visualization Heuristics to Facilitate Reflection in Personal Informatics
DEFF Research Database (Denmark)
Cuttone, Andrea; Petersen, Michael Kai; Larsen, Jakob Eg
2014-01-01
In this paper we discuss how to facilitate the process of reflection in Personal Informatics and Quantified Self systems through interactive data visualizations. Four heuristics for the design and evaluation of such systems have been identified through an analysis of self-tracking devices and apps ... Drawing on examples from financial analytics, it is discussed how the heuristics could guide designs that would further facilitate reflection in self-tracking personal informatics systems.
A sensitivity analysis on the TCIF model
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.
2009-09-01
Recent developments on theoretically derived distributions have highlighted the role of dominant runoff generation mechanisms as key signatures for providing insights in hydrologic similarity. Gioia et al (2008) introduced a novel distribution of flood peak annual maxima, named TCIF, based on two different threshold mechanisms, associated respectively to ordinary and extraordinary events. Indeed, ordinary floods are mostly due to rainfall events exceeding a threshold infiltration rate in a small source area, while the so-called outlier events, responsible of the high skewness of flood distributions, are triggered when severe rainfalls exceed a threshold storage in a large portion of the basin. Within this scheme, a sensitivity analysis is performed in order to provide insights in catchment classification and process conceptualization. Gioia, A., V. Iacobellis, S. Manfreda, M. Fiorentino, Runoff thresholds in derived flood frequency distributions, Hydrol. Earth Syst. Sci., 12, 1295-1307, 2008.
Structural Sustainability - Heuristic Approach
Rostański, Krzysztof
2017-10-01
Nowadays, we are faced with the challenge of joining building structures with elements of nature, which seems to be the paradigm of modern planning and design. Questions arise, however, with reference to the following categories: the leading idea, the relation between elements of nature and buildings, the features of a structure combining such elements and, finally, our perception of this structure. If we consider both the overwhelming globalization and our attempts to preserve local values, the only reasonable solution is to develop naturalistic greenery. It can add its uniqueness to any building and to any developed area. Our holistic model, presented in this paper, contains the above-mentioned categories within the scope of naturalism. The model is divided into principles, related actions, and the possible effects to be obtained. It provides a useful tool for determining the ways and priorities of our design. Although it is not possible to consider every action and solution that might support sustainability in any particular design, we can choose a proper mode for our design according to the local conditions by turning to the heuristic method, which helps to choose priorities and targets. Our approach is an attempt to follow the ways of nature, as in the natural environment it is optimal solutions that appear and survive, idealism being the domain of mankind only. We try to describe various natural processes in a manner comprehensible to us, which is always a generalization. Such descriptions, however, called artificial by naturalists, are presented as art or the current state of knowledge by artists and engineers. Reality, in fact, is always more complicated than its definitions. The heuristic method demonstrates how to optimize our design. It requires that all possible information about the local environment be gathered, as the more is known, the fewer mistakes are made. Following the unquestionable principles, we can
Heuristic Methods for Security Protocols
Directory of Open Access Journals (Sweden)
Qurat ul Ain Nizamani
2009-10-01
Full Text Available Model checking is an automatic verification technique for hardware and software systems. However, it suffers from the state-space explosion problem. In this paper we address this problem in the context of cryptographic protocols by proposing a security-property-dependent heuristic. The heuristic weights the state space by exploiting the security formulae; the weights may then be used to guide the exploration of the state space when searching for attacks.
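Weighting the state space and expanding low-weight states first amounts to a best-first search. The sketch below is a generic, hypothetical illustration, not the authors' protocol model: the state space is a toy over integers, and `weight()` stands in for the heuristic derived from the security formulae.

```python
import heapq

def best_first_search(start, neighbors, is_attack, weight, max_states=10000):
    """Explore a weighted state space, expanding low-weight states first.
    weight(state) plays the role of the property-derived heuristic."""
    seen = {start}
    frontier = [(weight(start), start)]
    while frontier and len(seen) <= max_states:
        _, state = heapq.heappop(frontier)
        if is_attack(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (weight(nxt), nxt))
    return None  # no attack found within the explored budget

# Toy state space: states are integers, an "attack" is reaching state 42.
found = best_first_search(
    0,
    neighbors=lambda s: [s + 1, s + 3, s * 2],
    is_attack=lambda s: s == 42,
    weight=lambda s: abs(42 - s),  # heuristic: distance to the attack state
)
print(found)
```

In an actual protocol model checker, states would be protocol configurations and the weight would score how close a state is to violating the security property.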
Longitudinal Genetic Analysis of Anxiety Sensitivity
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
A Heuristic Model for Supporting Users’ Decision-Making in Privacy Disclosure for Recommendation
Directory of Open Access Journals (Sweden)
Hongchen Wu
2018-01-01
Full Text Available Privacy issues have become a major concern in the web of resource sharing, and users often have difficulty managing their information disclosure in the context of high-quality experiences from social media and the Internet of Things. Recent studies have shown that users' disclosure decisions may be influenced by heuristics from the crowds, leading to inconsistency in disclosure volumes and a reduction in prediction accuracy. Therefore, an analysis of why this influence occurs and how to optimize the user experience is highly important. We propose a novel heuristic model that defines the data structures of items and participants in social media, utilizes a modified decision-tree classifier to predict participants' disclosures, and puts forward a correlation analysis for detecting disclosure inconsistencies. The heuristic model is applied to a real-time dataset to evaluate the behavioral effects. The decision-tree classifier and the correlation analysis indeed show that some participants' disclosure behaviors became less correlated as items were requested. Participants can be "persuaded" to change their disclosure behaviors, and users' answers to mildly sensitive items tend to be more variable and less predictable. Using this approach, recommender systems in social media can know the users better and provide service with higher prediction accuracy.
Heuristic space diversity control for improved meta-hyper-heuristic performance
CSIR Research Space (South Africa)
Grobler, J
2015-04-01
Full Text Available This paper expands on the concept of heuristic space diversity and investigates various strategies for the management of heuristic space diversity within the context of a meta-hyper-heuristic algorithm in search of greater performance benefits...
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
Wear-Out Sensitivity Analysis Project Abstract
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. My final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
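The Weibull-plus-Monte-Carlo workflow described above can be sketched compactly. All parameter values below (scale `eta`, fleet size, spares count, mission length) are illustrative stand-ins, not ISS data; the shape parameter `beta` plays the role of the wear-out characteristic (`beta > 1` indicates wear-out).

```python
import random

def prob_sufficiency(beta, eta, n_units=20, n_spares=3,
                     mission_hours=8760.0, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that n_spares cover all
    failures of n_units over the mission, with Weibull(eta, beta)
    lifetimes. beta is the wear-out characteristic (beta > 1: wear-out)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        failures = sum(1 for _ in range(n_units)
                       if rng.weibullvariate(eta, beta) < mission_hours)
        if failures <= n_spares:
            ok += 1
    return ok / trials

# Sweep the wear-out characteristic from the intrinsic value upward.
for beta in (1.0, 2.0, 4.0):
    print(beta, prob_sufficiency(beta, eta=30000.0))
```

The actual study would first fit `eta` and `beta` to the historical operating and failure times of each ORU population before running the sweep.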
Tilt-Sensitivity Analysis for Space Telescopes
Papalexandris, Miltiadis; Waluschka, Eugene
2003-01-01
A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
Supercritical extraction of oleaginous: parametric sensitivity analysis
Directory of Open Access Journals (Sweden)
Santos M.M.
2000-01-01
Full Text Available The economy has become global and competitive, so the vegetable oil extraction industry must advance toward minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in oilseed extraction processes has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. In this work, a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, making it possible to propose strategies for high-performance operation.
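A two-level factorial sensitivity screen of the kind described can be reproduced in a few lines. The response function below is a hypothetical stand-in for the diffusive extraction model (the factor names are illustrative), but the main-effect computation is the standard one for a full 2^3 design with coded -1/+1 levels.

```python
from itertools import product

def response(x):
    """Stand-in for the extraction-model output (e.g., oil yield).
    The real study would evaluate the supercritical CO2 model here."""
    pressure, temperature, flow = x
    return 10.0 + 3.0 * pressure - 1.5 * temperature + 0.2 * pressure * flow

levels = [-1, 1]                        # coded low/high factor levels
runs = list(product(levels, repeat=3))  # full 2^3 factorial design: 8 runs
ys = [response(x) for x in runs]

# Main effect of factor j: mean response at +1 minus mean response at -1.
for j, name in enumerate(["pressure", "temperature", "flow"]):
    hi = [y for x, y in zip(runs, ys) if x[j] == 1]
    lo = [y for x, y in zip(runs, ys) if x[j] == -1]
    print(name, sum(hi) / len(hi) - sum(lo) / len(lo))
```

Because the interaction term averages out over the balanced design, the main effects recover the linear coefficients (doubled), with the pure interaction factor showing a near-zero main effect.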
When decision heuristics and science collide.
Yu, Erica C; Sprenger, Amber M; Thomas, Rick P; Dougherty, Michael R
2014-04-01
The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of "researcher degrees of freedom." This article advances this debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on p values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little appreciated fact that the process of doing science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
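The p-value-based data collection heuristics studied here are easy to simulate. The sketch below uses illustrative parameters, not the authors' simulated research environment: it tests a true null hypothesis with a two-sided z-test and compares a fixed-sample-size rule against an optional-stopping heuristic that keeps adding observations until p < .05 or a cap is reached, a heuristic known to inflate the false-positive rate.

```python
import random
from statistics import NormalDist, mean

def z_test_p(sample):
    """Two-sided z-test p-value against mu = 0, known sigma = 1."""
    z = mean(sample) * len(sample) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(stop_rule, trials=2000, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        data = [rng.gauss(0, 1) for _ in range(20)]  # true null: mu = 0
        if stop_rule(data, rng):
            hits += 1
    return hits / trials

def fixed_n(data, rng):
    return z_test_p(data) < 0.05

def peek_and_extend(data, rng, cap=100):
    # Data-collection heuristic: test after every added observation and
    # stop as soon as p < .05 (optional stopping).
    while True:
        if z_test_p(data) < 0.05:
            return True
        if len(data) >= cap:
            return False
        data.append(rng.gauss(0, 1))

print(false_positive_rate(fixed_n))         # near the nominal .05
print(false_positive_rate(peek_and_extend)) # inflated above .05
```

The same mechanism underlies the effect-size and Bayes-factor biases reported in the article: the stopping decision is conditioned on the statistic it then distorts.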
The affect heuristic in occupational safety.
Savadori, Lucia; Caovilla, Jessica; Zaniboni, Sara; Fraccaroli, Franco
2015-07-08
The affect heuristic is a rule of thumb according to which, in the process of making a judgment or decision, people use affect as a cue. If a stimulus elicits positive affect, then the risks associated with that stimulus are viewed as low and the benefits as high; conversely, if the stimulus elicits negative affect, then the risks are perceived as high and the benefits as low. The basic tenet of this study is that the affect heuristic guides workers' judgment and decision making in risk situations: the more workers like their organization, the lower they will perceive its risks to be. A sample of 115 employers and 65 employees working in small family agricultural businesses completed a questionnaire measuring perceived safety costs, psychological safety climate, affective commitment, and safety compliance. A multi-sample structural analysis supported the thesis that safety compliance can be explained through affect-based heuristic reasoning, but only for employers. Positive affective commitment towards their family business reduced employers' compliance with safety procedures by increasing the perceived cost of implementing them.
An addendum on sensitivity analysis of the optimal assignment
Volgenant, A.
2006-01-01
We point out that sensitivity results for the linear assignment problem can be produced by a shortest-path-based approach in a straightforward manner, and as efficiently as finding an optimal solution. Keywords: Assignment; Sensitivity analysis
New insights into diversification of hyper-heuristics.
Ren, Zhilei; Jiang, He; Xuan, Jifeng; Hu, Yan; Luo, Zhongxuan
2014-10-01
There has been a growing research trend of applying hyper-heuristics for problem solving, due to their ability to balance intensification and diversification with low-level heuristics. Traditionally, the diversification mechanism is mostly realized by perturbing the incumbent solutions to escape from local optima. In this paper, we report our attempt toward providing a new diversification mechanism, based on the concept of instance perturbation. In contrast to existing approaches, the proposed mechanism achieves diversification by perturbing the instance being solved, rather than the solutions. To tackle the challenge of incorporating instance perturbation into hyper-heuristics, we also design a new hyper-heuristic framework, HIP-HOP (recursive acronym: HIP-HOP is an instance perturbation-based hyper-heuristic optimization procedure), which employs a grammar-guided high-level strategy to manipulate the low-level heuristics. With the expressive power of the grammar, constraints such as the feasibility of the output solution can easily be satisfied. Numerical results and statistical tests over both Ising spin glass and p-median problem instances show that HIP-HOP achieves promising performance. Furthermore, runtime distribution analysis reveals that, although relatively slow at the beginning, HIP-HOP is able to reach competitive solutions once given sufficient time.
Development of Heuristic Bias Detection in Elementary School
De Neys, Wim; Feremans, Vicky
2013-01-01
Although human reasoning is often biased by intuitive heuristics, recent studies have shown that adults and adolescents detect the biased nature of their judgments. The present study focused on the development of this critical bias sensitivity by examining the detection skills of young children in elementary school. Third and 6th graders were…
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed in both groups, and there was no difference in the types of errors observed between the two groups. Clinical educators perceived that third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
Sensitivity analysis of textural parameters for vertebroplasty
Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.
2002-05-01
Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications, and control of bone cement volume is believed to be one of the most critical factors in preventing them. We believed that an optimal bone cement volume could be assessed from the CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of intertrabecular space, mean width of trabeculae, area of intertrabecular space, and area of trabeculae. Finally, the area of intertrabecular space was selected as the parameter to estimate an optimal bone cement volume, and a strong linear relationship between these two variables was found (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure: the threshold level, the radius of the rolling ball, and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2
Derivative based sensitivity analysis of gamma index.
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and the distance-to-agreement (DTA) in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is scored as a "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value. This work proposes a derivative-based method for the identification of dose gradients. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point, and was considered the first of the two evaluated curves. By its nature, this curve is smooth and satisfies the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again satisfies the pass criteria at every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD', δD″) between these two curves were derived and used as boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found failing at many
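For a reference point r and evaluated points e, the gamma value is γ(r) = min over e of sqrt((ΔD/δDD)² + (Δx/δDTA)²), with γ ≤ 1 counting as a pass. A minimal 1D sketch of that computation follows; the logistic penumbra and the 0.4 mm shift are generic illustrative choices, not the paper's error-function profile.

```python
import math

def gamma_index(ref, eval_, dd=1.0, dta=1.0):
    """1D gamma analysis. ref/eval_ are lists of (position_mm, dose_pct).
    dd: dose-difference criterion in %; dta: distance criterion in mm.
    Returns one gamma value per reference point (pass if <= 1)."""
    gammas = []
    for xr, dr in ref:
        g = min(math.sqrt(((de - dr) / dd) ** 2 + ((xe - xr) / dta) ** 2)
                for xe, de in eval_)
        gammas.append(g)
    return gammas

# Reference profile: a logistic penumbra sampled every 0.5 mm over 0-10 mm.
ref = [(0.5 * i, 100.0 / (1.0 + math.exp((0.5 * i - 5.0) / 0.8)))
       for i in range(21)]
# Evaluated profile: the same curve shifted by 0.4 mm -- inside the 1 mm DTA.
evl = [(x + 0.4, d) for x, d in ref]
g = gamma_index(ref, evl)
print(round(max(g), 2))
```

As the abstract notes, such a passing gamma score says nothing about the smoothness of the evaluated curve, which is exactly the gap the derivative-based criteria address.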
MOVES2010a regional level sensitivity analysis
2012-12-10
This document discusses the sensitivity of various input parameter effects on emission rates using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
Energy Technology Data Exchange (ETDEWEB)
Gonzalez C, J.; Martin del Campo M, C.; Francois L, J.L. [Facultad de Ingenieria, UNAM, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico)
2004-07-01
The objective of this work is to verify the validity of the heuristic rules that have been applied in the radial optimization of fuel cells. The rule concerning the placement of fuel in the corners of the cell was examined, with special attention to the influence of the position and concentration of the gadolinium-bearing pellets on the reactivity of the cell and on the safety parameters. The evaluation was performed on cells designed in violation of the heuristic rules. In both cases the cells were analyzed in an infinite medium using the HELIOS code. Additionally, for the second case, a more exhaustive stage was carried out in which one of the studied cells that met the safety and reactivity parameters was used to generate the design of an assembly; this assembly was then used with CM-PRESTO to calculate the behavior of the core during three operation cycles. (Author)
The impact of choice context on consumers' choice heuristics
DEFF Research Database (Denmark)
Mueller Loose, Simone; Scholderer, Joachim; Corsi, Armando M.
2012-01-01
Context effects in choice settings have received recent attention, but little is known about the impact of context on choice consistency and the extent to which consumers apply choice heuristics. The sequence of alternatives in a choice set is examined here as one specific context effect. We compare how a change from a typical price order to a sensory order in wine menus affects consumer choice. We use pre-specified latent heuristic classes to analyse the existence of different choice processes, which begins to untangle the ‘black box’ of how consumers choose. Our findings indicate that in the absence of price order, consumers are less price-sensitive, pay more attention to visually salient cues, are less consistent in their choices and employ other simple choice heuristics more frequently than price. Implications for consumer research, marketing and consumer policy are discussed.
GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis
Kennedy, Christopher Brandon
model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P. de; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2013-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphe...
A discourse on sensitivity analysis for discretely-modeled structures
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
NPV Sensitivity Analysis: A Dynamic Excel Approach
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
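The sensitivity of NPV to the discount rate that the paper visualizes dynamically can also be written down directly; the following is a generic finite-difference sketch, not the authors' Excel model:

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at period t
    (t = 0 is undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def npv_sensitivity(rate, cash_flows, h=1e-5):
    """Central-difference estimate of dNPV/d(rate)."""
    return (npv(rate + h, cash_flows) - npv(rate - h, cash_flows)) / (2 * h)
```

For example, an investment of 100 returning 110 one period later has zero NPV at a 10% rate, while the sensitivity (about -90.9 per unit of rate) shows how quickly that conclusion changes as the rate moves.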
Conflict and Bias in Heuristic Judgment
Bhatia, Sudeep
2017-01-01
Conflict has been hypothesized to play a key role in recruiting deliberative processing in reasoning and judgment tasks. This claim suggests that changing the task so as to add incorrect heuristic responses that conflict with existing heuristic responses can make individuals less likely to respond heuristically and can increase response accuracy.…
An Effective Exercise for Teaching Cognitive Heuristics
Swinkels, Alan
2003-01-01
This article describes a brief heuristics demonstration and offers suggestions for personalizing examples of heuristics by making them relevant to students. Students complete a handout asking for 4 judgments illustrative of such heuristics. The decisions are cast in the context of students' daily lives at their particular university. After the…
Heuristics for Multidimensional Packing Problems
DEFF Research Database (Denmark)
Egeblad, Jens
for a three-dimensional knapsack packing problem involving furniture is presented in the fourth paper. The heuristic is based on a variety of techniques including tree-search, wall-building, and sequential placement. The solution process includes considerations regarding stability and load bearing strength...
Heuristic Biases in Mathematical Reasoning
Inglis, Matthew; Simpson, Adrian
2005-01-01
In this paper we briefly describe the dual process account of reasoning, and explain the role of heuristic biases in human thought. Concentrating on the so-called matching bias effect, we describe a piece of research that indicates a correlation between success at advanced level mathematics and an ability to override innate and misleading…
Advances in heuristic signal processing and applications
Chatterjee, Amitava; Siarry, Patrick
2013-01-01
There have been significant developments in the design and application of algorithms for both one-dimensional signal processing and multidimensional signal processing, namely image and video processing, with the recent focus changing from a step-by-step procedure of designing the algorithm first and following up with in-depth analysis and performance improvement to instead applying heuristic-based methods to solve signal-processing problems. In this book the contributing authors demonstrate both general-purpose algorithms and those aimed at solving specialized application problems, with a spec
sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
Sensitivity analysis is a tool used in the assessment of a model's performance. This study examined the application of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study area is Effurun, Uvwie Local Government Area of Delta State of Nigeria. In order to ...
Hermawati, Setia; Lawson, Glyn
2016-01-01
Heuristics evaluation is frequently employed to evaluate usability. While general heuristics are suitable to evaluate most user interfaces, there is still a need to establish heuristics for specific domains to ensure that their specific usability issues are identified. This paper presents a comprehensive review of 70 studies related to usability heuristics for specific domains. The aim of this paper is to review the processes that were applied to establish heuristics in specific domains and i...
Memorability in Context: A Heuristic Story.
Geurten, Marie; Meulemans, Thierry; Willems, Sylvie
2015-01-01
We examined children's ability to employ a metacognitive heuristic based on memorability expectations to reduce false recognitions, and explored whether these expectations depend on the context in which the items are presented. Specifically, 4-, 6-, and 9-year-old children were presented with high-, medium-, and low-memorability words, either mixed together (Experiment 1) or separated into two different lists (Experiment 2). Results revealed that only children with a higher level of executive functioning (9-year-olds) used the memorability-based heuristic when all types of items were presented within the same list. However, all children, regardless of age or executive level, implemented the metacognitive rule when high- and low-memorability words were presented in two separate lists. Moreover, the results of Experiment 2 showed that participants processed medium-memorability words more conservatively when they were presented in a low- than in a high-memorability list, suggesting that children's memorability expectations are sensitive to list-context effects.
Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift
DEFF Research Database (Denmark)
Lehre, Per Kristian; Witt, Carsten
2014-01-01
Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target...
Nicola, V.F.; Zaburnenko, T.S.
2006-01-01
In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in feed-forward networks. This heuristic attempts to approximate the "optimal" state-dependent change of measure without the need for difficult analysis or costly
Familiarity and recollection in heuristic decision making.
Schwikert, Shane R; Curran, Tim
2014-12-01
Heuristics involve the ability to utilize memory to make quick judgments by exploiting fundamental cognitive abilities. In the current study we investigated the memory processes that contribute to the recognition heuristic and the fluency heuristic, which are both presumed to capitalize on the byproducts of memory to make quick decisions. In Experiment 1, we used a city-size comparison task while recording event-related potentials (ERPs) to investigate the potential contributions of familiarity and recollection to the 2 heuristics. ERPs were markedly different for recognition heuristic-based decisions and fluency heuristic-based decisions, suggesting a role for familiarity in the recognition heuristic and recollection in the fluency heuristic. In Experiment 2, we coupled the same city-size comparison task with measures of subjective preexperimental memory for each stimulus in the task. Although previous literature suggests the fluency heuristic relies on recognition speed alone, our results suggest differential contributions of recognition speed and recollected knowledge to these decisions, whereas the recognition heuristic relies on familiarity. Based on these results, we created a new theoretical framework that explains decisions attributed to both heuristics based on the underlying memory associated with the choice options. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Ranking of Storm Water Harvesting Sites Using Heuristic and Non-Heuristic Weighing Approaches
Directory of Open Access Journals (Sweden)
Shray Pathak
2017-09-01
Conservation of water is essential as climate change, coupled with land use changes, influences the distribution of water availability. Stormwater harvesting (SWH) is a widely used conservation measure which reduces pressure on fresh water resources. However, determining the availability of stormwater and identifying suitable sites for SWH require consideration of various socio-economic and technical factors. Earlier studies use demand, ratio of runoff to demand and weighted demand distance as the screening criteria. In this study, a Geographic Information System (GIS) based screening methodology is adopted to identify potentially suitable SWH sites in urban areas as a first pass, and a detailed study is then done by applying suitability criteria. Initially, potential hotspots are identified through a concept of accumulated catchments, and later the sites are screened and ranked using various screening parameters, namely demand, ratio of runoff to demand and weighted demand distance. During this process, the opinion of experts for finalizing the suitable SWH sites brings subjectivity into the methodology. To obviate this, heuristic (Saaty's Analytic Hierarchy Process, AHP) and non-heuristic (entropy weight and Principal Component Analysis, PCA) weighing techniques are adopted for allotting weights to the parameters and applied to the ranking of SWH sites in Melbourne, Australia and Dehradun, India. It is observed that the heuristic approach is not effective for the study area, as it is affected by subjectivity in the expert opinion. Results obtained by the non-heuristic approach are in good agreement with the sites finalized for SWH by the water planners of the study area. Hence, the proposed ranking methodology has potential for application in decision making on suitable stormwater harvesting sites.
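Of the non-heuristic weighing techniques mentioned, entropy weighting is simple to state: criteria whose values are more dispersed across sites carry more information and receive more weight. A minimal sketch, assuming a positive decision matrix and not tied to the paper's data:

```python
import math

def entropy_weights(matrix):
    """Entropy weights for an m-sites x n-criteria decision matrix with
    positive entries: lower-entropy (more dispersed) criteria get more weight."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence.append(1.0 - e)   # degree of diversification
    s = sum(divergence)
    return [d / s for d in divergence]
```

A criterion that takes the same value at every site has entropy 1 and weight 0, which is exactly the behavior that removes expert subjectivity from the weighting.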
Sensitivity Analysis Based on Markovian Integration by Parts Formula
Directory of Open Access Journals (Sweden)
Yongsheng Hang
2017-10-01
Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variations brought about by changes of parameters. Since the integration by parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and show closed-form expressions for two commonly used time-continuous Markovian models. By comparison, we conclude that our approach outperforms the existing technique of computing sensitivity on Markovian models.
Boundary formulations for sensitivity analysis without matrix derivatives
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
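The reanalysis idea, reusing the solution machinery of the unperturbed system instead of refactoring each perturbed matrix, can be sketched generically. This is a toy dense version under assumed small perturbations, not the boundary element implementation:

```python
def reanalysis_solve(A_inv, dA, b, iters=30):
    """Solve (A + dA) x = b by fixed-point iteration x <- A_inv (b - dA x),
    reusing A_inv (i.e., the existing factorization of A). Converges when
    the perturbation dA is small relative to A."""
    n = len(b)
    x = [sum(A_inv[i][j] * b[j] for j in range(n)) for i in range(n)]
    for _ in range(iters):
        r = [b[i] - sum(dA[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [sum(A_inv[i][j] * r[j] for j in range(n)) for i in range(n)]
    return x
```

A univariate finite-difference sensitivity then only needs this cheap reanalysis of the perturbed model, never a new factorization, which is the computational economy the abstract claims.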
Sensitivity analysis on ultimate strength of aluminium stiffened panels
DEFF Research Database (Denmark)
Rigo, P.; Sarghiuta, R.; Estefen, S.
2003-01-01
This paper presents the results of an extensive sensitivity analysis carried out by Committee III.1 "Ultimate Strength" of ISSC'2003 in the framework of a benchmark on the ultimate strength of aluminium stiffened panels. Previously, different benchmarks were presented by ISSC committees … stiffened aluminium panels (including extruded profiles). Main objectives are to compare codes/models and to perform a quantitative sensitivity analysis of the ultimate strength of a welded aluminium panel with respect to various parameters (typically the heat-affected zone). Two phases were planned. In Phase A, … of different parameters (sensitivity analysis) …
Special relativity a heuristic approach
Hassani, Sadri
2017-01-01
Special Relativity: A Heuristic Approach provides a qualitative exposition of relativity theory on the basis of the constancy of the speed of light. Using Einstein's signal velocity as the defining idea for the notion of simultaneity and the fact that the speed of light is independent of the motion of its source, chapters delve into a qualitative exposition of the relativity of time and length, discuss the time dilation formula using the standard light clock, explore the Minkowski four-dimensional space-time distance based on how the time dilation formula is derived, and define the components of the two-dimensional space-time velocity, amongst other topics. Provides a heuristic derivation of the Minkowski distance formula Uses relativistic photography to see Lorentz transformation and vector algebra manipulation in action Includes worked examples to elucidate and complement the topic being discussed Written in a very accessible style
Advanced Fuel Cycle Economic Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, the nuclear power related cost study from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Sensitivity Analysis Of Evanescent Fiber Optic Sensors
Wang, Jinyu; Christensen, Douglas A.; Brynda, E.; Andrade, Joseph D.; Ives, Jeffrey T.; Lin, Jinnan
1989-06-01
Evanescent fiber optic sensors are being developed for remote in situ immunoassay. The single reflection total internal reflection fluorescence (TIRF) geometry can serve as a well-defined model against which evanescent waveguide devices can be compared and evaluated. This paper addresses the problem of optimizing the sensitivity of an evanescent fiber optic sensor (EFOS). Two aspects are discussed: (1) the modes of exciting laser light in the fiber have an effect on the sensor efficiency and signal-to-noise ratio; (2) in a fiber biosensor, there is generally a protein layer attached to the core surface; the thickness of the layer is at least 5nm. If the refractive index of the protein layer can be made equal to the refractive index of the core, we can get a new fiber waveguide in which the core also contains the protein layer. The fluorescent emission sources are thus inside the core region and generate the highest signal collection efficiency. We also discuss the situation when the refractive index of the protein layer is larger or smaller than that of the optical fiber core.
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Stochastic sensitivity analysis using HDMR and score function
Indian Academy of Sciences (India)
Probabilistic sensitivities provide important insight in reliability analysis and are often crucial to understanding the physical behaviour underlying failure and to modifying the design to mitigate and manage risk. This article presents a new computational approach for calculating stochastic sensitivities of ...
Parametric sensitivity analysis of a mathematical model of HIV ...
African Journals Online (AJOL)
Methods: In this bio-medical study, we used the tool of a sensitivity analysis to select the most sensitive model parameters of this multi-parameter system. This method is based on a variation of a model parameter one-at-a-time when other model parameters are fixed. Results: We have found that the maximum proliferation ...
Global and local sensitivity analysis methods for a physical system
Energy Technology Data Exchange (ETDEWEB)
Morio, Jerome, E-mail: jerome.morio@onera.fr [Onera-The French Aerospace Lab, F-91761, Palaiseau Cedex (France)
2011-11-15
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.
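The local/global distinction reviewed here can be made concrete on a toy black-box. This is an illustrative sketch; the model and the one-input-at-a-time variance screening are assumptions, not the paper's case study:

```python
import random

def model(x1, x2):
    # toy black-box dominated by x1
    return 3.0 * x1 + 0.5 * x2 ** 2

def local_sensitivity(f, x0, i, h=1e-6):
    """Local analysis: partial derivative at a nominal point x0."""
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

def oat_output_variance(f, i, n=10000, seed=0):
    """Crude global screening: output variance when input i varies
    uniformly over [0, 1] while the other input is fixed at 0.5."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        x = [0.5, 0.5]
        x[i] = rng.random()
        ys.append(f(*x))
    mean = sum(ys) / n
    return sum((y - mean) ** 2 for y in ys) / n
```

The local derivatives describe behavior at one nominal point, while the variance-based screening ranks inputs over their whole range; full global methods (e.g. Sobol indices) generalize the latter.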
Sensitivity Analysis of a Mesoscale Moisture Model.
1981-03-01
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
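The key property claimed above, that the sensitivities with respect to every parameter follow from at most one extra (adjoint) solve, already holds for a generic linear model. A minimal 2×2 sketch of the adjoint variable method, illustrative only and not the FDTD formulation:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def adjoint_sensitivities(A, b, c, dA_dp):
    """For A(p) x = b and response J = c^T x, a single adjoint solve
    A^T lam = c yields dJ/dp = -lam^T (dA/dp) x for every parameter."""
    x = solve2(A, b)                      # one forward solve
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, c)                   # one adjoint solve
    sens = []
    for dA in dA_dp:                      # no extra solves per parameter
        dAx = [dA[0][0] * x[0] + dA[0][1] * x[1],
               dA[1][0] * x[0] + dA[1][1] * x[1]]
        sens.append(-(lam[0] * dAx[0] + lam[1] * dAx[1]))
    return sens
```

Here two solves (one forward, one adjoint) give dJ/dp for all parameters, whereas central finite differences would need two extra solves per parameter.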
Dispersion sensitivity analysis & consistency improvement of APFSDS
Directory of Open Access Journals (Sweden)
Sangeeta Sharma Panda
2017-08-01
In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. Results of the data analysis are used in design modification of the existing ammunition. A number of designs were evaluated numerically before freezing five designs for further soundings. These designs are critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. Results are validated by free-flight trials for the finalised design.
Conceptual and Action Heuristics: Tools for the Evaluator.
McClintock, Charles
1987-01-01
Program theory can be used to improve programs and policies. This article describes a set of techniques for complicating and simplifying program theory, referred to as conceptual and action heuristics. Methods such as analyzing metaphors, clarifying concepts, mapping, component assessment, causal modeling, and decision analysis are discussed. (JAZ)
Bennun, Leonardo
2015-01-01
Improvements in performance and approval obtained by first year engineering students from University of Concepcion, Chile, were studied, once a virtual didactic model of multiple-choice exam, was implemented. This virtual learning resource was implemented in the Web ARCO platform and allows training, by facing test models comparable in both time and difficulty to those that they will have to solve during the course. It also provides a feedback mechanism for both: 1) The students, since they can verify the level of their knowledge. Once they have finished the simulations, they can access a complete problem-solving heuristic report of each problem; 2) The teachers, since they can obtain information about the habits of the students in their strategies of preparation; and they also can diagnose the weaknesses of the students prior to the exam. This study indicates how this kind of preparation generates substantial improvements on the approval rates by allowing the students: 1) A more structured and oriented syste...
Using automatic differentiation in sensitivity analysis of nuclear simulation models.
Energy Technology Data Exchange (ETDEWEB)
Alexe, M.; Roderick, O.; Anitescu, M.; Utke, J.; Fanning, T.; Hovland, P.; Virginia Tech.
2010-01-01
Sensitivity analysis is an important tool in the study of nuclear systems. In our recent work, we introduced a hybrid method that combines sampling techniques with first-order sensitivity analysis to approximate the effects of uncertainty in parameters of a nuclear reactor simulation model. For elementary examples, the approach offers a substantial advantage (in precision, computational efficiency, or both) over classical methods of uncertainty quantification.
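A common building block for such hybrid methods is first-order (linear) propagation of input variances through finite-difference derivatives. This is a generic sketch of that idea, not the authors' reactor-model code:

```python
def first_order_variance(f, x0, sigmas, h=1e-6):
    """First-order uncertainty propagation: approximate the output variance
    from input standard deviations via finite-difference gradients,
    var(f) ~ sum_i (df/dx_i)^2 * sigma_i^2 for independent inputs."""
    var = 0.0
    for i, s in enumerate(sigmas):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        g = (f(xp) - f(xm)) / (2 * h)   # central-difference gradient
        var += (g * s) ** 2
    return var
```

Sampling can then be reserved for the nonlinear residual, which is where the precision/cost advantage over purely classical methods comes from.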
Robust Sensitivity Analysis of the Optimal Value of Linear Programming
Xu, Guanglin; Burer, Samuel
2015-01-01
We propose a framework for sensitivity analysis of linear programs (LPs) in minimization form, allowing for simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex uncertainty set. This framework unifies and extends multiple approaches for LP sensitivity analysis in the literature and has close ties to worst-case linear optimization and two-stage adaptive optimization. We define the minimum (best-case) and maximum...
A Geographical Heuristic Routing Protocol for VANETs
Directory of Open Access Journals (Sweden)
Luis Urquiza-Aguiar
2016-09-01
Vehicular ad hoc networks (VANETs) leverage the communication system of Intelligent Transportation Systems (ITS). Recently, Delay-Tolerant Network (DTN) routing protocols have gained popularity in the research community for use in non-safety VANET applications and services such as traffic reporting. Vehicular DTN protocols use geographical and local information to make forwarding decisions. However, current proposals only consider the selection of the best candidate based on a local search. In this paper, we propose a generic Geographical Heuristic Routing (GHR) protocol that can be applied to any DTN geographical routing protocol that makes forwarding decisions hop by hop. GHR incorporates adaptations of the simulated annealing and Tabu-search meta-heuristics, which have largely been used to improve local-search results in discrete optimization. We include a complete performance evaluation of GHR in a multi-hop VANET simulation scenario for a reporting service. Our study analyzes all of the meaningful configurations of GHR and offers a statistical analysis of our findings by means of MANOVA tests. Our results indicate that the use of a Tabu list contributes to improving the packet delivery ratio by around 5% to 10%. Moreover, if Tabu is used, then the simulated annealing routing strategy achieves better performance than the selection of the best node used with carry and forwarding (the default operation).
Sensitivity analysis and its application for dynamic improvement
Indian Academy of Sciences (India)
[Garbled equation (7), apparently a sum of modal sensitivity terms d(θa − θb)_i/dP_j over modes i = 1…n. Recoverable figure captions: Figure 4, ODS sensitivity analysis on a laser beam printer; Figure 5, FRF of laser beam printer; Figure 6, vibration of chart driving motor; Figure 7, ODS sensitivity map due to mass modification.] Sensitivity analysis for dynamic improvement.
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Jensen, Rasmus Lund; Maagaard, Steffen; Østergård, Torben
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult…
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro, María
2016-12-26
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
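For the parametric part, the Sobol variance decomposition the abstract builds on can be estimated with a standard pick-freeze scheme. The sketch below handles a deterministic toy model with independent uniform inputs only; the extension to stochastic reaction channels is the paper's contribution and is not reproduced here:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Pick-freeze estimator of first-order Sobol indices S_i for a
    deterministic model f(list_of_inputs) with independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    f0 = sum(fA) / n
    var = sum((y - f0) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # B with column i "frozen" to the values from A
        ABi = [B[j][:i] + [A[j][i]] + B[j][i + 1:] for j in range(n)]
        fABi = [f(x) for x in ABi]
        cov = sum(fA[j] * fABi[j] for j in range(n)) / n - f0 ** 2
        indices.append(cov / var)
    return indices
```

More accurate estimators exist (e.g. Saltelli's improved formulas); this version is only meant to show the mechanics of attributing output variance to individual inputs.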
Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information
Directory of Open Access Journals (Sweden)
Chuanqi Li
2014-11-01
The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, so the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that partial rank correlation coefficients and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
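For the two-input case, the PRCC described above reduces to a closed-form partial correlation of rank-transformed samples. A sketch on a toy monotonic model (the model and sample size are invented for illustration, not taken from the SWMM study):

```python
import math, random

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)  # ties are negligible for continuous samples
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def prcc_two_inputs(x1, x2, y):
    # Partial rank correlation of each input with y, controlling for
    # the other input (closed form when one variable is controlled).
    r1, r2, ry = ranks(x1), ranks(x2), ranks(y)
    r1y, r2y, r12 = pearson(r1, ry), pearson(r2, ry), pearson(r1, r2)
    p1 = (r1y - r12 * r2y) / math.sqrt((1 - r12 ** 2) * (1 - r2y ** 2))
    p2 = (r2y - r12 * r1y) / math.sqrt((1 - r12 ** 2) * (1 - r1y ** 2))
    return p1, p2

rng = random.Random(0)
x1 = [rng.random() for _ in range(2000)]
x2 = [rng.random() for _ in range(2000)]
y = [math.exp(3 * a) + 0.2 * b for a, b in zip(x1, x2)]  # monotone, x1-dominated
p1, p2 = prcc_two_inputs(x1, x2, y)
```

Because the toy output is a strongly monotone function of x1 with only a weak x2 contribution, the PRCC for x1 should be close to 1 and clearly exceed that of x2.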
An Ant Colony based Hyper-Heuristic Approach for the Set Covering Problem
Directory of Open Access Journals (Sweden)
Alexandre Silvestre FERREIRA
2015-12-01
The Set Covering Problem (SCP) is an NP-hard combinatorial optimization problem that is challenging for meta-heuristic algorithms. In the optimization literature, several approaches using meta-heuristics have been developed to tackle the SCP, and the quality of the results provided by these approaches depends highly on customized operators that demand high effort from researchers and practitioners. In order to alleviate the complexity of designing meta-heuristics, a methodology called hyper-heuristic has emerged as a possible solution. A hyper-heuristic is capable of dynamically selecting simple low-level heuristics according to their performance, alleviating the design complexity of the problem solver while obtaining satisfactory results. In a previous study, we proposed a hyper-heuristic approach based on Ant Colony Optimization (ACO-HH) for solving the SCP. This paper extends our previous efforts, presenting better results and a deeper analysis of ACO-HH parameters and behavior, especially regarding the selection of low-level heuristics. The paper also presents a comparison with an ACO meta-heuristic customized for the SCP.
A single cognitive heuristic process meets the complexity of domain-specific moral heuristics.
Dubljević, Veljko; Racine, Eric
2014-10-01
The inherence heuristic (a) offers modest insights into the complex nature of both the is-ought tension in moral reasoning and moral reasoning per se, and (b) does not reflect the complexity of domain-specific moral heuristics. Because the process described as the "inherence heuristic" is formal and general in nature, we contextualize it in a web of domain-specific heuristics (e.g., agent-specific, action-specific, consequence-specific).
Gigerenzer, Gerd
2009-01-01
In their comment on Marewski et al. (good judgments do not require complex cognition, 2009) Evans and Over (heuristic thinking and human intelligence: a commentary on Marewski, Gaissmaier and Gigerenzer, 2009) conjectured that heuristics can often lead to biases and are not error free. This is a most surprising critique. The computational models of heuristics we have tested allow for quantitative predictions of how many errors a given heuristic will make, and we and others have measured the amount of error by analysis, computer simulation, and experiment. This is clear progress over simply giving heuristics labels, such as availability, that do not allow for quantitative comparisons of errors. Evans and Over argue that the reason people rely on heuristics is the accuracy-effort trade-off. However, the comparison between heuristics and more effortful strategies, such as multiple regression, has shown that there are many situations in which a heuristic is more accurate with less effort. Finally, we do not see how the fast and frugal heuristics program could benefit from a dual-process framework unless the dual-process framework is made more precise. Instead, the dual-process framework could benefit if its two “black boxes” (Type 1 and Type 2 processes) were substituted by computational models of both heuristics and other processes. PMID:19784854
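The quantitative-error point can be made concrete: a fast-and-frugal heuristic such as take-the-best is a computational model whose errors can simply be counted against a criterion. A toy sketch (the objects, cue values, and validity order are invented for illustration, not data from the commentary):

```python
from itertools import combinations

# Each object: (name, criterion value, binary cue profile). The cue
# order below is assumed to be the validity order.
objects = [
    ("A", 100, (1, 1, 0)),
    ("B", 70,  (1, 0, 1)),
    ("C", 40,  (0, 1, 1)),
    ("D", 10,  (0, 0, 0)),
]

def take_the_best(a, b, cue_order=(0, 1, 2)):
    # Inspect cues in validity order; decide on the first cue that
    # discriminates between the two objects, otherwise guess (None).
    for c in cue_order:
        if a[2][c] != b[2][c]:
            return a if a[2][c] > b[2][c] else b
    return None

def count_errors():
    # Count correct inferences over all pairwise comparisons,
    # skipping pairs where the heuristic must guess.
    correct = decided = 0
    for a, b in combinations(objects, 2):
        pick = take_the_best(a, b)
        if pick is None:
            continue
        decided += 1
        correct += pick is (a if a[1] > b[1] else b)
    return correct, decided

correct, decided = count_errors()
```

On this tiny environment the heuristic happens to decide every pair correctly; changing the cue profiles changes the error count, which is exactly the kind of quantitative prediction the passage contrasts with mere labels.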
Heuristics and bias in homeopathy.
Souter, K
2006-10-01
The practice of homeopathy ought to be strictly logical. In the Organon, Samuel Hahnemann gives the impression that the unprejudiced observer should be able to follow an algorithmic route to the simillimum in every case. Research on judgment and decision making, however, indicates that when people grapple with complex systems like homeopathy, they are more likely to use heuristics, or empirical rules, to help them reach a solution. Thus Hahnemann's concept of the unprejudiced observer is virtually impossible to attain. There is inevitable bias in both case-taking and remedy selection. Understanding the types of bias may enable practitioners to reduce their own bias.
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...... results for the period before thermal penetration (tp) has occurred. The analysis is also done for all combinations of two parameters in order to find the combination with the largest effect. The Sobol total for pairs had the highest value for the combination of energy release rate and area of opening...
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
2012-01-01
OVERVIEW OF PRESENTATION: Evaluation Parameters; EPA's Sensitivity Analysis; Comparison to Baseline Case; MOVES Sensitivity Run Specification; MOVES Sensitivity Input Parameters; Results; Uses of Study
Analysis of implicit and explicit lattice sensitivities using DRAGON
Energy Technology Data Exchange (ETDEWEB)
Ball, M.R., E-mail: ballmr@mcmaster.ca; Novog, D.R., E-mail: novog@mcmaster.ca; Luxat, J.C., E-mail: luxatj@mcmaster.ca
2013-12-15
Highlights: • We developed a way to propagate point-wise perturbations using only WIMS-D4 multigroup data. • The method inherently includes treatment of multi-group implicit sensitivities. • We compared our calculated sensitivities to an industry standard tool (TSUNAMI-1D). • In general, our results agreed well with TSUNAMI-1D. - Abstract: Deterministic lattice physics transport calculations are used extensively within the context of operational and safety analysis of nuclear power plants. As such, the sensitivity and uncertainty in the evaluated nuclear data used to predict neutronic interactions and other key transport phenomena are critical topics for research. Sensitivity analysis of nuclear systems with respect to fundamental nuclear data using multi-energy-group discretization is complicated by the dilution dependency of multi-group macroscopic cross-sections as a result of resonance self-shielding. It has become common to group sensitivities into implicit and explicit effects to aid in understanding the nature of the sensitivities involved in the calculations; however, the overall sensitivity is an integral of these effects. Explicit effects stem from perturbations performed for a specific nuclear datum for a given isotope at a specific energy, and their direct impact on the end figure of merit. Implicit effects stem from resonance self-shielding and can alter the sensitivities at other energies, for other reactions, or even for other isotopes. Quantification of the implicit sensitivity component involves some manner of treatment of resonance parameters in a way that is self-consistent with perturbations occurring in the associated multi-group cross-sections. A procedure for assessing these implicit effects is described in the context of the Bondarenko method of self-shielding and implemented using a WIMS-D4 multi-group nuclear library and the lattice solver DRAGON. The resulting sensitivity results were compared
Energy Technology Data Exchange (ETDEWEB)
Ortiz, J. J.; Castillo, J. A.; Montes, J. L.; Hernandez, J. L., E-mail: juanjose.ortiz@inin.gob.m [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2009-10-15
This work studies one of the heuristic rules used in fuel cell design for boiling water nuclear reactors. The rule requires that the lowest uranium enrichment be placed in the corners of the fuel cell. A greedy search is also applied to fuel cell designs in which this rule is not explicitly taken into account, allowing any uranium enrichment to be placed provided it does not contain gadolinium. Results are shown for the quality of the cell obtained by the greedy search both when the rule is considered and when it is not. Cell quality is measured by the power peaking factor obtained, as well as by the neutron multiplication factor in an infinite medium. Cells with 1 and 2 gadolinium concentrations were analyzed under operating conditions at 120% of the nominal power of the reactors of the Laguna Verde nuclear power plant. The results show that not considering the rule in cells with a single gadolinium concentration degrades the performance of the greedy search, whereas for cells with two gadolinium concentrations the greedy search performed better. (Author)
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians; hence, sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of sensitivity and specificity tests using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
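A hedged sketch of the kind of calculation such tables encode, using the common normal-approximation formula in which the number of diseased (or healthy) subjects needed for a confidence-interval half-width d around sensitivity (or specificity) is inflated by the expected prevalence; the numbers below are illustrative, not taken from the paper's tables:

```python
import math

def n_for_sensitivity(se, d, prevalence, z=1.96):
    # Diseased subjects needed so the 95% CI around an anticipated
    # sensitivity `se` has half-width `d`, then inflated by the
    # prevalence to get the total sample size to recruit.
    n_diseased = z * z * se * (1.0 - se) / (d * d)
    return math.ceil(n_diseased / prevalence)

def n_for_specificity(sp, d, prevalence, z=1.96):
    # Same idea for specificity, inflating by (1 - prevalence).
    n_healthy = z * z * sp * (1.0 - sp) / (d * d)
    return math.ceil(n_healthy / (1.0 - prevalence))

# Anticipated sensitivity 0.90, precision 0.05, prevalence 10%:
n_se = n_for_sensitivity(0.90, 0.05, 0.10)
# Anticipated specificity 0.85, same precision and prevalence:
n_sp = n_for_specificity(0.85, 0.05, 0.10)
```

Low-prevalence conditions dominate the total: here the sensitivity target alone drives recruitment into the thousands.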
How the twain can meet: Prospect theory and models of heuristics in risky choice.
Pachur, Thorsten; Suter, Renata S; Hertwig, Ralph
2017-03-01
Two influential approaches to modeling choice between risky options are algebraic models (which focus on predicting the overt decisions) and models of heuristics (which are also concerned with capturing the underlying cognitive process). Because they rest on fundamentally different assumptions and algorithms, the two approaches are usually treated as antithetical, or even incommensurable. Drawing on cumulative prospect theory (CPT; Tversky & Kahneman, 1992) as the currently most influential instance of a descriptive algebraic model, we demonstrate how the two modeling traditions can be linked. CPT's algebraic functions characterize choices in terms of psychophysical (diminishing sensitivity to probabilities and outcomes) as well as psychological (risk aversion and loss aversion) constructs. Models of heuristics characterize choices as rooted in simple information-processing principles such as lexicographic and limited search. In computer simulations, we estimated CPT's parameters for choices produced by various heuristics. The resulting CPT parameter profiles portray each of the choice-generating heuristics in psychologically meaningful ways, capturing, for instance, differences in how the heuristics process probability information. Furthermore, CPT parameters can reflect a key property of many heuristics, lexicographic search, and track the environment-dependent behavior of heuristics. Finally, we show, both in an empirical and a model recovery study, how CPT parameter profiles can be used to detect the operation of heuristics. We also address the limits of CPT's ability to capture choices produced by heuristics. Our results highlight an untapped potential of CPT as a measurement tool to characterize the information processing underlying risky choice.
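CPT's algebraic core is compact enough to sketch directly. Below are the value and probability-weighting functions with the median parameters reported by Tversky and Kahneman (1992) (alpha = beta = 0.88, lambda = 2.25, gamma = 0.61 for gains and 0.69 for losses), applied to a mixed two-outcome gamble; this illustrates only the functional form, not the paper's parameter-estimation simulations:

```python
def v(x, alpha=0.88, lam=2.25):
    # Value function: concave for gains, convex and steeper
    # (loss aversion, lambda = 2.25) for losses.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma):
    # Inverse-S-shaped probability weighting function.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(gain, p_gain, loss, p_loss):
    # Mixed two-outcome gamble: decision weights applied separately
    # to the gain part (gamma = 0.61) and loss part (gamma = 0.69).
    return w(p_gain, 0.61) * v(gain) + w(p_loss, 0.69) * v(loss)

flip = cpt_value(100, 0.5, -100, 0.5)  # a fair coin flip over +/-100
```

The negative CPT value of the fair coin flip reproduces the classic loss-aversion prediction that such gambles are rejected.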
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
Sensitivity analysis of a sound absorption model with correlated inputs
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the results of the tests show that correlation has a very important impact on the results of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.
Evaluating Heuristics for Planning Effective and Efficient Inspections
Shull, Forrest J.; Seaman, Carolyn B.; Diep, Madeline M.; Feldmann, Raimund L.; Godfrey, Sara H.; Regardie, Myrna
2010-01-01
A significant body of knowledge concerning software inspection practice indicates that the value of inspections varies widely both within and across organizations. Inspection effectiveness and efficiency can be measured in numerous ways, and may be affected by a variety of factors such as inspection planning, the type of software, the developing organization, and many others. In the early 1990's, NASA formulated heuristics for inspection planning based on best practices and early NASA inspection data. Over the intervening years, the body of data from NASA inspections has grown. This paper describes a multi-faceted exploratory analysis performed on this data to elicit lessons learned in general about conducting inspections and to recommend improvements to the existing heuristics. The contributions of our results include support for modifying some of the original inspection heuristics (e.g., increasing the recommended page rate), evidence that inspection planners must choose between efficiency and effectiveness, as a good tradeoff between them may not exist, and identification of small subsets of inspections for which new inspection heuristics are needed. Most importantly, this work illustrates the value of collecting rich data on software inspections, and using it to gain insight into, and improve, inspection practice.
Cooperative heuristic multi-agent planning
De Weerdt, M.M.; Tonino, J.F.M.; Witteveen, C.
2001-01-01
In this paper we use the framework to study cooperative heuristic multi-agent planning. During the construction of their plans, the agents use a heuristic function inspired by the FF planner. At any time in the process of planning the agents may exchange available resources, or they may
Effective Heuristics for New Venture Formation
Kraaijenbrink, Jeroen
2010-01-01
Entrepreneurs are often under time pressure and may only have a short window of opportunity to launch their new venture. This means they often have no time for rational analytical decisions and rather rely on heuristics. Past research on entrepreneurial heuristics has primarily focused on predictive
A Heuristic Approach to Scheduling University Timetables.
Loo, E. H.; And Others
1986-01-01
Categories of facilities utilization and scheduling requirements to be considered when using a heuristic approach to timetabling are described, together with a nine-step algorithm and the computerized timetabling system, Timetable Schedules System (TTS), which utilizes a heuristic approach. An example demonstrating the use of TTS and a program flowchart…
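Timetabling heuristics of this kind are often variants of greedy graph coloring: assign each course the earliest slot not used by any conflicting course. A minimal sketch (the course names and conflicts are invented; TTS's actual nine-step algorithm is not reproduced here):

```python
def greedy_timetable(courses, conflicts):
    # conflicts: set of frozensets naming course pairs that clash
    # (shared students, staff, or rooms). Courses are processed in
    # the given priority order and take the earliest free slot.
    slots = {}
    for c in courses:
        used = {slots[o] for o in slots if frozenset((c, o)) in conflicts}
        t = 0
        while t in used:
            t += 1
        slots[c] = t
    return slots

courses = ["math", "physics", "chem", "art"]
conflicts = {frozenset(p) for p in [("math", "physics"),
                                    ("math", "chem"),
                                    ("physics", "chem")]}
tt = greedy_timetable(courses, conflicts)
```

The three mutually clashing courses spread over three slots while the unconstrained course reuses slot 0, which is the space-saving behavior a timetabling heuristic aims for.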
Modeling reproductive decisions with simple heuristics
Directory of Open Access Journals (Sweden)
Peter Todd
2013-10-01
BACKGROUND Many of the reproductive decisions that humans make happen without much planning or forethought, arising instead through the use of simple choice rules or heuristics that involve relatively little information and processing. Nonetheless, these heuristic-guided decisions are typically beneficial, owing to humans' ecological rationality: the evolved fit between our constrained decision mechanisms and the adaptive problems we face. OBJECTIVE This paper reviews research on the ecological rationality of human decision making in the domain of reproduction, showing how fertility-related decisions are commonly made using various simple heuristics matched to the structure of the environment in which they are applied, rather than with information-hungry mechanisms based on optimization or rational economic choice. METHODS First, heuristics for sequential mate search are covered; these heuristics determine when to stop the process of mate search by deciding that a good-enough mate who is also mutually interested has been found, using a process of aspiration-level setting and assessing. These models are tested via computer simulation and comparison to demographic age-at-first-marriage data. Next, a heuristic process of feature-based mate comparison and choice is discussed, in which mate choices are determined by a simple process of feature matching with relaxing standards over time. Parental investment heuristics used to divide resources among offspring are summarized. Finally, methods for testing the use of such mate choice heuristics in a specific population over time are then described.
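The aspiration-level mechanism described for sequential mate search can be sketched in a few lines: learn an aspiration from an initial sample of candidates, then accept the first later candidate who clears it. This is a satisficing rule in the spirit of the models reviewed, with invented parameters:

```python
import random

def search_with_aspiration(candidates, learn=10):
    # Aspiration level = best of the first `learn` candidates seen;
    # accept the first later candidate exceeding it, otherwise
    # settle for the last one when no one clears the bar.
    aspiration = max(candidates[:learn])
    for quality in candidates[learn:]:
        if quality > aspiration:
            return quality
    return candidates[-1]

rng = random.Random(7)
picks = [search_with_aspiration([rng.random() for _ in range(100)])
         for _ in range(1000)]
mean_pick = sum(picks) / len(picks)
```

Despite inspecting only a fraction of each pool before committing, the rule's average pick is far above the population mean of 0.5, illustrating the ecological-rationality point that simple stopping rules can perform well.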
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
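The variogram idea at the heart of VARS can be sketched with a one-factor-at-a-time estimate of gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] on a toy response surface; this illustrates only the core quantity, not the STAR-VARS sampling strategy:

```python
import math, random

def f(x1, x2):
    # Toy response surface: highly sensitive to x1, mildly to x2.
    return math.sin(2 * math.pi * x1) + 0.1 * x2

def directional_variogram(f, dim_i, h, n=20_000, seed=3):
    # gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]: the directional
    # variogram that VARS summarizes across perturbation scales h.
    rng = random.Random(3)
    acc = 0.0
    for _ in range(n):
        # Sample base points so the perturbed point stays in [0, 1].
        x = [rng.random() * (1 - h), rng.random() * (1 - h)]
        xp = list(x)
        xp[dim_i] += h
        acc += (f(*xp) - f(*x)) ** 2
    return 0.5 * acc / n

g1 = directional_variogram(f, 0, h=0.1)
g2 = directional_variogram(f, 1, h=0.1)
```

A steeply rising variogram along x1 versus a nearly flat one along x2 is the sensitivity signature VARS generalizes into a full spectrum over h.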
Detecting Sensitive Analysis of Inside Defect in Shearography
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyung Suk; Kang, Ki Soo; Yun, Heong Suk [Dept. of Mechenical Design Engineering, Chosun University, Gwangju (Korea, Republic of); Choi, Tae Ho [LARC, Chosun University, Gwangju (Korea, Republic of)
2005-11-15
Shearography is an optical method that has been applied to nondestructive testing (NDT) and strain/stress analysis. The technique has the merit of directly measuring the first derivative of displacement, the sensitivity of which can be adjusted by handling the optical components in the interferometer. However, this adjustment is related to error in the quantitative evaluation of a defect. In this paper, a technique for the quantitative evaluation of a defect in shearography is proposed, with a theoretical foundation and experimental proof. The factors affecting quantitative analysis are discussed in detail, and the concepts of critical shearing amount and critical loading amount are introduced. The detection sensitivity of shearography is analyzed.
Direct Sensitivity Analysis of the DC-to-DC Converters
Directory of Open Access Journals (Sweden)
Elena Niculescu
2009-05-01
The mathematical principle of the direct sensitivity analysis of dynamic systems and its application to DC-to-DC PWM converters are presented. The model of the dynamic system associated with the PWM Sepic converter, with parasitics included, continuous conduction mode (CCM), and coupled inductors, was used in this study. The modelling of the converter and the state sensitivity analysis with respect to some parameters of the converter have been performed in the MATLAB environment. The algorithm carried out for computing the state sensitivity functions of the converter can be applied to other configurations of DC-to-DC PWM converters, for the two operating modes (CCM and DCM), with parasitics included and with coupled or separate inductors, regardless of system order.
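State sensitivity functions of the kind computed for the Sepic converter are obtained by integrating a sensitivity ODE alongside the state equation. A much simpler stand-in, assuming a first-order RC low-pass step response rather than the Sepic topology (all component values are invented):

```python
def rc_with_sensitivity(R=100.0, C=1e-3, u=5.0, dt=1e-5, steps=20_000):
    # State equation x' = (u - x)/(R*C) and its state sensitivity
    # s = dx/dR, obtained by differentiating the ODE w.r.t. R:
    #   s' = -s/(R*C) - (u - x)/(R**2 * C)
    # Both are integrated together with forward Euler.
    x, s = 0.0, 0.0
    for _ in range(steps):
        dx = (u - x) / (R * C)
        ds = -s / (R * C) - (u - x) / (R * R * C)
        x, s = x + dt * dx, s + dt * ds
    return x, s

x, s = rc_with_sensitivity()
# Analytic check: x(t) = u*(1 - exp(-t/(R*C))) and
# dx/dR = -u*t/(R**2 * C) * exp(-t/(R*C)); at t = 0.2 s these give
# approximately 4.3233 and -0.013534.
```

The same augmentation (append one sensitivity state per parameter of interest) extends to higher-order converter models, at the cost of doubling the state dimension per parameter.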
Stable locality sensitive discriminant analysis for image recognition.
Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang
2014-06-01
Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in an unstable representation of the intra-class geometrical structure and limited performance of the algorithm. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it into the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.
Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities
Directory of Open Access Journals (Sweden)
Thi Thanh Huyen Nguyen
2015-11-01
The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (higher education institutions with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, obtained by replacement as well as aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
Global Sensitivity Analysis of the WASIM hydrological model using VARS
Rehan Anis, Muhammad; Haghnegahdar, Amin; Razavi, Saman; Wheater, Howard
2017-04-01
Sensitivity analysis (SA) aims to identify the key parameters that affect model performance, and it plays an important role in model understanding, calibration, and uncertainty quantification. The increasing complexity of physically based hydrological models warrants the application of comprehensive SA methods for an improved and effective application of hydrological modeling. This study aims to provide a comprehensive sensitivity assessment of the WaSiM (Richards version 9.03) hydrological model, using the novel and efficient global SA technique Variogram Analysis of Response Surfaces (VARS), at the experimental Schaefertal catchment (1.44 km²) in the lower Harz Mountains, Germany. WaSiM is a distributed hydrological model that can simulate surface and sub-surface flows at various spatial and temporal scales. VARS is a variogram-based framework for global SA that can characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. Our preliminary SA results show that simulated streamflows in WaSiM are most sensitive to the precipitation correction factor, followed by parameters related to snowmelt and flow density. We aim to expand this assessment by conducting a more comprehensive global SA with more than 70 parameters from various model components corresponding to interception, infiltration, evapotranspiration, snowmelt, and runoff. This will enable us to provide an enhanced understanding of the WaSiM structure and identify the dominant controls of its behavior, which can be utilized to reduce model prediction uncertainty and the number of parameters needed for calibration.
Structural Optimization of Slender Robot Arm Based on Sensitivity Analysis
Directory of Open Access Journals (Sweden)
Zhong Luo
2012-01-01
An effective structural optimization method based on sensitivity analysis is proposed to optimize the variable section of a slender robot arm. The structural mechanism and operating principle of a polishing robot are introduced first, and its stiffness model is established. Then, a design sensitivity analysis method and a sequential linear programming (SLP) strategy are developed. At the beginning of the optimization, the design sensitivity analysis method is applied to select the sensitive design variables, which makes the optimized results more efficient and accurate. In addition, it can also be used to determine the scale of the moving step, which improves convergence during the optimization process. The design sensitivities are calculated using the finite difference method. The search for the final optimal structure is performed using the SLP method. Simulation results show that the proposed structural optimization method is effective in enhancing the stiffness of the robot arm, regardless of whether the arm suffers a constant force or variable forces.
Directory of Open Access Journals (Sweden)
Fanrong Kong
2017-09-01
To alleviate greenhouse gas emissions and dependence on fossil fuels, Plug-in Hybrid Electric Vehicles (PHEVs) have gained increasing popularity in recent decades. Because of fluctuating electricity prices in the power market, the charging schedule strongly influences driving cost. Although next-day electricity prices can be obtained in a day-ahead power market, a driving plan is not easily made in advance. PHEV owners can input a next-day plan into a charging system (e.g., an aggregator) a day ahead, but doing so every day is tedious, and the plan may not be very accurate. To address this problem, we analyze energy demands from a PHEV owner's historical driving records and build a personalized statistical driving model. Based on this model and electricity spot prices, a rolling optimization strategy is proposed to make the charging decision for the current time slot. On the one hand, by employing a heuristic algorithm, the schedule is made according to the situations in the following time slots; on the other hand, after the current time slot, the schedule is remade over the next tens of time slots. The schedule is thus produced by a dynamic rolling optimization, but it only fixes the charging decision for the current time slot. In this way, both the fluctuation of electricity prices and the driving routine are incorporated into the scheduling, and it is not necessary for PHEV owners to input a day-ahead driving plan. Simulation results demonstrate that the proposed method helps owners save charging costs while meeting driving requirements.
Interactive Building Design Space Exploration Using Regionalized Sensitivity Analysis
DEFF Research Database (Denmark)
Jensen, Rasmus Lund; Maagaard, Steffen; Østergård, Torben
2017-01-01
Monte Carlo simulations combined with regionalized sensitivity analysis provide the means to explore a vast, multivariate design space in building design. Typically, sensitivity analysis shows how the variability of model output relates to the uncertainties in model inputs. This reveals which simulation inputs are most important and which have negligible influence on the model output. Popular sensitivity methods include the Morris method, variance-based methods (e.g. Sobol's), and regression methods (e.g. SRC). However, all these methods only address one output at a time, which makes it difficult … Here they are used in combination with the interactive parallel coordinate plot (PCP), an effective tool to explore stochastic simulations and to find high-performing building designs. The proposed methods help decision makers focus their attention on the most important design parameters when exploring …
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
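The "5.25 +/- 0.01" convention lends itself to a very small parser. Here is a hedged sketch of how such annotations might be replaced by random draws for one Monte Carlo sample; the regex, the uniform distribution, and the input line are illustrative assumptions, not the paper's implementation.

```python
import random
import re

# matches a value annotated with a tolerance, e.g. "5.25 +/- 0.01"
TOLERANCE = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

def perturb(text, rng):
    """Replace every 'value +/- tol' annotation with a random draw from
    the uniform interval [value - tol, value + tol], leaving the rest of
    the input file untouched."""
    def draw(match):
        value, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(value - tol, value + tol))
    return TOLERANCE.sub(draw, text)

rng = random.Random(0)
sample = perturb("wall_temperature = 5.25 +/- 0.01", rng)
value = float(sample.split("=")[1])
```

Because the substitution only touches the annotated fields, the same routine works on any input-file format, which is the robustness argument made in the abstract.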
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
Multicriteria Evaluation and Sensitivity Analysis on Information Security
Syamsuddin, Irfan
2013-05-01
Information security plays a significant role in today's information society. The increasing number and impact of cyber attacks on information assets have raised awareness among managers that an attack on information is actually an attack on the organization itself. Unfortunately, a model for information security evaluation at the management level is still not well defined. In this study, decision analysis based on the Ternary Analytic Hierarchy Process (T-AHP) is proposed as a novel model to aid managers who are responsible for making strategic evaluations of information security issues. In addition, sensitivity analysis is applied to extend the analysis with several "what-if" scenarios in order to measure the consistency of the final evaluation. Finally, we conclude that the final evaluation made by managers has significant consistency, as shown by the sensitivity analysis results.
Social biases determine spatiotemporal sparseness of ciliate mating heuristics.
Clark, Kevin B
2012-01-01
Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present
Lower extremity angle measurement with accelerometers - error and sensitivity analysis
Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.
1991-01-01
The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found
Analytical analysis of sensitivity of optical waveguide sensor | Verma ...
African Journals Online (AJOL)
In this article, we carry out an analytical analysis of the sensitivity and mode field of an optical waveguide structure using the effective index method. This structure, as predicted, has an extended mode which can interact with the surrounding analyte much better than the commonly used EWS.
Detecting Tipping points in Ecological Models with Sensitivity Analysis
Broeke, ten G.A.; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, Jaap
2016-01-01
Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The
Detecting tipping points in ecological models with sensitivity analysis
ten Broeke, G.A.; van Voorn, G.A.K.; Kooi, B.W.; Molenaar, J.
2016-01-01
Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The
Sensitivity Analysis of a Horizontal Earth Electrode under Impulse ...
African Journals Online (AJOL)
This paper presents the sensitivity analysis of an earthing conductor under the influence of impulse current arising from a lightning stroke. The approach is based on the 2nd order finite difference time domain (FDTD). The earthing conductor is regarded as a lossy transmission line where it is divided into series connected ...
Sensitivity analysis of railpad parameters on vertical railway track dynamics
Oregui Echeverria-Berreyarza, M.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.
2016-01-01
This paper presents a sensitivity analysis of railpad parameters on vertical railway track dynamics, incorporating the nonlinear behavior of the fastening (i.e., downward forces compress the railpad whereas upward forces are resisted by the clamps). For this purpose, solid railpads, rail-railpad
Determination of temperature of moving surface by sensitivity analysis
Farhanieh, B
2002-01-01
In this paper, sensitivity analysis in inverse problem solutions is employed to estimate the temperature of a moving surface. A moving finite element method is used for spatial discretization. Time derivatives are approximated using the Crank-Nicolson method. The accuracy of the solution is assessed by simulation. The convergence domain is investigated for the determination of the temperature of a solid fuel.
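As a reminder of the time discretization named in the abstract, here is a minimal Crank-Nicolson step for the scalar decay equation y' = -a*y. This is a textbook illustration only; the paper applies the scheme within a moving finite element solver.

```python
import math

def crank_nicolson_decay(a, y0, dt, steps):
    """Crank-Nicolson (trapezoidal) stepping for y' = -a*y:
    (y_{n+1} - y_n)/dt = -a * (y_{n+1} + y_n) / 2, solved for y_{n+1}.
    The scheme is second-order accurate in dt and unconditionally stable."""
    factor = (1.0 - 0.5 * a * dt) / (1.0 + 0.5 * a * dt)
    y = y0
    for _ in range(steps):
        y *= factor
    return y

# 100 steps of dt = 0.01 should closely track exp(-1) ≈ 0.3679
y_end = crank_nicolson_decay(a=1.0, y0=1.0, dt=0.01, steps=100)
```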
Sensitivity analysis of the Ohio phosphorus risk index
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design
Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno
2017-01-01
This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software and to simplify the process of performing sensitivity analysis, and it was ultimately found to outperform a human trajectory designer in unexpected ways. These benefits are discussed and demonstrated on sample mission designs.
Sequence length variation, indel costs, and congruence in sensitivity analysis
DEFF Research Database (Denmark)
Aagesen, Lone; Petersen, Gitte; Seberg, Ole
2005-01-01
The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which...
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
The mathematical modelling of physiochemical interactions in the framework of industrial and environmental physics usually relies on an initial value problem which is described by a deterministic system of first order ordinary differential equations. In this paper, we considered a sensitivity analysis of studying the qualitative ...
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas and its major source is combustion of fossil fuels for power generation. The objective of this study is to carry out the steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. First...
Design tradeoff studies and sensitivity analysis. Appendix B
Energy Technology Data Exchange (ETDEWEB)
1979-05-25
The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)
A NONLINEAR FEASIBILITY PROBLEM HEURISTIC
Directory of Open Access Journals (Sweden)
Sergio Drumond Ventura
2015-04-01
In this work we consider a region S ⊂ ℝⁿ given by a finite number of nonlinear smooth convex inequalities and having nonempty interior. We assume a point x0 is given which is close, in a certain norm, to the analytic center of S, and that a new nonlinear smooth convex inequality is added to those defining S (the perturbed region). It is constructively shown how to obtain a shift of the right-hand side of this inequality such that the point x0 is still close (in the same norm) to the analytic center of the shifted region. Starting from this point and using the theoretical results shown, we develop a heuristic that allows us to obtain the approximate analytic center of the perturbed region. We then present a procedure to solve the nonlinear feasibility problem. The procedure was implemented, and we performed numerical tests for the quadratic (random) case.
Energy Technology Data Exchange (ETDEWEB)
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA
2017-05-01
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.
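The variance decomposition mentioned above is not spelled out in the abstract; the following is a generic pick-freeze sketch of a first-order Sobol index estimate. The additive toy model and independent U(0,1) inputs are assumptions for illustration, far simpler than the spatially correlated Hanford parameter fields.

```python
import random

def sobol_first_order(model, k, n=20000, seed=1):
    """Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for a model with k independent U(0,1)
    inputs, using the classic pick-freeze estimator
    S_i ≈ (mean(y_A * y_Ci) - f0^2) / Var(Y)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [model(x) for x in A]
    f0 = sum(yA) / n
    var = sum((y - f0) ** 2 for y in yA) / n
    indices = []
    for i in range(k):
        # C_i: rows of B with column i replaced by that of A ("pick-freeze")
        yC = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        s_i = (sum(ya * yc for ya, yc in zip(yA, yC)) / n - f0 ** 2) / var
        indices.append(s_i)
    return indices

# additive toy model: x0 carries ~80% of the output variance, x1 ~20%
s = sobol_first_order(lambda x: 2.0 * x[0] + x[1], k=2)
```

A hierarchical scheme like the one in the paper would apply this decomposition level by level, with geostatistical realizations standing in for the plain uniform samples used here.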
Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M
2013-11-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR). … These findings may aid in identifying biomarkers, and in developing more comprehensive neurochemical models, of psychostimulant sensitization. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.
A comparative study of the A* heuristic search algorithm used to solve efficiently a puzzle game
Iordan, A. E.
2018-01-01
The puzzle game presented in this paper consists of polyhedra (prisms, pyramids, or pyramidal frustums) which can be moved using the free available spaces. The problem is to find the minimum number of moves needed for the game to reach a goal configuration from an initial configuration. Because the problem is quite complex, the principal difficulty in solving it is the size of the search space, which makes a heuristic search necessary. The search method is improved by obtaining a strong estimate from the heuristic function, which guides the search toward the most promising side of the search tree. The comparative study is carried out between the Manhattan heuristic and the Hamming heuristic using the A* search algorithm implemented in Java. This paper also presents the stages in the object-oriented development of software used to solve this puzzle game efficiently. The modelling of the software is achieved through specific UML diagrams representing the phases of analysis, design, and implementation, the system thus being described in a clear and practical manner. To confirm the theoretical result that the Manhattan heuristic is more efficient, a space-complexity criterion was used. Space complexity was measured by the number of nodes generated in the search tree, the number of expanded nodes, and the effective branching factor. The experimental results obtained using the Manhattan heuristic show improvements in the space complexity of the A* algorithm compared with the Hamming heuristic.
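The paper's Java implementation is not reproduced here; the following is a compact, generic A* sketch on the classic 8-puzzle (a stand-in for the polyhedra puzzle) that compares the Manhattan and Hamming heuristics by expanded-node count.

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the empty space

def hamming(state):
    """Number of misplaced tiles (the empty space is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    """Sum of horizontal and vertical distances of each tile to its goal cell."""
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            goal = t - 1
            total += abs(i // 3 - goal // 3) + abs(i % 3 - goal % 3)
    return total

def neighbors(state):
    """All states reachable by sliding one tile into the empty space."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = 3 * nr + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, h):
    """A* search; returns (solution length, number of expanded nodes)."""
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if g > best_g.get(state, float("inf")):
            continue  # stale heap entry
        if state == GOAL:
            return g, expanded
        expanded += 1
        for nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None, expanded

start = (0, 1, 3, 4, 2, 5, 7, 8, 6)  # solvable in 4 moves
len_m, exp_m = astar(start, manhattan)
len_h, exp_h = astar(start, hamming)
```

Because Manhattan distance dominates the Hamming count on every state, A* with the Manhattan heuristic never expands more nodes than with Hamming (up to tie-breaking); the gap grows on deeper instances, which is the space-complexity effect the paper measures.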
A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.
Hart, Emma; Sim, Kevin
2016-01-01
We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
Comparison of Heuristics for Inhibitory Rule Optimization
Alsolami, Fawaz
2014-09-13
Knowledge representation and extraction are very important tasks in data mining. In this work, we propose a variety of rule-based greedy algorithms that are able to express the knowledge contained in a given dataset as a set of inhibitory rules, which contain an expression "attribute ≠ value" on the right-hand side. The main goal of this paper is to determine, based on the rule characteristics of length and coverage, whether the proposed rule heuristics are statistically significantly different; if so, we aim to identify the best-performing heuristics for minimizing rule length and maximizing rule coverage. The Friedman test with the Nemenyi post-hoc test is used to compare the greedy algorithms statistically against each other for length and coverage. The experiments are carried out on real datasets from the UCI Machine Learning Repository. For the leading heuristics, the constructed rules are compared with optimal ones obtained with a dynamic programming approach. The results are promising for the best heuristics: the average relative difference between the length (coverage) of constructed and optimal rules is at most 2.27% (7%, respectively). Furthermore, the quality of classifiers based on sets of inhibitory rules constructed by the considered heuristics is compared, and the results show that the three best heuristics from the point of view of classification accuracy coincide with the three well-performing heuristics from the point of view of rule length minimization.
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Hesselholt, Allan Tind
2007-01-01
…satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible to identify the most important parameters in relation to building performance and to focus the design and optimization of sustainable buildings on these fewer, but most important, parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where…
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
Sensitivity Analysis Of Technological And Material Parameters In Roll Forming
Gehring, Albrecht; Saal, Helmut
2007-05-01
Roll forming has been applied for several decades to manufacture thin-gauge profiles. However, knowledge about this technology is still based on empirical approaches. Due to the complexity of the forming process, the main effects on profile properties are difficult to identify. This is especially true for the interaction of technological and material parameters. General considerations for building a finite-element model of the roll forming process are given in this paper. A sensitivity analysis is performed on the basis of a statistical design approach in order to identify the effects and interactions of different parameters on profile properties. The parameters included in the analysis are the roll diameter, the rolling speed, the sheet thickness, friction between the tools and the sheet, and the strain-hardening behavior of the sheet material. The analysis includes an isotropic hardening model and a nonlinear kinematic hardening model. All jobs are executed in parallel to reduce the overall time, as the sensitivity analysis requires much CPU time. The results of the sensitivity analysis demonstrate the opportunities to improve the properties of roll-formed profiles by adjusting technological and material parameters to their optimum interacting performance.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
A global sensitivity analysis approach for morphogenesis models
Boas, Sonja E. M.
2015-11-21
Background Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such ‘black-box’ models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operant mechanisms in morphogenesis. The workflow is applicable to all ‘black-box’ models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.
Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk
2017-01-01
Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
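The fixed-budget perspective in the abstract can be illustrated with a bare-bones RLS using random 2-opt moves on a small Euclidean instance. The instance, budget, and acceptance rule below are illustrative choices, not the paper's smoothed-analysis setup.

```python
import math
import random

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_fixed_budget(pts, budget, seed=0):
    """Randomized Local Search on the TSP: each iteration proposes one
    random 2-opt move (segment reversal) and accepts it iff the tour does
    not get longer. Returns the tour length after exactly `budget`
    iterations, i.e. the fixed-budget fitness."""
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour_length(tour, pts)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        length = tour_length(cand, pts)
        if length <= best:
            tour, best = cand, length
    return best

# 12 cities on a circle; the optimal tour follows the circle's perimeter
pts = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
       for k in range(12)]
start_len = rls_fixed_budget(pts, budget=0)
end_len = rls_fixed_budget(pts, budget=3000)
```

The fitness gain `start_len - end_len` as a function of the budget is exactly the quantity the fixed-budget analysis bounds in expectation; a (1+1) EA variant would replace the single 2-opt move with a Poisson-distributed number of moves per iteration.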
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik
2009-01-01
Building performance can be expressed by different indicators such as primary energy use, environmental load and/or the indoor environmental quality, and a building performance simulation can provide the decision maker with a quantitative measure of the extent to which an integrated design solution satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to more efficiently develop alternative design solutions or reach optimized design solutions. Sensitivity analyses make it possible to identify the most important parameters in relation to building performance and to focus design and optimization of sustainable buildings on these fewer, but most important, parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where it is still possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for design of an office building in Denmark.
Lazarus, P.; Brazier, A.; Hessels, J. W. T.; Karako-Argaman, C.; Kaspi, V. M.; Lynch, R.; Madsen, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Swiggum, J.; Zhu, W. W.; Allen, B.; Bogdanov, S.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R.; Freire, P. C. C.; Jenet, F. A.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Venkataraman, A.
2015-10-01
The on-going Arecibo Pulsar-ALFA (PALFA) survey began in 2004 and is searching for radio pulsars in the Galactic plane at 1.4 GHz. Here we present a comprehensive description of one of its main data reduction pipelines that is based on the PRESTO software and includes new interference-excision algorithms and candidate selection heuristics. This pipeline has been used to discover 40 pulsars, bringing the survey’s discovery total to 144 pulsars. Of the new discoveries, eight are millisecond pulsars (MSPs; P < 10 ms) and one is a Fast Radio Burst (FRB). This pipeline has also re-detected 188 previously known pulsars, 60 of them previously discovered by the other PALFA pipelines. We present a novel method for determining the survey sensitivity that accurately takes into account the effects of interference and red noise: we inject synthetic pulsar signals with various parameters into real survey observations and then attempt to recover them with our pipeline. We find that the PALFA survey achieves the sensitivity to MSPs predicted by theoretical models but suffers a degradation for P ≳ 100 ms that gradually becomes up to ~10 times worse for P > 4 s at DM < 150 pc cm⁻³. We estimate that 33 ± 3% of the slower pulsars are missed, largely due to red noise. A population synthesis analysis using the sensitivity limits we measured suggests the PALFA survey should have found 224 ± 16 un-recycled pulsars in the data set analyzed, in agreement with the 241 actually detected. The reduced sensitivity could have implications for estimates of the number of long-period pulsars in the Galaxy.
A new heuristic for the quadratic assignment problem
Zvi Drezner
2002-01-01
We propose a new heuristic for the solution of the quadratic assignment problem. The heuristic combines ideas from tabu search and genetic algorithms. Run times are very short compared with other heuristic procedures. The heuristic performed very well on a set of test problems.
A Heuristic Criterion for Instability to Fragmentation in Rotating, Interstellar Clouds
Boss, Alan Paul
1982-01-01
A heuristic criterion, based on linear perturbation analysis, is applied to the initial growth of density perturbations in isothermal or adiabatic gas clouds, with initially uniform density and uniform rotation. The heuristic criterion is shown to be consistent with the available results from numerical calculations of cloud collapse. The criterion predicts that perturbations varying as cos(mφ) will be most likely to grow when )pi is small, unless the cloud is nearly pressureless.
Configuration design sensitivity analysis and optimization of beam structures
Choi, J.-H.
A general method for configuration design sensitivity analysis over a three-dimensional beam structure is developed based on a variational formulation of the classical beam in linear elasticity. A sensitivity formula is derived based on a variational equation for the beam structure using the material derivative concept and the adjoint variable method. The formulation considers not only the shape variation in a three-dimensional direction, which includes translational as well as rotational change of the beam, but also the orientation angle variation of the beam's cross section. The sensitivity formula can be evaluated with generality and ease, even by employing a piecewise linear design velocity field, despite the fact that the bending model is a fourth-order differential equation. The design sensitivity analysis is implemented using the post-processing data of the commercial code ANSYS. Several numerical examples are given to show the excellent accuracy of the method. Optimization is carried out for a tilted arch bridge and an arch grid structure to show the method's applicability.
An overview of the design and analysis of simulation experiments for sensitivity analysis
Kleijnen, J.P.C.
2005-01-01
Sensitivity analysis may serve validation, optimization, and risk analysis of simulation models. This review surveys 'classic' and 'modern' designs for experiments with simulation models. Classic designs were developed for real, non-simulated systems in agriculture, engineering, etc. These designs
A Sensitivity Analysis of the Rigid Pavement Life-Cycle Cost Analysis Program
2000-12-01
Original Report Date: September 1999. This report describes the sensitivity analysis performed on the Rigid Pavement Life-Cycle Cost Analysis program, a computer program developed by the Center for Transportation Research for the Texas Department of ...
The Methods of Sensitivity Analysis and Their Usage for Analysis of Multicriteria Decision
Directory of Open Access Journals (Sweden)
Rūta Simanavičienė
2011-08-01
In this paper we describe the application fields of sensitivity analysis methods. We review the application of these methods in multiple criteria decision making when the initial data are numbers. We formulate the problem of which sensitivity analysis method is most effective for use in the decision making process. (Article in Lithuanian)
Sensitization trajectories in childhood revealed by using a cluster analysis
DEFF Research Database (Denmark)
Schoos, Ann-Marie M.; Chawes, Bo L.; Melen, Erik
2017-01-01
…biologically and clinically relevant. OBJECTIVE: We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. METHODS: We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent patterns explicitly characterizing temporal development of sensitization while clustering allergens and children. Subsequently, these patterns were investigated in relation to asthma, rhinitis, and eczema. Verification was sought in an independent unselected birth cohort (BAMSE) constituting 3051 children…
Hermawati, Setia; Lawson, Glyn
2016-09-01
Heuristic evaluation is frequently employed to evaluate usability. While general heuristics are suitable for evaluating most user interfaces, there is still a need to establish heuristics for specific domains to ensure that their specific usability issues are identified. This paper presents a comprehensive review of 70 studies related to usability heuristics for specific domains. The aim of this paper is to review the processes that were applied to establish heuristics in specific domains and to identify gaps in order to provide recommendations for future research and areas of improvement. The most urgent issue found is the deficiency of validation effort following heuristics proposition and the lack of robustness and rigour of the validation methods adopted. Whether domain-specific heuristics perform better or worse than general ones is inconclusive, owing to the lack of validation quality and of clarity on how to assess the effectiveness of heuristics for specific domains. The lack of validation quality also hampers efforts to improve existing heuristics for specific domains, as their weaknesses are not addressed. Copyright © 2016 Elsevier Ltd. All rights reserved.
A direct heuristic algorithm for linear programming
Indian Academy of Sciences (India)
Abstract. An (3) mathematically non-iterative heuristic procedure that needs no artificial variable is presented for solving linear programming problems. An optimality test is included. Numerical experiments depict the utility/scope of such a procedure.
Judgements with errors lead to behavioral heuristics
Ungureanu, S.
2016-01-01
A decision process robust to errors in the estimation of values, probabilities and times will employ heuristics that generate consistent apparent biases like loss aversion, nonlinear probability weighting with discontinuities and present bias.
Heuristics and Biases in Retirement Savings Behavior
Shlomo Benartzi; Richard Thaler
2007-01-01
Standard economic theories of saving implicitly assume that households have the cognitive ability to solve the relevant optimization problem and the willpower to execute the optimal plan. Both of the implicit assumptions are suspect. Even among economists, few spend much time calculating a personal optimal savings rate. Instead, most people cope by adopting simple heuristics, or rules of thumb. In this paper, we investigate both the heuristics and the biases that emerge in the area of retirem...
Investigations of quantum heuristics for optimization
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
Case Based Heuristic Selection for Timetabling Problems
Burke, Edmund; Petrovic, Sanja; Qu, Rong
2006-01-01
This paper presents a case-based heuristic selection approach for automated university course and exam timetabling. The method described in this paper is motivated by the goal of developing timetabling systems that are fundamentally more general than the current state of the art. Heuristics that worked well in previous similar situations are memorized in a case base and are retrieved for solving the problem in hand. Knowledge discovery techniques are employed in two distinct scenarios. Firstl...
Sensitivity Analysis of Launch Vehicle Debris Risk Model
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.
Directory of Open Access Journals (Sweden)
Aureliano Crameri
2015-07-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials. One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random (MNAR) data in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient’s and therapist’s versions of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and nonrandomized effectiveness studies in the field of outpatient psychotherapy.
Sensitivity analysis of urban flood flows to hydraulic controls
Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah
2017-04-01
Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of flood flow spatiotemporal dynamics, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at scale 1/200, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge), and spatially uniform or distributed friction coefficient and porosity, respectively, tested in various ranges centered around their nominal values, calibrated thanks to accurate experimental data and related uncertainties. For various experimental configurations a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer to the downstream boundary condition on water height, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study, along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. Concerning simulations with distributed friction coefficients, crossroad friction is shown to have much higher influence on the upstream water depth profile than street
B1-sensitivity analysis of quantitative magnetization transfer imaging.
Boudreau, Mathieu; Stikov, Nikola; Pike, G Bruce
2017-03-27
To evaluate the sensitivity of quantitative magnetization transfer (qMT) fitted parameters to B1 inaccuracies, focusing on the difference between two categories of T1 mapping techniques: B1-independent and B1-dependent. The B1-sensitivity of qMT was investigated and compared using two T1 measurement methods: inversion recovery (IR) (B1-independent) and variable flip angle (VFA) (B1-dependent). The study was separated into four stages: 1) numerical simulations, 2) sensitivity analysis of the Z-spectra, 3) healthy subjects at 3T, and 4) comparison using three different B1 imaging techniques. For typical B1 variations in the brain at 3T (±30%), the simulations resulted in errors of the pool-size ratio (F) ranging from -3% to 7% for VFA and -40% to >100% for IR, agreeing with the Z-spectra sensitivity analysis. In healthy subjects, pooled whole-brain Pearson correlation coefficients for F (comparing measured double angle and nominal flip angle B1 maps) were ρ = 0.97/0.81 for VFA/IR. This work describes the B1-sensitivity characteristics of qMT, demonstrating that it depends substantially on the B1-dependency of the T1 mapping method. In particular, the pool-size ratio is more robust against B1 inaccuracies if VFA T1 mapping is used, so much so that B1 mapping could be omitted without substantially biasing F. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Social heuristics shape intuitive cooperation.
Rand, David G; Peysakhovich, Alexander; Kraft-Todd, Gordon T; Newman, George E; Wurzbacher, Owen; Nowak, Martin A; Greene, Joshua D
2014-04-22
Cooperation is central to human societies. Yet relatively little is known about the cognitive underpinnings of cooperative decision making. Does cooperation require deliberate self-restraint? Or is spontaneous prosociality reined in by calculating self-interest? Here we present a theory of why (and for whom) intuition favors cooperation: cooperation is typically advantageous in everyday life, leading to the formation of generalized cooperative intuitions. Deliberation, by contrast, adjusts behaviour towards the optimum for a given situation. Thus, in one-shot anonymous interactions where selfishness is optimal, intuitive responses tend to be more cooperative than deliberative responses. We test this 'social heuristics hypothesis' by aggregating across every cooperation experiment using time pressure that we conducted over a 2-year period (15 studies and 6,910 decisions), as well as performing a novel time pressure experiment. Doing so demonstrates a positive average effect of time pressure on cooperation. We also find substantial variation in this effect, and show that this variation is partly explained by previous experience with one-shot lab experiments.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
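As a minimal sketch of two of the rule-based heuristics named above, MCT and Min-min can be expressed over an ETC (expected time to compute) matrix, as is common in this literature; the matrix values, the assumption of independent tasks, and the tie-breaking are illustrative, not taken from the paper.

```python
def mct(etc):
    """Minimum Completion Time: tasks in arrival order, each assigned to
    the machine on which it would finish earliest. etc[t][m] is the
    execution time of task t on machine m. Returns the makespan."""
    ready = [0.0] * len(etc[0])
    for row in etc:
        m = min(range(len(ready)), key=lambda i: ready[i] + row[i])
        ready[m] += row[m]
    return max(ready)

def min_min(etc):
    """Min-min: among unscheduled tasks, repeatedly schedule the task whose
    best-machine completion time is smallest. Returns the makespan."""
    ready = [0.0] * len(etc[0])
    todo = list(range(len(etc)))
    while todo:
        # smallest (completion time, task, machine) over all pairs
        c, t, m = min((ready[i] + etc[t][i], t, i)
                      for t in todo for i in range(len(ready)))
        ready[m] = c
        todo.remove(t)
    return max(ready)

if __name__ == "__main__":
    etc = [[14, 16], [5, 11], [21, 7], [13, 19], [6, 4]]  # 5 tasks, 2 machines
    print("MCT makespan:", mct(etc), "Min-min makespan:", min_min(etc))
```

Comparing the makespans (and, analogously, cost, throughput, and degree of imbalance) across many random ETC matrices reproduces the kind of performance comparison the abstract describes.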
Adapting Nielsen's Design Heuristics to Dual Processing for Clinical Decision Support.
Taft, Teresa; Staes, Catherine; Slager, Stacey; Weir, Charlene
2016-01-01
The study objective was to improve the applicability of Nielsen's standard design heuristics for evaluating electronic health record (EHR) alerts and linked ordering support by integrating them with Dual Process theory. Through an initial heuristic evaluation and a user study of 7 physicians, usability problems were identified. Through independent mapping of specific usability criteria to support for each of the dual cognitive processes (S1 and S2) and deliberation, agreement was reached on mapping criteria. Finally, usability errors from the heuristic and user study were mapped to S1 and S2. Adding a dual process perspective to heuristic analysis increases the applicability and relevance of computerized health information design evaluations. This mapping enables designers to ensure that their systems are tailored to support attention allocation: System 1 is supported by improving pattern recognition and saliency, and System 2 through efficiency and control of information access.
Sensitivity and Uncertainty analysis of saltwater intrusion in coastal aquifers
Zhao, Z.; Jin, G.; Zhao, J.; Li, L.; Chen, X.; Tao, X.
2012-12-01
Aquifer heterogeneity has been a focus in uncertainty analysis of saltwater intrusion in coastal aquifers, especially the spatial variance of hydraulic conductivities. In this study, we investigated how inland and seaward boundary conditions may also contribute to the uncertainty in predicting saltwater intrusion, in addition to the aquifer properties. Based on numerical simulations, the analysis focused on the salt-freshwater mixing zone, characterized by its location, given by the contour line of 50% salt concentration of seawater, and its width, the area between the contour lines of 10% and 90% seawater concentration. Sensitivity analysis was conducted first to identify the most influential factors on the location and width of the mixing zone among tidal amplitude, freshwater influx rate, aquifer permeability, fluid viscosity and longitudinal dispersivity. Based on the results of the sensitivity analysis, an efficient sampling strategy was formed to determine the parameter space for uncertainty analysis. The results showed that (1) both freshwater influx across the inland boundary and tidal oscillations on the seaward boundary imposed a retardation effect on the mixing zone; and (2) seasonal variations of freshwater influx rate combined with tidal fluctuations of sea level led to great uncertainty in the simulated mixing zone.
Species sensitivity analysis of heavy metals to freshwater organisms.
Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou
2015-10-01
Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distributions (SSD) curves of vertebrate and invertebrate were constructed by log-logistic model separately. The comprehensive comparisons of the sensitivities of different trophic species to six typical heavy metals were performed. The results indicated invertebrate taxa to each heavy metal exhibited higher sensitivity than vertebrates. However, with respect to the same taxa species, Cu had the most adverse effect on vertebrate, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrate and fish were complicated as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curves of Pb constructed by total species no longer crossed with others. The hazardous concentrations for 5 % of the species (HC5) affected were derived to determine the concentration protecting 95 % of species. The HC5 values of the six heavy metals were in the descending order: Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in opposite order. Moreover, potential affected fractions were calculated to assess the ecological risks of different heavy metals at certain concentrations of the selected heavy metals. Evaluations of sensitivities of the species at various trophic levels and toxicity analysis of heavy metals are necessary prior to derivation of water quality criteria and the further environmental protection.
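A minimal sketch of the SSD/HC5 computation described above, assuming a log-logistic SSD (i.e., log-toxicity follows a logistic distribution) fitted by the method of moments; the LC50 values below are hypothetical placeholders, not data from the study.

```python
import math
import statistics

def hc5_loglogistic(toxicity_values):
    """Fit a log-logistic species sensitivity distribution by the method of
    moments on log-transformed toxicity values (log X ~ logistic(mu, s)),
    then return HC5, the concentration expected to affect only 5% of species."""
    logs = [math.log(x) for x in toxicity_values]
    mu = statistics.fmean(logs)
    # logistic scale parameter from the sample standard deviation
    s = statistics.stdev(logs) * math.sqrt(3) / math.pi
    p = 0.05
    return math.exp(mu + s * math.log(p / (1 - p)))  # logistic 5% quantile

if __name__ == "__main__":
    # hypothetical acute LC50s (mg/L) for one metal across ten species
    lc50 = [0.02, 0.05, 0.11, 0.24, 0.5, 1.1, 2.3, 4.8, 10.0, 21.0]
    print(f"HC5 = {hc5_loglogistic(lc50):.4f} mg/L")
```

Comparing HC5 values computed this way across metals gives the kind of descending-order ranking (and hence inverse toxicity ordering) reported in the abstract.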
Heuristic Modeling for TRMM Lifetime Predictions
Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.
1996-01-01
Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial-off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful for missions such as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
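The look-up-table-plus-engine-model scheme described above can be sketched as follows. All numbers (flux levels, maneuver rates, fuel cost per maneuver) are hypothetical placeholders for one assumed ballistic coefficient, not values from the TRMM analysis.

```python
import bisect

# hypothetical look-up table: solar flux index (F10.7) -> maneuvers/month
FLUX_LEVELS = [70, 100, 150, 200, 250]
MANEUVERS_PER_MONTH = [0.5, 1.0, 2.5, 5.0, 8.0]

def maneuver_rate(flux):
    """Maneuvers per month, by linear interpolation in the look-up table
    (clamped at the table ends)."""
    if flux <= FLUX_LEVELS[0]:
        return MANEUVERS_PER_MONTH[0]
    if flux >= FLUX_LEVELS[-1]:
        return MANEUVERS_PER_MONTH[-1]
    j = bisect.bisect_right(FLUX_LEVELS, flux)
    x0, x1 = FLUX_LEVELS[j - 1], FLUX_LEVELS[j]
    y0, y1 = MANEUVERS_PER_MONTH[j - 1], MANEUVERS_PER_MONTH[j]
    return y0 + (y1 - y0) * (flux - x0) / (x1 - x0)

def months_of_fuel(fuel_kg, flux_forecast, fuel_per_maneuver_kg=0.8):
    """Walk the monthly solar-flux forecast, spending fuel per maneuver via
    a simple engine model, until the tank can no longer cover a month."""
    months = 0
    for flux in flux_forecast:
        cost = maneuver_rate(flux) * fuel_per_maneuver_kg
        if fuel_kg < cost:
            break
        fuel_kg -= cost
        months += 1
    return months
```

Re-running `months_of_fuel` against each revised flux forecast is cheap, which is the point of the heuristic: the expensive traditional runs are needed only to populate the table.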
Expression Sensitivity Analysis of Human Disease Related Genes
Directory of Open Access Journals (Sweden)
Liang-Xiao Ma
2013-01-01
Background. Genome-wide association studies (GWAS) have shown their revolutionary power in genetically locating loci that influence complex diseases. Thousands of replicated loci for common traits are helpful in disease risk assessment. However, it is currently still difficult to elucidate the variations in these loci that directly cause susceptibility to diseases by disrupting the expression or function of a protein. Results. We evaluate the expression features of disease-related genes and find that genes related to different diseases show different expression perturbation sensitivities in various conditions. It is worth noting that the expression of some robust disease genes does not change significantly in their corresponding diseases; these genes might easily be ignored in expression profile analysis. Conclusion. Gene ontology enrichment analysis indicates that robust disease genes execute essential functions in comparison with sensitive disease genes. The diseases associated with robust genes tend to be relatively lethal, like cancer and aging. On the other hand, the diseases associated with sensitive genes are apparently nonlethal, like psych and chemical dependency diseases.
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
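For reference, first-order Sobol indices can also be estimated directly by Monte Carlo pick-and-freeze sampling, the brute-force baseline that meta-models like the SVR above are meant to accelerate. This is a minimal sketch for independent inputs uniform on [0, 1], using a Saltelli-style estimator; the test model is an additive function whose exact first-order indices are 0.2 and 0.8.

```python
import random

def sobol_first_order(model, dim, n=20000, seed=0):
    """Monte Carlo (pick-and-freeze) estimate of the first-order Sobol
    indices S_i of `model` for inputs i.i.d. uniform on [0, 1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(dim):
        # A with column i taken from B: isolates the effect of input i
        yABi = [model(A[k][:i] + [B[k][i]] + A[k][i + 1:]) for k in range(n)]
        Vi = sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / n
        S.append(Vi / var)
    return S

if __name__ == "__main__":
    # additive test model y = x0 + 2*x1: exact indices are [0.2, 0.8]
    s = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2, n=20000)
    print([round(v, 2) for v in s])
```

The cost is n·(dim + 2) model evaluations, which is exactly what makes a cheap meta-model attractive when each evaluation is expensive.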
Sensitivity analysis practices: Strategies for model-based inference
Energy Technology Data Exchange (ETDEWEB)
Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)
2006-10-15
Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz), we searched Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments that have taken place in this discipline, of the good practices that have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, our review found little beyond very primitive SA tools based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance-based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factors' importance to be defined rigorously, thus making the ranking of factors' importance univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
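The abstract's point that OAT is only justified for linear models can be illustrated numerically: for a purely interactive model, varying one factor at a time around a baseline reports zero effect for every factor, while a variance-based view over the full input space exposes the interaction. A minimal sketch (model and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    return x1 * x2  # purely interactive model: no main effects at the origin

# One-factor-at-a-time around the baseline (0, 0): each factor looks inert.
grid = np.linspace(-1.0, 1.0, 21)
oat_effect_x1 = float(np.ptp(model(grid, 0.0)))  # response range with x2 frozen
oat_effect_x2 = float(np.ptp(model(0.0, grid)))

# Exploring the whole input space instead reveals the interaction.
x = rng.uniform(-1.0, 1.0, size=(100_000, 2))
total_variance = float(model(x[:, 0], x[:, 1]).var())
```

Both OAT "effects" are exactly zero, yet the output variance over the full domain is about 1/9, entirely due to the interaction that OAT cannot see.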
Multiobjective engineering design optimization problems: a sensitivity analysis approach
Directory of Open Access Journals (Sweden)
Oscar Brito Augusto
2012-12-01
Full Text Available This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variable and/or parameter that causes uncertainties, then associates a robustness index with each design alternative and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, since the resulting interval of variation in the objective functions is accepted. The second method does not require any law of probability distribution of uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
Sensitivity analysis of critical experiments with evaluated nuclear data libraries
Energy Technology Data Exchange (ETDEWEB)
Fujiwara, D.; Kosaka, S. [Tepco Systems Corporation, Nuclear Engineering Dept., Tokyo (Japan)
2008-07-01
Criticality benchmark testing was performed with evaluated nuclear data libraries for thermal, low-enriched uranium fuel rod applications. C/E values for k{sub eff} were calculated with the continuous-energy Monte Carlo code MVP2 and its libraries generated from Endf/B-VI.8, Endf/B-VII.0, JENDL-3.3 and JEFF-3.1. Subsequently, the observed k{sub eff} discrepancies between libraries were decomposed using a sensitivity analysis technique to specify the sources of difference in the nuclear data libraries. The obtained sensitivity profiles are also utilized to assess how well cold critical experiments represent a boiling water reactor under hot operating conditions. (authors)
Sensitivity Analysis Applied in Design of Low Energy Office Building
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik
2008-01-01
Building performance can be expressed by different indicators, such as primary energy use, environmental load and/or indoor environmental quality, and a building performance simulation can provide the decision maker with a quantitative measure of the extent to which an integrated design solution... satisfies the design requirements and objectives. In the design of sustainable buildings it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently or to reach optimized design solutions. A sensitivity analysis makes it possible... to identify the most important parameters in relation to building performance and to focus the design and optimization of sustainable buildings on these fewer, but most important, parameters. The sensitivity analyses will typically be performed at a reasonably early stage of the building design process, where...
Sensitivity Analysis of Hardwired Parameters in GALE Codes
Energy Technology Data Exchange (ETDEWEB)
Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.
2008-12-01
The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.
High derivatives for fast sensitivity analysis in linear magnetodynamics
Energy Technology Data Exchange (ETDEWEB)
Petin, P. [ENSIEG, Saint Martin d`Heres (France). Lab. d`Electrotechnique de Grenoble]|[FRMASOFT+CSI, Lyon (France); Coulomb, J.L. [ENSIEG, Saint Martin d`Heres (France). Lab. d`Electrotechnique de Grenoble; Conraux, P. [FRAMASOFT+CSI, Lyon (France)
1997-03-01
In this article, the authors present a method of sensitivity analysis using high-order derivatives and Taylor expansion. The principle is to find a polynomial approximation of the finite element solution with respect to the sensitivity parameters. While presenting the method, they explain why it is applicable only with special parameters. They apply it to a magnetodynamic problem simple enough that the analytical solution can be found with a formal calculus tool. They then present the implementation and the good results obtained with the polynomial, first by comparing the derivatives themselves, then by comparing the approximate solution with the theoretical one. After this validation, the authors present results on a real 2D application and underline the possibilities of reuse in other fields of physics.
Heuristic space diversity management in a meta-hyper-heuristic framework
CSIR Research Space (South Africa)
Grobler, J
2014-07-01
Full Text Available IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6-11 July 2014. Heuristic Space Diversity Management in a Meta-Hyper-Heuristic Framework. Jacomine Grobler1 and Andries P. Engelbrecht2 1Department of Industrial and Systems...
Biosphere dose conversion Factor Importance and Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-10-15
This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model: biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
Sensitization trajectories in childhood revealed by using a cluster analysis.
Schoos, Ann-Marie M; Chawes, Bo L; Melén, Erik; Bergström, Anna; Kull, Inger; Wickman, Magnus; Bønnelykke, Klaus; Bisgaard, Hans; Rasmussen, Morten A
2017-12-01
Assessment of sensitization at a single time point during childhood provides limited clinical information. We hypothesized that sensitization develops as specific patterns with respect to age at debut, development over time, and involved allergens and that such patterns might be more biologically and clinically relevant. We sought to explore latent patterns of sensitization during the first 6 years of life and investigate whether such patterns associate with the development of asthma, rhinitis, and eczema. We investigated 398 children from the at-risk Copenhagen Prospective Studies on Asthma in Childhood 2000 (COPSAC2000) birth cohort with specific IgE against 13 common food and inhalant allergens at the ages of ½, 1½, 4, and 6 years. An unsupervised cluster analysis for 3-dimensional data (nonnegative sparse parallel factor analysis) was used to extract latent patterns explicitly characterizing temporal development of sensitization while clustering allergens and children. Subsequently, these patterns were investigated in relation to asthma, rhinitis, and eczema. Verification was sought in an independent unselected birth cohort (BAMSE) constituting 3051 children with specific IgE against the same allergens at 4 and 8 years of age. The nonnegative sparse parallel factor analysis indicated a complex latent structure involving 7 age- and allergen-specific patterns in the COPSAC2000 birth cohort data: (1) dog/cat/horse, (2) timothy grass/birch, (3) molds, (4) house dust mites, (5) peanut/wheat flour/mugwort, (6) peanut/soybean, and (7) egg/milk/wheat flour. Asthma was solely associated with pattern 1 (odds ratio [OR], 3.3; 95% CI, 1.5-7.2), rhinitis with patterns 1 to 4 and 6 (OR, 2.2-4.3), and eczema with patterns 1 to 3 and 5 to 7 (OR, 1.6-2.5). All 7 patterns were verified in the independent BAMSE cohort (R2 > 0.89). This study suggests the presence of specific sensitization patterns in early childhood differentially associated with development of clinical
Using tree diversity to compare phylogenetic heuristics.
Sul, Seung-Jin; Matthews, Suzanne; Williams, Tiffani L
2009-04-29
Evolutionary trees are family trees that represent the relationships between a group of organisms. Phylogenetic heuristics are used to search stochastically for the best-scoring trees in tree space. Given that better tree scores are believed to be better approximations of the true phylogeny, traditional evaluation techniques have used tree scores to determine the heuristics that find the best scores in the fastest time. We develop new techniques to evaluate phylogenetic heuristics based on both tree scores and topologies to compare Pauprat and Rec-I-DCM3, two popular Maximum Parsimony search algorithms. Our results show that although Pauprat and Rec-I-DCM3 find the trees with the same best scores, topologically these trees are quite different. Furthermore, the Rec-I-DCM3 trees cluster distinctly from the Pauprat trees. In addition to our heatmap visualizations of using parsimony scores and the Robinson-Foulds distance to compare best-scoring trees found by the two heuristics, we also develop entropy-based methods to show the diversity of the trees found. Overall, Pauprat identifies more diverse trees than Rec-I-DCM3. Overall, our work shows that there is value to comparing heuristics beyond the parsimony scores that they find. Pauprat is a slower heuristic than Rec-I-DCM3. However, our work shows that there is tremendous value in using Pauprat to reconstruct trees-especially since it finds identical scoring but topologically distinct trees. Hence, instead of discounting Pauprat, effort should go in improving its implementation. Ultimately, improved performance measures lead to better phylogenetic heuristics and will result in better approximations of the true evolutionary history of the organisms of interest.
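The Robinson-Foulds distance used above counts the bipartitions (internal edges) present in one unrooted tree but not the other. A minimal sketch, assuming each tree's bipartitions have already been extracted and each split is encoded by one of its sides (the trees shown are illustrative):

```python
def robinson_foulds(splits_a, splits_b):
    """RF distance: size of the symmetric difference of the two trees'
    bipartition sets (one entry per internal edge)."""
    return len(splits_a ^ splits_b)

# Four-taxon unrooted trees each have a single internal edge.
# Tree 1: ((A,B),(C,D))   Tree 2: ((A,C),(B,D))
t1 = {frozenset({"A", "B"})}
t2 = {frozenset({"A", "C"})}
dist = robinson_foulds(t1, t2)   # no shared bipartition -> distance 2
```

Two trees with identical parsimony scores can still have a large RF distance, which is exactly the phenomenon the study reports for Pauprat versus Rec-I-DCM3.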
Tuning Parameters in Heuristics by Using Design of Experiments Methods
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. A One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, optimal solutions for multiple instances were found efficiently.
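The 2-level full factorial design named above can be sketched as an exhaustive sweep over all combinations of parameter levels. The GA itself is replaced here by a deterministic surrogate so the sketch is reproducible; the parameter names and levels are illustrative, not the paper's:

```python
import itertools

# Candidate levels for three hypothetical GA parameters (2 levels each -> 2^3 runs).
levels = {
    "pop_size":       [50, 100],
    "crossover_rate": [0.6, 0.9],
    "mutation_rate":  [0.01, 0.1],
}

def run_ga(pop_size, crossover_rate, mutation_rate):
    """Stand-in for a real GA run: returns total weighted tardiness
    (lower is better). A deterministic surrogate keeps the sketch reproducible."""
    return 1000.0 / pop_size + 5.0 * abs(crossover_rate - 0.9) + 50.0 * mutation_rate

names = list(levels)
best_setting, best_score = None, float("inf")
for combo in itertools.product(*levels.values()):   # full factorial sweep
    score = run_ga(**dict(zip(names, combo)))
    if score < best_score:
        best_setting, best_score = dict(zip(names, combo)), score
```

A real DOE study would fit a regression model to these runs rather than simply picking the best cell, which is what lets it quantify parameter interactions that OFAT tuning misses.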
Multiobjective hyper heuristic scheme for system design and optimization
Rafique, Amer Farhan
2012-11-01
As system design becomes more and more multifaceted, integrated, and complex, traditional single-objective optimization approaches to optimal design are becoming less and less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the possibility of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching the global optimum. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
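A hyper-heuristic's stochastic management of low-level heuristics can be sketched as roulette-wheel selection biased by performance scores. This is a generic illustration, not the paper's actual selection function, and the scores are hypothetical:

```python
import random
from collections import Counter

def select_heuristic(scores, rng):
    """Roulette-wheel choice among low-level heuristics: better-performing
    heuristics are picked more often, but every heuristic keeps a chance."""
    names = list(scores)
    weights = [scores[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Hypothetical performance scores for three low-level meta-heuristics.
scores = {"genetic_algorithm": 5.0, "simulated_annealing": 3.0, "swarm": 2.0}
rng = random.Random(42)
picks = Counter(select_heuristic(scores, rng) for _ in range(10_000))
```

Keeping a nonzero selection probability for weaker heuristics is what preserves heuristic space diversity, the theme of the Grobler and Engelbrecht record above.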
Prediction-based dynamic load-sharing heuristics
Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.
1993-01-01
The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
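The difference between reactive and prediction-based placement can be sketched in a few lines. The node names and load figures are hypothetical; the point is that predicted loads, not the current snapshot, drive the assignment:

```python
def place_process(current_loads, predicted_loads):
    """Prediction-based placement: choose the node whose *predicted* load is
    lowest; a purely reactive heuristic would consult the current snapshot."""
    return min(predicted_loads, key=predicted_loads.get)

current   = {"a": 0.2, "b": 0.5, "c": 0.4}  # instantaneous CPU loads
predicted = {"a": 0.8, "b": 0.3, "c": 0.4}  # from resource-usage patterns

reactive_choice   = min(current, key=current.get)       # picks "a"
predictive_choice = place_process(current, predicted)   # picks "b"
```

Here node "a" looks idle right now but its usage pattern predicts a spike, so the predictive heuristic routes the process to "b" instead; this is the kind of decision the paper credits for its response-time improvements.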
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
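Expected-value computation on a decision tree, and the kind of probability perturbation the framework above analyzes, can be sketched as follows (the tree and payoffs are illustrative):

```python
def expected_value(node):
    """Expected value of a decision-tree node: decision nodes take the best
    child, chance nodes weight their children by probability."""
    if node["kind"] == "leaf":
        return node["value"]
    if node["kind"] == "chance":
        return sum(p * expected_value(c) for p, c in node["children"])
    return max(expected_value(c) for c in node["children"])  # decision node

def risky(p):
    # A gamble: win 100 with probability p, lose 50 otherwise.
    return {"kind": "chance",
            "children": [(p, {"kind": "leaf", "value": 100}),
                         (1 - p, {"kind": "leaf", "value": -50})]}

safe = {"kind": "leaf", "value": 20}

def best_ev(p):
    return expected_value({"kind": "decision", "children": [risky(p), safe]})

# Perturbing p sweeps out the strategy's stability region: the optimal
# choice flips from "safe" to "risky" at p = 7/15.
```

Sensitivity analysis in the paper's sense asks how far p can drift before `best_ev` switches strategies, i.e., whether the expected-value-maximizing strategy is robust to distributional uncertainty.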
Sensitivity Analysis of OECD Benchmark Tests in BISON
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
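The Pearson and Spearman correlation coefficients used in such sampling-based studies can be computed with plain NumPy; Spearman is simply Pearson applied to ranks. The toy input/response model below is illustrative, not BISON data:

```python
import numpy as np

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    # Spearman is Pearson on ranks; argsort of argsort yields the ranks
    # (valid here because continuous samples have no ties).
    ranks = lambda v: np.argsort(np.argsort(v))
    return pearson(ranks(x), ranks(y))

rng = np.random.default_rng(7)
n = 300
power = rng.uniform(30.0, 40.0, n)   # an influential input (illustrative)
noise = rng.normal(0.0, 5.0, n)      # an input the response ignores
temp = 900.0 + 1.2 * power**1.5 + rng.normal(0.0, 1.0, n)

# Rank the inputs by correlation magnitude with the response.
sens = {"power": spearman(power, temp), "noise": spearman(noise, temp)}
```

A rank correlation near 1 flags `power` as a driving input, while the near-zero value for `noise` suggests it can be dropped from further analysis; tools like Dakota report exactly these statistics over the sampled responses.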
Gene selection heuristic algorithm for nutrigenomics studies.
Valour, D; Hue, I; Grimard, B; Valour, B
2013-07-15
Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines good conditioning with fast computation in R. Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that summarizes the self-avoiding walk covered by the algorithm. Then, a homogeneous Markov process applied to this stochastic matrix converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process meets the needs of biologists, who often require small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
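The convergence step above, a homogeneous Markov process applied to a stochastic matrix, amounts to iterating the distribution until it reaches the stationary one. A minimal sketch with an illustrative two-state matrix:

```python
import numpy as np

def stationary(P, tol=1e-12, max_iter=10_000):
    """Iterate a homogeneous Markov chain until its distribution converges.
    P is row-stochastic: P[i, j] = probability of moving from state i to j."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# Two interconnection states; the exact stationary distribution is (1/3, 2/3).
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])
pi = stationary(P)
```

In the LEM method the matrix is the product of PageRank matrices over genes rather than a 2x2 toy, but the fixed point of the same iteration is what delivers the interconnection probabilities.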
Quantifying Heuristic Bias: Anchoring, Availability, and Representativeness.
Richie, Megan; Josephson, S Andrew
2018-01-01
Construct: Authors examined whether a new vignette-based instrument could isolate and quantify heuristic bias. Heuristics are cognitive shortcuts that may introduce bias and contribute to error. There is no standardized instrument available to quantify heuristic bias in clinical decision making, limiting future study of educational interventions designed to improve calibration of medical decisions. This study presents validity data to support a vignette-based instrument quantifying bias due to the anchoring, availability, and representativeness heuristics. Participants completed questionnaires requiring assignment of probabilities to potential outcomes of medical and nonmedical scenarios. The instrument randomly presented scenarios in one of two versions: Version A, encouraging heuristic bias, and Version B, worded neutrally. The primary outcome was the difference in probability judgments for Version A versus Version B scenario options. Of 167 participants recruited, 139 enrolled. Participants assigned significantly higher mean probability values to Version A scenario options (M = 9.56, SD = 3.75) than Version B (M = 8.98, SD = 3.76), t(1801) = 3.27, p = .001. This result remained significant analyzing medical scenarios alone (Version A, M = 9.41, SD = 3.92; Version B, M = 8.86, SD = 4.09), t(1204) = 2.36, p = .02. Analyzing medical scenarios by heuristic revealed a significant difference between Version A and B for availability (Version A, M = 6.52, SD = 3.32; Version B, M = 5.52, SD = 3.05), t(404) = 3.04, p = .003, and representativeness (Version A, M = 11.45, SD = 3.12; Version B, M = 10.67, SD = 3.71), t(396) = 2.28, p = .02, but not anchoring. Stratifying by training level, students maintained a significant difference between Version A and B medical scenarios (Version A, M = 9.83, SD = 3.75; Version B, M = 9.00, SD = 3.98), t(465) = 2.29, p = .02, but not residents or attendings. Stratifying by heuristic and training level, availability maintained
Sensitivities of an intense Mediterranean cyclone: Analysis and validation
Homar, Victor; Stensrud, David J.
2004-10-01
On 10 and 11 November 2001 a deep cyclone moved northward across the western Mediterranean. Severe floods affected Algeria on 10 November and a mesoscale-sized region of strong damaging winds occurred over the Balearics and eastern Spain during the first hours of 11 November. These large intense cyclones, originating over north Africa and moving northward, are occasionally observed in the region. Numerical simulations of these types of events are potentially hampered by the lack of observations over the Mediterranean Sea, north Africa and the Atlantic Ocean. To evaluate more accurately the regions in which the model simulations are influenced by this lack of data, the MM5 adjoint system is used to determine the most sensitive areas within the initial conditions of the simulation of this 10-11 November event. Limitations of available adjoint models, such as their linear character, suggest that a test of the applicability of MM5 to the case under analysis is needed. In this study, the evaluation is performed by means of the tangent linear model and, despite finding that the adjoint has an acceptable accuracy, important nonlinear effects are found and attributed to the moist processes. The study tracks backward in time the sensitivities shown at different simulation times using parameters chosen to characterize the cyclone's intensity at 0000 UTC 11 November. Results reveal that the areas that show the largest sensitivities are located over north Africa for the 12 h and 24 h simulations, whereas south-western and western Europe emerge as areas with important sensitivities for the longer 36 h and 48 h simulations. Subsynoptic details regarding the shape and intensity of an upper-level trough, as well as a low-level cold front, are highlighted by the adjoint runs as the structures which influence most strongly the baroclinic development of the intense Mediterranean cyclone and the damaging surface winds it produces. The usefulness of the sensitivity fields in the
Sensitivity Analysis of a Riparian Vegetation Growth Model
Directory of Open Access Journals (Sweden)
Michael Nones
2016-11-01
Full Text Available The paper presents a sensitivity analysis of the two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution depend on the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After a thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
Orbit uncertainty propagation and sensitivity analysis with separated representations
Balducci, Marc; Jones, Brandon; Doostan, Alireza
2017-09-01
Most approximations for stochastic differential equations with high-dimensional, non-Gaussian inputs suffer from a rapid (e.g., exponential) increase of computational cost, an issue known as the curse of dimensionality. In astrodynamics, this results in reduced accuracy when propagating an orbit-state probability density function. This paper considers the application of separated representations for orbit uncertainty propagation, where future states are expanded into a sum of products of univariate functions of initial states and other uncertain parameters. An accurate generation of separated representation requires a number of state samples that is linear in the dimension of input uncertainties. The computation cost of a separated representation scales linearly with respect to the sample count, thereby improving tractability when compared to methods that suffer from the curse of dimensionality. In addition to detailed discussions on their construction and use in sensitivity analysis, this paper presents results for three test cases of an Earth orbiting satellite. The first two cases demonstrate that approximation via separated representations produces a tractable solution for propagating the Cartesian orbit-state uncertainty with up to 20 uncertain inputs. The third case, which instead uses Equinoctial elements, reexamines a scenario presented in the literature and employs the proposed method for sensitivity analysis to more thoroughly characterize the relative effects of uncertain inputs on the propagated state.
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
A Workflow for Global Sensitivity Analysis of PBPK Models
Directory of Open Access Journals (Sweden)
Kevin eMcNally
2011-06-01
Full Text Available Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structure of a PBPK model is an ideal framework into which disparate in vitro and in vivo data can be integrated and utilised to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we define a workflow for sensitivity analysis of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (a Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors and regulators.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.
Global sensitivity analysis of the Indian monsoon during the Pleistocene
Directory of Open Access Journals (Sweden)
P. A. Araya-Melo
2015-01-01
The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004), implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature, in phase with June-to-July insolation, with high glaciation favouring a late-phase response; sea-surface temperature, in phase with May insolation; continental precipitation, in phase with July insolation; and mixed-layer depth, in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on
Intelligent System Design Using Hyper-Heuristics
Directory of Open Access Journals (Sweden)
Nelishia Pillay
2015-07-01
Determining the most appropriate search method or artificial intelligence technique to solve a problem is not always evident and usually requires implementation of the different approaches to ascertain this. In some instances a single approach may not be sufficient, and hybridization of methods may be needed to find a solution. This process can be time consuming. The paper proposes the use of hyper-heuristics as a means of identifying which method or combination of approaches is needed to solve a problem. The research presented forms part of a larger initiative aimed at using hyper-heuristics to develop intelligent hybrid systems. As an initial step in this direction, this paper investigates this for classical artificial intelligence uninformed and informed search methods, namely depth first search, breadth first search, best first search, hill-climbing and the A* algorithm. The hyper-heuristic determines the search or combination of searches to use to solve the problem. An evolutionary algorithm hyper-heuristic is implemented for this purpose and its performance is evaluated in solving the 8-Puzzle, Towers of Hanoi and Blocks World problems. The hyper-heuristic employs a generational evolutionary algorithm which iteratively refines an initial population using tournament selection to select parents, to which the mutation and crossover operators are applied for regeneration. The hyper-heuristic was able to identify a search or combination of searches to produce solutions for the twenty 8-Puzzle, five Towers of Hanoi and five Blocks World problems. Furthermore, admissible solutions were produced for all problem instances.
GPU-based Integration with Application in Sensitivity Analysis
Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar
2010-05-01
The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), where the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is Monte Carlo based, first proposed by Sobol, and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but also for parallel implementations. Good scrambling is
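The Sobol/Saltelli class of methods the abstract refers to estimates first-order sensitivity indices from two independent sample matrices. A minimal sketch in plain NumPy, using a toy linear model rather than the DEM (the estimator form is the standard Saltelli two-matrix scheme; the model and sample sizes are illustrative assumptions):

```python
import numpy as np

def first_order_sobol(f, k, n=20000, seed=1):
    """Saltelli-style estimator of first-order Sobol sensitivity indices.

    f : model, maps an (n, k) array of inputs in [0, 1]^k to n outputs
    k : number of input factors
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, k))
    B = rng.random((n, k))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]          # matrix A with column i taken from B
        S[i] = np.mean(fB * (f(AB) - fA)) / var_y
    return S
```

For the toy model f(x) = x1 + 2·x2 with independent uniform inputs, the analytic indices are S1 = 0.2 and S2 = 0.8, which the estimator recovers to within Monte Carlo error.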
Robust and sensitive analysis of mouse knockout phenotypes.
Directory of Open Access Journals (Sweden)
Natasha A Karp
A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high throughput phenotyping programs. In addition, in a high throughput environment operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should not be used. We explore the use of mixed models using worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-ray Absorptiometry data for mouse knockouts, and compare to a reference range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is a common phenotype; adjusting for it enhances the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches, which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
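The batch-confounding problem the authors highlight can be illustrated with a deliberately simplified simulation. This is plain NumPy, not the mixed-model analysis of the paper: when knockouts are measured alongside only one batch of controls, naive pooling of all controls inherits the batch offsets, while contrasting against the concurrent batch does not (all values below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)

# Three batches of controls with different day-to-day offsets; the knockout
# animals are all measured alongside batch 2 (a common operational reality).
batch_offsets = {0: 0.0, 1: 0.5, 2: 1.0}
controls = {b: 10.0 + off + rng.normal(0, 0.1, 20)
            for b, off in batch_offsets.items()}
knockouts = 10.0 + batch_offsets[2] + rng.normal(0, 0.1, 10)  # true effect: 0

# Naive pooling: compare knockouts against all controls at once.
pooled = np.concatenate(list(controls.values()))
naive_diff = knockouts.mean() - pooled.mean()          # spuriously large

# Batch-aware comparison: contrast knockouts only with their own batch.
adjusted_diff = knockouts.mean() - controls[2].mean()  # near zero
```

A mixed model generalizes this idea by treating batch as a random effect, so information is still shared across batches instead of being discarded.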
Age Effects and Heuristics in Decision Making*
Besedeš, Tibor; Deck, Cary; Sarangi, Sudipta; Shor, Mikhael
2011-01-01
Using controlled experiments, we examine how individuals make choices when faced with multiple options. Choice tasks are designed to mimic the selection of health insurance, prescription drug, or retirement savings plans. In our experiment, available options can be objectively ranked allowing us to examine optimal decision making. First, the probability of a person selecting the optimal option declines as the number of options increases, with the decline being more pronounced for older subjects. Second, heuristics differ by age with older subjects relying more on suboptimal decision rules. In a heuristics validation experiment, older subjects make worse decisions than younger subjects. PMID:22544977
Age Effects and Heuristics in Decision Making.
Besedeš, Tibor; Deck, Cary; Sarangi, Sudipta; Shor, Mikhael
2012-05-01
Using controlled experiments, we examine how individuals make choices when faced with multiple options. Choice tasks are designed to mimic the selection of health insurance, prescription drug, or retirement savings plans. In our experiment, available options can be objectively ranked allowing us to examine optimal decision making. First, the probability of a person selecting the optimal option declines as the number of options increases, with the decline being more pronounced for older subjects. Second, heuristics differ by age with older subjects relying more on suboptimal decision rules. In a heuristics validation experiment, older subjects make worse decisions than younger subjects.
Heuristics for container loading of furniture
DEFF Research Database (Denmark)
Egeblad, Jens; Garavelli, Claudio; Lisi, Stefano
2010-01-01
In the studied company, the problem arises hundreds of times daily during transport planning. Instances may contain more than one hundred different items with irregular shapes. To solve this complex problem we apply a set of heuristics successively, each solving one part of the problem. Large items are combined in specific structures to ensure proper protection of the items during transportation and to simplify the problem. The solutions generated by the heuristic have an average loading utilization of 91.3% for the most general instances, with average running times around 100 seconds.
Heuristic Drift Elimination for Personnel Tracking Systems
Borenstein, Johann; Ojeda, Lauro
This paper pertains to the reduction of the effects of measurement errors in rate gyros used for tracking, recording, or monitoring the position of persons walking indoors. In such applications, bias drift and other gyro errors can degrade accuracy within minutes. To overcome this problem we developed the Heuristic Drift Elimination (HDE) method, which effectively corrects bias drift and other slow-changing errors. HDE works by making assumptions about walking in structured, indoor environments. The paper explains the heuristic assumptions and the HDE method, and shows experimental results. In typical applications, HDE maintains near-zero heading errors in walks of unlimited duration.
From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health
Kraus, Gerd
2016-01-01
Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach to sustainability assessments is developed to avoid such ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows accounting for the multi-dimensional nature of sustainability (social, economic, ecological) in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR
The Analysis of University Students' Interpersonal Sensitivity and Rejection Sensitivity
Atılgan ERÖZKAN
2004-01-01
The aim of this study is to examine the relationships between university students' interpersonal sensitivity, defined as undue and excessive awareness of and sensitivity to the behavior and feelings of others, and their rejection sensitivity. Gender, age, SES and grade differences were also examined in this context. For this purpose 340 students (170 females; 170 males) were randomly recruited from various departments of KTU Fatih Faculty of Education. Main instruments are Information Gathe...
A Sensitivity Analysis Approach to Identify Key Environmental Performance Factors
Directory of Open Access Journals (Sweden)
Xi Yu
2014-01-01
Life cycle assessment (LCA) has been widely used in the design phase over the last two decades to reduce a product's environmental impacts through the whole product life cycle (PLC). Traditional LCA is restricted to assessing the environmental impacts of a product, and its results cannot reflect the effects of changes within the life cycle. In order to improve the quality of ecodesign, there is a growing need for an approach that can relate changes in design parameters to a product's environmental impacts. A sensitivity analysis approach based on LCA and ecodesign is proposed in this paper. The key environmental performance factors which have significant influence on the product's environmental impacts can be identified by analyzing the relationship between environmental impacts and the design parameters. Users without much environmental knowledge can use this approach to determine which design parameter should be considered first when (re)designing a product. A printed circuit board (PCB) case study is conducted; eight design parameters are chosen to be analyzed by our approach. The result shows that the carbon dioxide emission during PCB manufacture is highly sensitive to the area of the PCB panel.
Sensitivity Analysis in a Complex Marine Ecological Model
Directory of Open Access Journals (Sweden)
Marcos D. Mateus
2015-05-01
Sensitivity analysis (SA) has long been recognized as part of best practices to assess whether any particular model can be suitable to inform decisions, despite its uncertainties. SA is a commonly used approach for identifying the important parameters that dominate model behavior. As such, SA addresses two elementary questions in the modeling exercise: how sensitive is the model to changes in individual parameter values, and which parameters or associated processes have more influence on the results. In this paper we report on a local SA performed on a complex marine biogeochemical model that simulates oxygen, organic matter and nutrient cycles (N, P and Si) in the water column, as well as the dynamics of biological groups such as producers, consumers and decomposers. SA was performed using a "one at a time" parameter perturbation method, and a color-code matrix was developed for result visualization. The outcome of this study was the identification of the key parameters influencing model performance, a particularly helpful insight for the subsequent calibration exercise. The color-code matrix methodology also proved effective for clearly identifying the parameters with most impact on selected variables of the model.
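The "one at a time" perturbation method the abstract describes has a very small core: perturb each parameter in turn, re-run the model, and record the normalized change of every output; the resulting matrix is what gets color-coded. A generic sketch (the model, parameter names and 10% perturbation size are illustrative assumptions, not values from the paper):

```python
import numpy as np

def oat_sensitivity(model, params, delta=0.1):
    """One-at-a-time sensitivity: perturb each parameter by +delta (relative)
    and record the relative change in each model output.

    model  : maps a dict of parameters to a 1-D array of output variables
    params : dict, parameter name -> nominal value
    Returns dict, parameter name -> array of relative output deviations
    (the rows of a color-code matrix).
    """
    base = np.asarray(model(params), dtype=float)
    matrix = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        out = np.asarray(model(perturbed), dtype=float)
        matrix[name] = (out - base) / base   # relative deviation per output
    return matrix
```

Each row can then be binned into color classes (e.g. |deviation| below 1%, 1-10%, above 10%) to reproduce the matrix-style visualization.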
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Energy Technology Data Exchange (ETDEWEB)
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
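The NRMSE metric used above is simple to compute; conventions differ on the normalizer (plant capacity, observed mean, or observed range), and the sketch below assumes normalization by plant capacity, which is common for power-output errors:

```python
import numpy as np

def nrmse(forecast, observed, capacity):
    """Root mean squared error of the forecast, normalized by plant capacity
    (one common convention; normalization by the observed mean or the
    observed range is also used in the literature)."""
    err = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)) / capacity)
```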
Sensitivity analysis and other improvements to tailored combinatorial library design
Martin; Wong
2000-03-01
"Tailoring" combinatorial libraries was developed several years ago as a very general and intuitive method to design diverse compound collections while controlling the profile of other pharmaceutically relevant properties. The candidate substituents were assigned to "categorical bins" according to their properties, and successive steps of D-optimal design were performed to generate diverse substituent sets consistent with required membership quotas from each bin. This serial algorithm was expedient to implement from existing D-optimal design codes, but was order-dependent and did not generally locate the very best possible design. A new "parallel" Fedorov search algorithm has now been implemented that can find the most diverse property-tailored design. An ambiguous mass penalty has been added, whereby most duplicate masses can be eliminated with little loss of library diversity. Sensitivity analysis has also been added to quantitatively explore the diversity trade-offs due to increasing or decreasing each specific kind of bias.
Sensitivity analysis of an information fusion tool: OWA operator
Zarghaami, Mahdi; Ardakanian, Reza; Szidarovszky, Ferenc
2007-04-01
The successful design and application of the Ordered Weighted Averaging (OWA) method as a decision making tool depend on the efficient computation of its order weights. The most popular methods for determining the order weights are the Fuzzy Linguistic Quantifiers approach and the Minimal Variability method which give different behavior patterns for OWA. These methods will be compared by using Sensitivity Analysis on the outputs of OWA with respect to the optimism degree of the decision maker. The theoretical results are illustrated in a water resources management problem. The Fuzzy Linguistic Quantifiers approach gives more information about the behavior of the OWA outputs in comparison to the Minimal Variability method. However, in using the Minimal Variability method, the OWA has a linear behavior with respect to the optimism degree and therefore it has better computation efficiency.
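The OWA aggregation discussed above sorts the criteria values and applies order weights; under the Fuzzy Linguistic Quantifiers approach the weights come from a quantifier function Q. The sketch below assumes the common one-parameter family Q(r) = r^alpha, where alpha encodes the decision maker's optimism degree (an illustrative choice, not necessarily the quantifier used in the paper):

```python
import numpy as np

def quantifier_weights(n, alpha):
    """Order weights from the linguistic quantifier Q(r) = r**alpha:
    w_i = Q(i/n) - Q((i-1)/n).  alpha < 1 weights the largest values more
    (optimistic); alpha > 1 weights the smallest values more (pessimistic)."""
    r = np.arange(n + 1) / n
    return np.diff(r ** alpha)

def owa(values, weights):
    """Ordered Weighted Averaging: sort the criteria values in descending
    order, then take the weighted sum with the order weights."""
    return float(np.sort(values)[::-1] @ weights)
```

With alpha = 1 the weights are uniform and OWA reduces to the arithmetic mean; smaller alpha shifts the aggregate toward the best criteria values.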
A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity
Tierney, G.; Posselt, D. J.; Booth, J. F.
2015-12-01
The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies the atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in the control parameter space is performed via an ensemble of WRF runs coupled with
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Energy Technology Data Exchange (ETDEWEB)
Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pohl, Andrew Phillip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jordan, Dirk [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
Automated generation of constructive ordering heuristics for educational timetabling
Pillay, Nelishia; Özcan, Ender
2017-01-01
Construction heuristics play an important role in solving combinatorial optimization problems. These heuristics are usually used to create an initial solution to the problem, which is improved using optimization techniques such as metaheuristics. For examination timetabling and university course timetabling problems, essentially graph colouring heuristics have been used for this purpose. The process of manually deriving heuristics for educational timetabling is a time-consuming task. Furthermor...
Sensitivity analysis on various parameters for lattice analysis of DUPIC fuel with WIMS-AECL code
Energy Technology Data Exchange (ETDEWEB)
Roh, Gyu Hong; Choi, Hang Bok; Park, Jee Won [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1997-12-31
The WIMS-AECL code has been used for the lattice analysis of DUPIC fuel. The lattice parameters calculated by the code are sensitive to a number of input parameters, such as the number of tracking lines, the number of condensed groups, the mesh spacing in the moderator region, and other parameters vital to the calculation of probabilities and burnup analysis. We have studied the sensitivity with respect to these parameters and recommend proper values for carrying out the lattice analysis of DUPIC fuel.
A rescheduling heuristic for the single machine total tardiness problem
African Journals Online (AJOL)
In this paper, we propose a rescheduling heuristic for scheduling N jobs on a single machine in order to minimise total tardiness. The heuristic is of the interchange type and constructs a schedule from the modified due date (MDD) schedule. Unlike most interchange heuristics that consider interchanges involving only two ...
Heuristic Diagrams as a Tool to Teach History of Science
Chamizo, Jose A.
2012-01-01
The graphic organizer called here heuristic diagram as an improvement of Gowin's Vee heuristic is proposed as a tool to teach history of science. Heuristic diagrams have the purpose of helping students (or teachers, or researchers) to understand their own research considering that asks and problem-solving are central to scientific activity. The…
Heuristics Made Easy: An Effort-Reduction Framework
Shah, Anuj K.; Oppenheimer, Daniel M.
2008-01-01
In this article, the authors propose a new framework for understanding and studying heuristics. The authors posit that heuristics primarily serve the purpose of reducing the effort associated with a task. As such, the authors propose that heuristics can be classified according to a small set of effort-reduction principles. The authors use this…
Usage of Major Heuristics in Property Investment Valuation in Nigeria
African Journals Online (AJOL)
effect of heuristics, but concluded that experience and feedback should mitigate much bias. Tversky and Kahneman (1974) identified three types of heuristics: representativeness; availability; and anchoring and adjustment. Evans (1989) later added a fourth: positivity (other lesser heuristics have subsequently been identified).
Heuristics for multiobjective multiple sequence alignment.
Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B
2016-07-15
Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by current software packages are highly dependent on the parameter setting, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias in further steps of the analysis and give too simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she could analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of correctly aligned pairs ratio and columns correctly aligned ratio with respect to reference alignments. Experimental results show
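The hypervolume indicator used above to score a set of trade-off alignments has a simple closed form in the bi-objective case: for a mutually nondominated set under minimization, sort the points by one objective and sum the rectangles they dominate up to a reference point. A minimal sketch (objective scaling and the reference point are assumptions of the illustration):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (larger is better) of a set of bi-objective points under
    minimization, relative to a reference point every point dominates.

    points : list of (f1, f2) score vectors, assumed mutually nondominated
    ref    : (r1, r2), worse than all points in both objectives
    """
    # sort by the first objective; the second then decreases along the front
    pts = sorted(points)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle owned by this point
        prev_f2 = f2
    return hv
```

Comparing the hypervolume of the score-vector sets returned by two algorithms then gives a single scalar measure of which approximation front is better.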
Routing Post-Disaster Traffic Floods Heuristics
Nasralla, ZHA; Musa, MOI; El-Gorashi, TEH; Elmirghani, JMH
2016-01-01
In this paper, we present three heuristics for mitigating post-disaster traffic floods: the first exploits excess capacity, the second reroutes backup paths, and the third redistributes the whole traffic by rerouting both working and protection paths to accommodate more floods. Using these mitigation approaches can reduce blocking by up to 30%.
A Bioinspired Heuristic for the 8-Piece Puzzle
Machado, M. O.; Fabres, P. A.; Melo, J. C. L.
2017-10-01
This paper investigates a mathematical model inspired by nature and presents a metaheuristic that improves the performance of an informed search using the A* strategy with a general search tree as the data structure. The working hypothesis is that the investigated metaheuristic is optimal in nature and may be promising in minimizing the computational resources required by a goal-based agent in solving problems of high computational complexity (the n-piece puzzle), as well as in the optimization of objective functions for local search agents. The objective of this work is to describe qualitatively the characteristics and properties of the mathematical model investigated, correlating the main concepts of the A* function with the significant variables of the metaheuristic used. The article shows that the amount of memory required to perform this search when using the metaheuristic is less than when using the A* function to evaluate the nodes of a general search tree for the eight-piece puzzle. It is concluded that the metaheuristic must be parameterized according to the chosen heuristic and the level of the tree that contains the possible solutions to the chosen problem.
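As a point of reference for the comparison made in this abstract, a minimal A* for the 8-piece (sliding-tile) puzzle with a Manhattan-distance heuristic can be sketched as follows. This is the standard baseline informed search, not the authors' bioinspired metaheuristic:

```python
# A* on the 8-piece puzzle with a Manhattan-distance heuristic.
# States are 9-tuples in row-major order; 0 is the blank.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def manhattan(state):
    # sum of row+column distances of each tile from its goal cell
    return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
               for i, t in enumerate(state) if t != 0)

def neighbors(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]  # slide the adjacent tile into the blank
            yield tuple(s)

def astar(start):
    """Return the number of moves in a shortest solution."""
    open_heap = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if state == GOAL:
            return g
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # -> 2
```

The memory footprint of A* here is the size of `open_heap` plus `best_g`, which is the quantity the paper claims its metaheuristic reduces.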
Heuristic estimates in shortest path algorithms
W.H.L.M. Pijls (Wim)
2007-01-01
Shortest path problems occupy an important position in operations research as well as in artificial intelligence. In this paper we study shortest path algorithms that exploit heuristic estimates. The well-known algorithms are put into one framework. Besides, we present an interesting
Heuristic estimates in shortest path algorithms
W.H.L.M. Pijls (Wim)
2006-01-01
Shortest path problems occupy an important position in Operations Research as well as in Artificial Intelligence. In this paper we study shortest path algorithms that exploit heuristic estimates. The well-known algorithms are put into one framework. Besides, we present an interesting
The Heuristic Interpretation of Box Plots
Lem, Stephanie; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim
2013-01-01
Box plots are frequently used, but are often misinterpreted by students. In particular, the area of the box is often misinterpreted as representing the number or proportion of observations, while it actually represents their density. In a first study, reaction time evidence was used to test whether heuristic reasoning underlies this…
Heuristics for speeding up gaze estimation
DEFF Research Database (Denmark)
Leimberg, Denis; Vester-Christensen, Martin; Ersbøll, Bjarne Kjær
2005-01-01
A deformable template method for eye tracking on full-face images is presented. The strengths of the method are that it is fast and retains accuracy independently of the resolution. We compare the method with a state-of-the-art active contour approach, showing that the heuristic method is more...
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Heuristic framework for parallel sorting computations | Nwanze ...
African Journals Online (AJOL)
The decreasing cost of these processors will probably make the solutions derived from them more appealing in the future. Efficient algorithms for sorting schemes encountered in a number of operations are considered for multi-user machines. A heuristic framework for exploiting parallelism inherent in ...
Bayesian networks: a combined tuning heuristic
Bolt, J.H.
2016-01-01
One of the issues in tuning an output probability of a Bayesian network by changing multiple parameters is the relative amount of the individual parameter changes. In an existing heuristic, parameters are tied such that their changes locally induce a maximal change of the tuned probability. This
Heuristics for Knowledge Acquisition from Maps.
Thorndyke, Perry W.
This paper investigates how people acquire knowledge from maps. Emphasis is placed on heuristics--defined as the procedures that people use to select, combine, and encode map information in memory. The objective is to develop a theory of expertise in map learning by analyzing differences between fast and slow learners in terms of differences in…
A NEW HEURISTIC ALGORITHM FOR MULTIPLE TRAVELING SALESMAN PROBLEM
Directory of Open Access Journals (Sweden)
F. NURIYEVA
2017-06-01
The Multiple Traveling Salesman Problem (mTSP) is a combinatorial optimization problem in the NP-hard class. The mTSP aims to find the minimum cost of traveling a given set of cities by assigning each of them to a different salesman in order to create m tours. This paper presents a new heuristic algorithm based on the shortest path algorithm to find a solution for the mTSP. The proposed method has been programmed in the C language and its performance analysis has been carried out on library instances. The computational results show the efficiency of this method.
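The abstract does not detail the shortest-path-based construction, so the sketch below is only a hypothetical greedy baseline for the mTSP setting it describes: m salesmen start at a common depot (city 0) and every other city is served exactly once.

```python
# Greedy baseline for the mTSP (not the paper's algorithm): repeatedly
# extend whichever tour can reach an unvisited city most cheaply.
def greedy_mtsp(dist, m):
    """dist: square cost matrix; returns m tours, each starting and
    ending at depot city 0."""
    n = len(dist)
    unvisited = set(range(1, n))
    tours = [[0] for _ in range(m)]
    while unvisited:
        # pick the (tour, city) pair with minimum extension cost
        k, city = min(((k, c) for k in range(m) for c in unvisited),
                      key=lambda kc: dist[tours[kc[0]][-1]][kc[1]])
        tours[k].append(city)
        unvisited.remove(city)
    for t in tours:
        t.append(0)  # return to the depot
    return tours

# tiny symmetric example: cities 1 and 2 near the depot, city 3 far
dist = [[0, 1, 5, 5],
        [1, 0, 5, 5],
        [5, 5, 0, 1],
        [5, 5, 1, 0]]
tours = greedy_mtsp(dist, 2)
```

A real mTSP heuristic would additionally balance tour lengths and improve tours locally; this sketch only shows the feasibility structure (every city in exactly one tour).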
Global sensitivity analysis of analytical vibroacoustic transmission models
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2016-04-01
Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations. The FAST method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends are found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis
Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio
2006-01-01
Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through an extensive literature review. The reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°C/75°C, -55°C/100°C, and -55°C/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.
Relative performance of academic departments using DEA with sensitivity analysis.
Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P
2009-05-01
The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, the UK and Australia, but to the best of our knowledge this is the first time it has been applied in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
Martynov, Denis
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data was collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in
Energy Technology Data Exchange (ETDEWEB)
Seo, Kyu Woo [Dongeui University, Pusan (Korea); Cho, Won Cheol [Yonsei University, Seoul (Korea)
1998-06-30
In this study, new dimensionless values were defined and proposed to determine the parameters of urban runoff models based on relative sensitivity analysis, and the sensitivity characteristics of each parameter were investigated. In order to analyze the parameter sensitivities of each model, the total runoff ratio, peak runoff ratio, runoff sensitivity ratio, sensitivity ratio of total runoff, and sensitivity ratio of peak runoff were defined: total runoff ratio Q_TR = total runoff of the corresponding step / maximum total runoff; peak runoff ratio Q_PR = peak runoff of the corresponding step / maximum peak runoff; runoff sensitivity ratio Q_SR = Q_TR / Q_PR. For the estimation of sensitivity ratios based on basin area, rainfall distributions and rainfall durations in ILLUDAS and SWMM, reasonable ranges of the parameters were proposed. (author). 21 refs., 2 tabs., 2 figs.
LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE
Bittker, D. A.
1994-01-01
LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
Learning process mapping heuristics under stochastic sampling overheads
Ieumwananonthachai, Arthur; Wah, Benjamin W.
1991-01-01
A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.
Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.
Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier
2012-12-01
The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a one-at-a-time (OAT) screening method known as the Morris method, applied over each of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
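The Morris one-at-a-time screening idea used in this study can be sketched as follows. The toy model and unit parameter ranges are placeholders, not the InVEST water provisioning module: along each random trajectory, every factor is perturbed once and an elementary effect is recorded, and the mean absolute elementary effect ranks factor influence.

```python
# Sketch of Morris elementary-effects screening (one-at-a-time steps
# along random trajectories). Model and ranges are illustrative.
import random

def morris_mean_abs_ee(model, n_factors, n_traj=20, delta=0.1, seed=1):
    rng = random.Random(seed)
    sums = [0.0] * n_factors
    for _ in range(n_traj):
        # random base point, leaving room for a +delta step in each factor
        x = [rng.uniform(0, 1 - delta) for _ in range(n_factors)]
        y0 = model(x)
        order = list(range(n_factors))
        rng.shuffle(order)
        for i in order:
            x[i] += delta               # one-at-a-time step
            y1 = model(x)
            sums[i] += abs(y1 - y0) / delta
            y0 = y1                     # trajectory continues from the new point
    return [s / n_traj for s in sums]

# toy model: y = 5*x0 + 0.1*x1, so factor 0 should dominate the ranking
mu = morris_mean_abs_ee(lambda x: 5 * x[0] + 0.1 * x[1], 2)
print(mu)
```

For this linear toy model the elementary effects are exactly the coefficients, so `mu` is approximately `[5.0, 0.1]`; for a nonlinear model the spread of the effects would also signal interactions.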
Reconsidering "evidence" for fast-and-frugal heuristics.
Hilbig, Benjamin E
2010-12-01
In several recent reviews, authors have argued for the pervasive use of fast-and-frugal heuristics in human judgment. They have provided an overview of heuristics and have reiterated findings corroborating that such heuristics can be very valid strategies leading to high accuracy. They have also reviewed previous work implying that simple heuristics are actually used by decision makers. Unfortunately, concerning the latter point, these reviews appear to be somewhat incomplete. More important, previous conclusions have been derived from investigations that bear some noteworthy methodological limitations. I demonstrate these by proposing a new heuristic and provide some novel critical findings. Also, I review some of the relevant literature that is often not, or only partially, considered. Overall, although some fast-and-frugal heuristics indeed seem to predict behavior at times, there is little to no evidence for others. More generally, the empirical evidence available does not warrant the conclusion that heuristics are pervasively used.
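For readers unfamiliar with this literature, take-the-best is one of the best-known fast-and-frugal heuristics under debate: compare two options cue by cue in order of cue validity, and decide on the first cue that discriminates. A minimal sketch, with invented cues, is:

```python
# Take-the-best sketch: binary cues, checked in validity order; the
# first discriminating cue decides. Cues and the example are invented.
def take_the_best(cues_a, cues_b, cue_order):
    """cues_*: dict cue -> bool. Returns 'a', 'b', or 'guess'."""
    for cue in cue_order:              # cues ordered by validity
        va, vb = cues_a.get(cue, False), cues_b.get(cue, False)
        if va != vb:                   # first discriminating cue decides
            return 'a' if va else 'b'
    return 'guess'                     # no cue discriminates

# "which city is larger?" decided on the first cue where the options differ
a = {'capital': False, 'has_team': True}
b = {'capital': False, 'has_team': False}
print(take_the_best(a, b, ['capital', 'has_team']))  # -> a
```

The methodological debate in the abstract is precisely about whether such a lexicographic stopping rule, rather than a weighted integration of all cues, describes what decision makers actually do.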
A Sensitivity Analysis of fMRI Balloon Model
Zayane, Chadia
2015-04-22
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
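The Latin hypercube sampling step mentioned in this abstract can be sketched as follows; the variable bounds are placeholders rather than the actual SCTEG parameter ranges. Each variable's range is split into n equal strata, one point is drawn per stratum, and the columns are shuffled before pairing:

```python
# Latin hypercube sampling sketch: stratified per variable, randomly
# paired across variables. Bounds are illustrative placeholders.
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """bounds: list of (low, high) per variable. Returns n_samples
    points, each a tuple with one value per variable."""
    rng = random.Random(seed)
    cols = []
    for low, high in bounds:
        # one point in each of n equal strata, then shuffle the strata
        pts = [low + (high - low) * (i + rng.random()) / n_samples
               for i in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return list(zip(*cols))

samples = latin_hypercube(5, [(0.0, 1.0), (10.0, 20.0)])
```

Because every stratum of every variable is hit exactly once, far fewer model runs are needed than with plain random sampling to cover the parameter space, which is why it suits building a regression surrogate.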
Sensitivity analysis of near-infrared functional lymphatic imaging
Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon
2012-06-01
Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.
Sensitivity Analysis for the CLIC Damping Ring Inductive Adder
Holma, Janne
2012-01-01
The CLIC study is exploring the scheme for an electron-positron collider with high luminosity and a nominal centre-of-mass energy of 3 TeV. The CLIC pre-damping rings and damping rings will produce, through synchrotron radiation, ultra-low emittance beam with high bunch charge, necessary for the luminosity performance of the collider. To limit the beam emittance blow-up due to oscillations, the pulse generators for the damping ring kickers must provide extremely flat, high-voltage pulses. The specifications for the extraction kickers of the CLIC damping rings are particularly demanding: the flattop of the output pulse must be 160 ns duration, 12.5 kV and 250 A, with a combined ripple and droop of not more than ±0.02 %. An inductive adder allows the use of different modulation techniques and is therefore a very promising approach to meeting the specifications. PSpice has been utilised to carry out a sensitivity analysis of the predicted output pulse to the value of both individual and groups of circuit compon...
Surface-Sensitive Microwear Texture Analysis of Attrition and Erosion.
Ranjitkar, S; Turan, A; Mann, C; Gully, G A; Marsman, M; Edwards, S; Kaidonis, J A; Hall, C; Lekkas, D; Wetselaar, P; Brook, A H; Lobbezoo, F; Townsend, G C
2017-03-01
Scale-sensitive fractal analysis of high-resolution 3-dimensional surface reconstructions of wear patterns has advanced our knowledge in evolutionary biology and has opened up opportunities for translatory applications in clinical practice. To elucidate the microwear characteristics of attrition and erosion in worn natural teeth, we scanned 50 extracted human teeth using a confocal profiler at a high optical resolution (X-Y, 0.17 µm; Z attrition. The teeth were divided into 4 groups, comprising 2 wear types (attrition and erosion) and 2 locations (anterior and posterior teeth; n = 12 for each anterior group, n = 13 for each posterior group) for 2 tissue types (enamel and dentine). The raw 3-dimensional data cloud was subjected to a newly developed rigorous standardization technique to reduce interscanner variability as well as to filter anomalous scanning data. Linear mixed effects (regression) analyses, conducted separately for the dependent variables complexity and anisotropy, showed the following effects of the independent variables: significant interactions between wear type and tissue type (P = 0.0157 and P = 0.0003, respectively) and significant effects of location (P attrition confirm our hypothesis. The greatest geometric means were noted in dentine erosion for complexity and dentine attrition for anisotropy. Dentine also exhibited microwear characteristics that were more consistent with wear types than enamel. Overall, our findings could complement macrowear assessment in dental clinical practice and research and could assist in the early detection and management of pathologic tooth wear.
Sensitivity analysis on parameters and processes affecting vapor intrusion risk
Picone, Sara
2012-03-30
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion. © 2012 SETAC.
Context Sensitive Article Ranking with Citation Context Analysis
Doslu, Metin
2015-01-01
It is hard to detect important articles in a specific context. Information retrieval techniques based on full-text search can be inaccurate for identifying main topics, and they are not able to provide an indication of the importance of an article. Generating a citation network is a good way to find the most popular articles, but this approach is not context aware. The text around a citation mark is generally a good summary of the referred article, so citation context analysis presents an opportunity to use the wisdom of the crowd for detecting important articles in a context-sensitive way. In this work, we analyze citation contexts to rank articles properly for a given topic. The proposed model uses citation contexts in order to create a directed and weighted citation network based on the target topic: we create a directed and weighted edge between two articles if the citation context contains terms related to the target topic. Then we apply common ranking algorithms in order to find important articles in this newly cre...
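The ranking stage described in this abstract can be illustrated with a small weighted PageRank over a topic-weighted citation graph. The graph and weights below are invented, and the paper's own construction of edge weights from citation contexts is not reproduced:

```python
# Weighted PageRank sketch over a topic-weighted citation graph.
# Edge weights would come from topic-term matches in citation contexts.
def pagerank(edges, n, d=0.85, iters=50):
    """edges: dict (src, dst) -> weight over nodes 0..n-1."""
    out_w = [0.0] * n
    for (s, _), w in edges.items():
        out_w[s] += w
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for (s, t), w in edges.items():
            new[t] += d * rank[s] * w / out_w[s]
        # nodes with no outgoing weight distribute their mass uniformly
        dangling = d * sum(r for i, r in enumerate(rank) if out_w[i] == 0)
        rank = [x + dangling / n for x in new]
    return rank

# article 2 is cited by articles 0 and 1 with topic-relevant contexts,
# so it should come out on top
r = pagerank({(0, 2): 1.0, (1, 2): 0.5}, 3)
assert r[2] == max(r)
```

Normalising each citing article's outgoing weights (the `w / out_w[s]` term) is what makes a strongly topic-matched citation count for more than a weak one.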
Multi-scale sensitivity analysis of pile installation using DEM
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; Vargas Jr., Eurípedes do Amaral; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Because of the simplified approach used by the discrete element method (DEM) to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacement. Several simulations were conducted in order to investigate the effects of different numerical approaches, indicating that the method of installation and particle rotation could greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of performing a force equilibrium step prior to simulating any load test.
Understanding earth system models: how Global Sensitivity Analysis can help
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit a complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so to focus efforts for uncertainty reduction; finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
Sensitivity analysis in the WWTP modelling community – new opportunities and applications
DEFF Research Database (Denmark)
Sin, Gürkan; Ruano, M.V.; Neumann, Marc B.
2010-01-01
A mainstream viewpoint on sensitivity analysis in the wastewater modelling community is that it is a first-order differential analysis of outputs with respect to the parameters – typically obtained by perturbing one parameter at a time by a small factor. An alternative viewpoint on sensitivity analysis relates it to uncertainty analysis, which attempts to relate the total uncertainty in the outputs to the uncertainty in the inputs. In this paper we evaluate and discuss two such sensitivity analysis methods for two different purposes/case studies: (i) applying sensitivity analysis to a plant...
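The one-parameter-at-a-time, first-order view described in this abstract can be sketched in a few lines. The Monod-style toy model and its parameter names below are illustrative stand-ins, not a wastewater treatment plant model:

```python
# Minimal sketch of one-at-a-time (OAT) local sensitivity analysis:
# each parameter is perturbed by a small relative factor and the
# normalized first-order effect on the output is recorded.
# The model below is a hypothetical stand-in, not a WWTP model.

def model(params):
    # toy scalar response (Monod-like saturation term)
    return params["mu_max"] * params["X"] / (params["K_S"] + params["X"])

def oat_sensitivity(model, params, rel_step=0.01):
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + rel_step)
        # normalized (relative) sensitivity: (dy/y) / (dx/x)
        sens[name] = (model(perturbed) - base) / base / rel_step
    return sens

params = {"mu_max": 4.0, "K_S": 10.0, "X": 5.0}
s = oat_sensitivity(model, params)
print(s)
```

A positive value near 1 means the output scales almost proportionally with the parameter; negative values indicate an inhibiting effect.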
Considering Respiratory Tract Infections and Antimicrobial Sensitivity: An Exploratory Analysis
Directory of Open Access Journals (Sweden)
Amin, R.
2009-01-01
Full Text Available This study was conducted to observe the sensitivity and resistance status of antibiotics for respiratory tract infection (RTI). Throat swab culture and sensitivity reports of 383 patients revealed the following sensitivity profiles: amoxycillin (7.9%), penicillin (33.7%), ampicillin (36.6%), co-trimoxazole (46.5%), azithromycin (53.5%), erythromycin (57.4%), cephalexin (69.3%), gentamycin (78.2%), ciprofloxacin (80.2%), cephradine (81.2%), ceftazidime (93.1%), ceftriaxone (93.1%). Sensitivity to cefuroxime was reported in 93.1% of cases. Resistance was found with amoxycillin (90.1%), ampicillin (64.1%), penicillin (61.4%), co-trimoxazole (43.6%), erythromycin (39.6%), and azithromycin (34.7%). Cefuroxime demonstrated a higher level of sensitivity than the other antibiotics, supporting its consideration for patients with upper RTI.
A System for Automatically Generating Scheduling Heuristics
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers through an algorithm that automatically generates scheduling heuristics. The application selected to demonstrate this method solves the problem of scheduling telescope observations and is called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
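The greedy synthesis loop the abstract describes, hard time-window constraints always satisfied, soft preferences scored, can be sketched as follows. The request fields, slot model, and scoring rule are invented for illustration and are not the actual APA scheduler:

```python
# Hypothetical sketch of a greedy heuristic scheduler: each request has
# a hard time window (earliest/latest slot) and a soft preference
# (preferred slot). The scheduler repeatedly places the request whose
# best feasible slot scores highest, never violating a window.

def greedy_schedule(requests):
    schedule = {}            # slot -> request id
    unplaced = list(requests)
    while unplaced:
        best = None          # (score, request, slot)
        for req in unplaced:
            for slot in range(req["earliest"], req["latest"] + 1):
                if slot in schedule:
                    continue                 # hard constraint: one per slot
                score = -abs(slot - req["preferred"])  # soft preference
                if best is None or score > best[0]:
                    best = (score, req, slot)
        if best is None:
            break                            # remaining requests infeasible
        _, req, slot = best
        schedule[slot] = req["id"]
        unplaced.remove(req)
    return schedule

reqs = [
    {"id": "obs1", "earliest": 0, "latest": 3, "preferred": 1},
    {"id": "obs2", "earliest": 1, "latest": 2, "preferred": 1},
]
print(greedy_schedule(reqs))  # → {1: 'obs1', 2: 'obs2'}
```

Both requests prefer slot 1; the greedy rule gives it to the first one found and the second falls back to its next feasible slot.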
Addressing Authorship Issues Prospectively: A Heuristic Approach.
Roberts, Laura Weiss
2017-02-01
Collaborative writing in academic medicine gives rise to more richly informed scholarship, and yet challenging ethical issues surrounding authorship are commonly encountered. International guidelines on authorship help clarify whether individuals who have contributed to a completed scholarly work have been correctly included as authors, but these guidelines do not facilitate intentional and proactive authorship planning or decisions regarding authorship order.In this Commentary, the author presents a heuristic approach to help collaborators clarify, anticipate, and resolve practical and ethically important authorship issues as they engage in the process of developing manuscripts. As this approach illustrates, assignment of authorship should balance work effort and professional responsibility, reflecting the effort and intellectual contribution and the public accountability of the individuals who participate in the work. Using a heuristic approach for managing authorship issues prospectively can foster an ethical, collaborative writing process in which individuals are properly recognized for their contributions.
Heuristics for the economic dispatch problem
Energy Technology Data Exchange (ETDEWEB)
Flores, Benjamin Carpio [Centro Nacional de Controle de Energia (CENACE), Mexico, D.F. (Mexico). Dept. de Planificacion Economica de Largo Plazo], E-mail: benjamin.carpo@cfe.gob.mx; Laureano Cruces, A.L.; Lopez Bracho, R.; Ramirez Rodriguez, J. [Universidad Autonoma Metropolitana (UAM), Mexico, D.F. (Mexico). Dept. de Sistemas], Emails: clc@correo.azc.uam.mx, rlb@correo.azc.uam.mx, jararo@correo.azc.uam.mx
2009-07-01
This paper presents GRASP (Greedy Randomized Adaptive Search Procedure), Simulated Annealing (SAA), Genetic (GA), and Hybrid Genetic (HGA) algorithms for the economic dispatch problem (EDP), considering non-convex cost functions with dead zones as the only restrictions, and shows the results obtained. We also present parameter settings that are specifically applicable to the EDP, and a comparative table of results for each heuristic. It is shown that these methods outperform the classical methods without the need to assume convexity of the objective function. (author)
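To make the setting concrete, here is a hedged sketch of one of the named metaheuristics (simulated annealing) on a made-up two-unit dispatch instance: the demand balance is enforced by construction and unit 1 has a prohibited operating (dead) zone, which makes the cost non-convex. All coefficients are invented and this is not the paper's implementation:

```python
import math, random

# Toy economic dispatch: two units must jointly meet demand D.
# A dead zone for unit 1 is penalized, so the cost is non-convex
# and gradient/convexity-based methods do not apply directly.

D = 100.0  # total demand to be met

def cost(p1):
    p2 = D - p1  # demand balance: p2 is determined by p1
    c = 0.1 * p1**2 + 5 * p1 + 0.12 * p2**2 + 4 * p2
    if 40.0 <= p1 <= 50.0:   # prohibited operating (dead) zone for unit 1
        c += 1e6
    return c

def anneal(seed=0, t0=100.0, cooling=0.995, steps=5000):
    rng = random.Random(seed)
    p1 = rng.uniform(0.0, D)
    best = p1
    t = t0
    for _ in range(steps):
        cand = min(D, max(0.0, p1 + rng.gauss(0.0, 5.0)))
        delta = cost(cand) - cost(p1)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            p1 = cand
            if cost(p1) < cost(best):
                best = p1
        t *= cooling
    return best, cost(best)

p1, c = anneal()
print(p1, D - p1, c)
```

The annealer settles near the unconstrained quadratic minimum while the penalty keeps it out of the dead zone.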
Heuristics and Biases in Military Decision Making
2010-10-01
...critical components increases, we find mathematically that the probability of event (or system) failure increases. However, we again find that... Uncertainty: Heuristics and Biases, ed. Daniel Kahneman and Amos Tversky (New York: Cambridge University Press, 1982), 156-57. It is similar to a quiz I gave during my Game Theory class at West Point. 38. Mathematically, this problem can be solved using Bayesian inference. 39. Some may feel that the...
Empirical heuristics for improving Intermittent Demand Forecasting
Petropoulos, Fotios; Nikolopoulos, Konstantinos; Spithourakis, George; Assimakopoulos, Vassilios
2013-01-01
Purpose– Intermittent demand appears sporadically, with some time periods not even displaying any demand at all. Even so, such patterns constitute considerable proportions of the total stock in many industrial settings. Forecasting intermittent demand is a rather difficult task but of critical importance for corresponding cost savings. The current study aims to examine the empirical outcomes of three heuristics towards the modification of established intermittent demand forecasting approaches...
Taxonicity of anxiety sensitivity: a multi-national analysis.
Bernstein, Amit; Zvolensky, Michael J; Kotov, Roman; Arrindell, Willem A; Taylor, Steven; Sandin, Bonifacio; Cox, Brian J; Stewart, Sherry H; Bouvard, Martine; Cardenas, Samuel Jurado; Eifert, Georg H; Schmidt, Norman B
2006-01-01
Taxometric coherent cut kinetic analyses were used to test the latent structure of anxiety sensitivity in samples from North America (Canada and United States of America), France, Mexico, Spain, and The Netherlands (total n = 2741). Anxiety sensitivity was indexed by the 36-item Anxiety Sensitivity Index--Revised (ASI-R; [J. Anxiety Disord. 12(5) (1998) 463]). Four manifest indicators of anxiety sensitivity were constructed using the ASI-R: fear of cardiovascular symptoms, fear of respiratory symptoms, fear of publicly observable anxiety reactions, and fear of mental incapacitation. Results from MAXCOV-HITMAX, internal consistency tests, analyses of simulated Monte Carlo data, and a MAMBAC external consistency test indicated that the latent structure of anxiety sensitivity was taxonic in each of the samples. The estimated base rate of the anxiety sensitivity taxon differed slightly between nations, ranging from 11.5 to 21.5%. In general, the four ASI-R based manifest indicators showed high levels of validity. Results are discussed in relation to the conceptual understanding of anxiety sensitivity, with specific emphasis on theoretical refinement of the construct.
The recognition heuristic: A decade of research
Directory of Open Access Journals (Sweden)
Gerd Gigerenzer
2011-02-01
Full Text Available The recognition heuristic exploits the basic psychological capacity for recognition in order to make inferences about unknown quantities in the world. In this article, we review and clarify issues that emerged from our initial work (Goldstein and Gigerenzer, 1999, 2002, including the distinction between a recognition and an evaluation process. There is now considerable evidence that (i the recognition heuristic predicts the inferences of a substantial proportion of individuals consistently, even in the presence of one or more contradicting cues, (ii people are adaptive decision makers in that accordance increases with larger recognition validity and decreases in situations when the validity is low or wholly indeterminable, and (iii in the presence of contradicting cues, some individuals appear to select different strategies. Little is known about these individual differences, or how to precisely model the alternative strategies. Although some researchers have attributed judgments inconsistent with the use of the recognition heuristic to compensatory processing, little research on such compensatory models has been reported. We discuss extensions of the recognition model, open questions, unanticipated results, and the surprising predictive power of recognition in forecasting.
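The two-alternative inference rule reviewed above can be stated in a few lines. The city names and recognition set here are illustrative only:

```python
# Minimal sketch of the recognition heuristic: when exactly one of two
# objects is recognized, infer that the recognized one has the higher
# criterion value (e.g., larger city); otherwise the heuristic is silent
# and some other strategy must decide.

recognized = {"Berlin", "Munich"}  # objects the agent has heard of

def recognition_heuristic(a, b):
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a          # infer a scores higher on the criterion
    if rb and not ra:
        return b
    return None           # both or neither recognized: no inference

print(recognition_heuristic("Berlin", "Gelsenkirchen"))  # → Berlin
print(recognition_heuristic("Berlin", "Munich"))         # → None
```

Note the heuristic is non-compensatory: when it applies, it ignores any further cues, which is exactly the property the review discusses.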
Sorption of redox-sensitive elements: critical analysis
Energy Technology Data Exchange (ETDEWEB)
Strickert, R.G.
1980-12-01
The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.
Sensitivity analysis for lexicographic ordering in radiation therapy treatment planning
Long, T.; Matuszak, M.; Feng, M.; Fraass, B. A.; Ten Haken, R. K.; Romeijn, H. E.
2012-01-01
Purpose: To introduce a method to efficiently identify and calculate meaningful tradeoffs between criteria in an interactive IMRT treatment planning procedure. The method provides a systematic approach to developing high-quality radiation therapy treatment plans. Methods: Treatment planners consider numerous dosimetric criteria of varying importance that, when optimized simultaneously through multicriteria optimization, yield a Pareto frontier which represents the set of Pareto-optimal treatment plans. However, generating and navigating this frontier is a time-consuming, nontrivial process. A lexicographic ordering (LO) approach to IMRT uses a physician’s criteria preferences to partition the treatment planning decisions into a multistage treatment planning model. Because the relative importance of criteria optimized in the different stages may not necessarily constitute a strict prioritization, the authors introduce an interactive process, sensitivity analysis in lexicographic ordering (SALO), to allow the treatment planner control over the relative sequential-stage tradeoffs. By allowing this flexibility within a structured process, SALO implicitly restricts attention to and allows exploration of a subset of the Pareto efficient frontier that the physicians have deemed most important. Results: Improvements to treatment plans over a LO approach were found by implementing the SALO procedure on a brain case and a prostate case. In each stage, a physician assessed the tradeoff between previous stage and current stage criteria. The SALO method provided critical tradeoff information through curves approximating the relationship between criteria, which allowed the physician to determine the most desirable treatment plan. Conclusions: The SALO procedure provides treatment planners with a directed, systematic process to treatment plan selection. By following a physician’s prioritization, the treatment planner can avoid wasting effort considering clinically inferior
Analysis of Sea Ice Cover Sensitivity in Global Climate Model
Directory of Open Access Journals (Sweden)
V. P. Parhomenko
2014-01-01
Full Text Available The paper presents joint calculations using a 3D atmospheric general circulation model, an ocean model, and a sea ice evolution model. The purpose of the work is to analyze the seasonal and annual evolution of sea ice and the long-term variability of the model ice cover, to study its sensitivity to some model parameters, and to examine atmosphere-ice-ocean interaction. Results of 100-year simulations of Arctic basin sea ice evolution are analyzed. There are significant (about 0.5 m) inter-annual fluctuations of the ice cover. Reducing the ice-atmosphere sensible heat flux by 10% leads to growth of the average sea ice thickness within the limits of 0.05 m – 0.1 m, although at individual spatial points the thickness decreases by up to 0.5 m. An analysis of the seasonally changing average ice thickness, with clear sea ice and snow albedo decreased by 0.05 relative to the basic variant, shows an ice thickness reduction in the range from 0.2 m to 0.6 m, with the maximum change falling in the summer season of intensive melting. The spatial distribution of ice thickness changes shows that over a large part of the Arctic Ocean there was a reduction of ice thickness of up to 1 m, although there is also an area of some increase of the ice layer, mostly up to 0.2 m (Beaufort Sea). A 0.05 decrease of sea ice snow albedo leads to a reduction of average ice thickness by approximately 0.2 m, and this value depends only slightly on the season. In the following experiment, the influence of ocean-ice thermal interaction on the ice cover is estimated by increasing the heat flux from the ocean to the bottom surface of the sea ice by 2 W/sq. m in comparison with the base variant. The analysis demonstrates that the average ice thickness is reduced in a range from 0.2 m to 0.35 m, with small seasonal changes of this value. The numerical experiment results show that the ice cover and its seasonal evolution depend rather strongly on the varied parameters.
Stochastic sensitivity analysis using HDMR and score function
Indian Academy of Sciences (India)
Keywords. Stochastic sensitivity; structural reliability; high dimensional model representation; score function; statistical moment. ... School of Engineering, Swansea University, Swansea, SA2 8PP, UK; Structural Engineering Division, Department of Civil Engineering, Indian Institute of Technology Madras, Chennai 600 036 ...
Application of simplified model to sensitivity analysis of solidification process
Directory of Open Access Journals (Sweden)
R. Szopa
2007-12-01
Full Text Available The sensitivity models of thermal processes proceeding in the casting-mould-environment system give essential information concerning the influence of physical and technological parameters on the course of solidification. Knowledge of the time-dependent sensitivity field is also very useful in the case of numerical solution of inverse problems. The sensitivity models can be constructed using the direct approach, that is, by differentiation of the basic energy equations and boundary-initial conditions with respect to the parameter considered. Unfortunately, the analytical form of the equations and conditions obtained can be very complex both from the mathematical and numerical points of view. In that case, the other approach, consisting in the application of a differential quotient, can be applied. In the paper the exact and approximate approaches to the modelling of sensitivity fields are discussed, and examples of computations are also shown.
Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation
DEFF Research Database (Denmark)
Witt, Carsten
2012-01-01
The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1...
DEFF Research Database (Denmark)
Nuño Martinez, Edgar; Cutululis, Nicolaos Antonio
2016-01-01
...-case scenario analysis. This paper presents a general-purpose methodology intended to generate plausible operating states. The main focus lies on the generation of correlated random samples using a heuristic of the NORmal-to-Anything (NORTA) method. The proposed methodology was applied to model wind generation...
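The NORTA idea referenced above can be sketched with standard-library Python: draw correlated standard normals, push them through the normal CDF to obtain correlated uniforms, then through the inverse CDF of the desired marginal. The marginals (exponential) and correlation (0.8) below are illustrative, not the paper's wind-generation model:

```python
import math, random

# NORTA sketch for a pair of correlated variates with arbitrary marginals.

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def exp_inv_cdf(u, lam=1.0):
    u = min(u, 1.0 - 1e-15)      # guard against u == 1.0 at extreme draws
    return -math.log(1.0 - u) / lam

def norta_pair(rho, inv_cdf, rng):
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
    return inv_cdf(norm_cdf(z1)), inv_cdf(norm_cdf(z2))

rng = random.Random(42)
sample = [norta_pair(0.8, exp_inv_cdf, rng) for _ in range(20000)]
xs, ys = zip(*sample)
n = len(sample)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum(x * y for x, y in sample) / n - mx * my
sx = (sum(x * x for x in xs) / n - mx * mx) ** 0.5
sy = (sum(y * y for y in ys) / n - my * my) ** 0.5
corr = cov / (sx * sy)
print(round(mx, 2), round(corr, 2))
```

The induced Pearson correlation of the transformed sample is somewhat below the Gaussian-copula value of 0.8, a known property of the method.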
Tight Bounds on the Optimization Time of a Randomized Search Heuristic on Linear Functions
DEFF Research Database (Denmark)
Witt, Carsten
2013-01-01
The analysis of randomized search heuristics on classes of functions is fundamental to the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of a simple...
The Historical Ideal-Type as a Heuristic Device for Academic Storytelling by Sport Scholars
Tutka, Patrick; Seifried, Chad
2015-01-01
The goal of this research endeavor is to take the previous calls of sport scholars to expand into alternative research approaches (e.g., history, case study, law reviews, philosophy, etc.) and to show how storytelling can be an effective tool through the use of a heuristic device. The present analysis attempts to focus on the usage of the…
Exploring Higher Education Governance: Analytical Models and Heuristic Frameworks
Directory of Open Access Journals (Sweden)
Burhan FINDIKLI
2017-08-01
Full Text Available Governance in higher education, both at institutional and systemic levels, has experienced substantial changes within recent decades because of a range of world-historical processes such as massification, growth, globalization, marketization, public sector reforms, and the emergence of the knowledge economy and society. These developments have made governance arrangements and decision-making processes in higher education more complex and multidimensional than ever and have forced scholars to build new analytical and heuristic tools and strategies to grasp the intricacy and diversity of higher education governance dynamics. This article provides a systematic discussion of how and through which tools prominent scholars of higher education have analyzed governance in this sector by examining certain heuristic frameworks and analytical models. Additionally, the article shows how social scientific analysis of governance in higher education has proceeded in a cumulative way with certain revisions and syntheses rather than radical conceptual and theoretical ruptures, from Burton R. Clark’s seminal work to the present, revealing conceptual and empirical junctures between them.
Finding Solutions to Sudoku Puzzles Using Human Intuitive Heuristics
Directory of Open Access Journals (Sweden)
Nelishia Pillay
2012-09-01
Full Text Available Sudoku is a logic puzzle that has achieved international popularity. Given this, a number of computer solvers have been developed for this puzzle. Various methods including genetic algorithms, simulated annealing, particle swarm optimization and harmony search have been evaluated for this purpose. The approach described in this paper combines human intuition and optimization to solve Sudoku problems. The main contribution of this paper is a set of heuristic moves, incorporating human expertise, to solve Sudoku puzzles. The paper investigates the use of genetic programming to optimize a space of programs composed of these heuristic moves, with the aim of evolving a program that can produce a solution to the Sudoku problem instance. Each program is a combination of randomly selected moves. The approach was tested on 1800 Sudoku puzzles of differing difficulty. The approach presented was able to solve all 1800 problems, with a majority of these problems being solved in under a second. For a majority of the puzzles, evolution was not needed and random combinations of the moves created during the initial population produced solutions. For the more difficult problems, at least one generation of evolution was needed to find a solution. Further analysis revealed that solution programs for the more difficult problems could be found by enumerating random combinations of the move operators, however at a cost of higher runtimes. The performance of the approach presented was found to be comparable to other methods used to solve Sudoku problems and in a number of cases produced better results.
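One classic human-intuitive move of the kind such solvers compose is the single-candidate ("naked single") move: fill any cell whose row, column and 3x3 box together rule out all digits but one. The sketch below shows only that move, not the paper's actual move set or its genetic-programming layer:

```python
# "Naked single" heuristic move for Sudoku (0 denotes an empty cell).

def candidates(grid, r, c):
    if grid[r][c] != 0:
        return set()
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def apply_naked_singles(grid):
    changed = True
    while changed:                  # repeat until no cell is forced
        changed = False
        for r in range(9):
            for c in range(9):
                cand = candidates(grid, r, c)
                if len(cand) == 1:
                    grid[r][c] = cand.pop()
                    changed = True
    return grid

# Demo: a valid completed grid with one cell blanked; the move restores it.
grid = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
grid[0][0] = 0
apply_naked_singles(grid)
print(grid[0][0])  # → 1
```

Harder puzzles need further moves (or search); this move alone only fills cells that are logically forced.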
Fuel lattice design using heuristics and new strategies
Energy Technology Data Exchange (ETDEWEB)
Ortiz S, J. J.; Castillo M, J. A.; Torres V, M.; Perusquia del Cueto, R. [ININ, Carretera Mexico-Toluca s/n, Ocoyoacac 52750, Estado de Mexico (Mexico); Pelta, D. A. [ETS Ingenieria Informatica y Telecomunicaciones, Universidad de Granada, Daniel Saucedo Aranda s/n, 18071 Granada (Spain); Campos S, Y., E-mail: juanjose.ortiz@inin.gob.m [IPN, Escuela Superior de Fisica y Matematicas, Unidad Profesional Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)
2010-10-15
This work shows some results of fuel lattice design in BWRs when some pin rod allocation rules are not taken into account. Heuristic techniques such as Path Relinking and Greedy search were used to design the fuel lattices. The scope of this work is to examine how the classical fuel lattice design rules affect the results of the heuristic techniques and the fuel lattice quality. Fuel lattice quality is measured by the Power Peaking Factor and the Infinite Multiplication Factor at the beginning of the fuel lattice life; the CASMO-4 code was used to calculate these parameters. The analyzed rules are the following: pin rods with the lowest uranium enrichment are only allocated in the fuel lattice corner, and pin rods with gadolinium cannot be allocated at the fuel lattice edge. Fuel lattices with and without gadolinium in the main diagonal were studied. Some fuel lattices were simulated in an equilibrium cycle fuel reload, using Simulate-3 to verify their performance, so that the effective multiplication factor and thermal limits could be checked. The obtained results show good performance for some of the designed fuel lattices, even though the known rules were not implemented. An analysis of fuel lattice performance and fuel lattice design characteristics was made. For the tests, a Dell workstation running under a Linux platform was used. (Author)
Sensitivity analysis of conservation targets in systematic conservation planning.
Levin, Noam; Mazor, Tessa; Brokovich, Eran; Jablon, Pierre-Elie; Kark, Salit
2015-10-01
...flexibility in a conservation network is adequate when ~10-20% of the study area is considered irreplaceable (selection frequency values over 90%). This approach offers a useful sensitivity analysis when applying target-based systematic conservation planning tools, ensuring that the resulting protected area conservation network offers more choices for managers and decision makers.
Heuristic space diversity management in a meta-hyper-heuristic framework
CSIR Research Space (South Africa)
Grobler, J
2014-07-01
Full Text Available Heuristic Space Diversity Management in a Meta-Hyper-Heuristic Framework. Jacomine Grobler (Department of Industrial and Systems Engineering, University of Pretoria, and Council for Scientific and Industrial Research; jacomine.grobler@gmail.com) and Andries P. Engelbrecht...
A HYBRID HEURISTIC ALGORITHM FOR THE CLUSTERED TRAVELING SALESMAN PROBLEM
Directory of Open Access Journals (Sweden)
Mário Mestria
2016-04-01
Full Text Available ABSTRACT This paper proposes a hybrid heuristic algorithm, based on the metaheuristics Greedy Randomized Adaptive Search Procedure, Iterated Local Search and Variable Neighborhood Descent, to solve the Clustered Traveling Salesman Problem (CTSP). The hybrid heuristic algorithm uses several variable neighborhood structures, combining intensification (using local search operators) and diversification (a constructive heuristic and a perturbation routine). In the CTSP, the vertices are partitioned into clusters and all vertices of each cluster have to be visited contiguously. The CTSP is NP-hard since it includes the well-known Traveling Salesman Problem (TSP) as a special case. Our hybrid heuristic is compared with three heuristics from the literature and an exact method. Computational experiments are reported for different classes of instances. Experimental results show that the proposed hybrid heuristic obtains competitive results within reasonable computational time.
Cultural heuristics in risk assessment of HIV/AIDS.
Bailey, Ajay; Hutter, Inge
2006-01-01
Behaviour change models in HIV prevention tend to consider that risky sexual behaviours reflect risk assessments and that by changing risk assessments behaviour can be changed. Risk assessment is however culturally constructed. Individuals use heuristics or bounded cognitive devices derived from broader cultural meaning systems to rationalize uncertainty. In this study, we identify some of the cultural heuristics used by migrant men in Goa, India to assess their risk of HIV infection from different sexual partners. Data derives from a series of in-depth interviews and a locally informed survey. Cultural heuristics identified include visual heuristics, heuristics of gender roles, vigilance and trust. The paper argues that, for more culturally informed HIV/AIDS behaviour change interventions, knowledge of cultural heuristics is essential.
Identifying product development crises: The potential of adaptive heuristics
DEFF Research Database (Denmark)
Münzberger, C.; Stingl, Verena; Oehmen, Josef
2017-01-01
This paper introduces adaptive heuristics as a tool to identify crises in design projects and highlights potential applications of these heuristics as a decision support tool for crisis identification. Crises may emerge slowly or suddenly, and often have ambiguous signals; thus the identification of a project crisis is often difficult. Yet, to allow fast crisis response, timely identification is critical for successful crisis management. Adaptive heuristics are judgement strategies that can thrive in circumstances of limited and ambiguous information. This article presents a theoretical proposition for the application of heuristics in design sciences. To achieve this, the paper compares crises to 'business as usual' and presents sixteen indicators for emerging crises. These indicators are potential cues for adaptive heuristics. Specifically, three adaptive heuristics, One-single-cue, Fast...
Cadmium telluride nanocrystals as luminescent sensitizers in flow analysis.
Fortes, Paula R; Frigerio, Christian; Silvestre, Cristina I C; Santos, João L M; Lima, José L F C; Zagatto, Elias A G
2011-06-15
A fully automated multipumping flow system (MPFS) using water-soluble CdTe quantum dots (QD) as sensitizers is proposed for the chemiluminometric determination of the anti-diabetic drugs gliclazide and glipizide in pharmaceutical formulations. The nanocrystals acted as enhancers of the weak CL emission produced upon oxidation of sulphite by Ce(IV) in acidic medium, thus improving sensitivity and expanding the dynamic analytical concentration range. By interacting with the QD, the two analytes prevented their sensitizing effect, yielding a chemiluminescence quenching of the Ce(IV)–SO₃²⁻–CdTe QD system. The pulsed flow inherent to MPFS assured a fast and efficient mixing of all solutions inside the flow cell, circumventing the need for a reaction coil and facilitating the monitoring of the short-lived generated chemiluminescent species. QD crystal size, concentration and spectral region for measurement were investigated. Copyright © 2011 Elsevier B.V. All rights reserved.
Remarks on variational sensitivity analysis of elastoplastic deformations
Barthold, Franz-Joseph; Liedmann, Jan
2017-10-01
Design optimisation of structures and materials becomes more important in most engineering disciplines, especially in forming. The treatment of inelastic, path-dependent materials is a recent topic in this connection. Unlike purely elastic materials, it is necessary to store and analyse the deformation history in order to appropriately describe path-dependent material behaviour. For structural optimisation with design variables such as the outer shape of a structure, the boundary conditions and the material properties, it is necessary to compute sensitivities of all quantities of influence to use gradient based optimisation algorithms. Considering path-dependent materials, this includes the sensitivities of internal variables that represent the deformation history. We present an algorithm to compute afore-mentioned sensitivities, based on variational principles, in the context of finite deformation elastoplasticity. The novel approach establishes the possibility of design exploration using singular value decomposition.
Analysis of Natural Sensitizers to Enhance the Efficiency in Dye Sensitized Solar Cell
Directory of Open Access Journals (Sweden)
S.Rajkumar
2016-05-01
Full Text Available Three vegetable dyes are used for the study: anthocyanin dye from pomegranate aril extract, betalain dye from beet root extract, and chlorophyll dye from Tridax procumbens leaf extract. The anthocyanin and betalain, anthocyanin and chlorophyll, and betalain and chlorophyll dyes are blended into cocktails in equal proportions by volume. This study determines the effect of different extraction concentrations and different vegetable dyes on the energy gap using dye sensitized solar cells. The experimental results show that the cocktail dye blended from extracts of pomegranate arils, beet root and Tridax procumbens leaf, in the volumetric proportion 1:1 and extracted at room temperature, gives the greatest energy gap (Eg) of up to 1.87 eV.
Proposing New Heuristic Approaches for Preventive Maintenance Scheduling
Directory of Open Access Journals (Sweden)
majid Esmailian
2013-08-01
Full Text Available The purpose of preventive maintenance management is to perform a series of tasks that prevent or minimize production breakdowns and improve the reliability of production facilities. An important objective of preventive maintenance management is to minimize the downtime of production facilities. In order to accomplish this objective, personnel should efficiently allocate resources and determine an effective maintenance schedule. Gopalakrishnan (1997) developed a mathematical model and four heuristic approaches to solve the preventive maintenance scheduling problem of assigning skilled personnel to tasks that require a set of corresponding skills. However, there are several limitations in the prior work in this area of research. The craft combination problem has not been solved because the craft combination is assumed to be given. The craft combination problem concerns the computation of all combinations of assigning multi-skilled workers to the accomplishment of a particular task. In fact, determining craft combinations is difficult because of the exponential number of craft combinations that are possible. This research provides a heuristic approach for determining the craft combination and four new heuristic solution approaches for the preventive maintenance scheduling problem with multi-skilled workforce constraints. In order to examine the new heuristic approaches and to compare them with the heuristic approaches of Gopalakrishnan (1997), 81 standard problems were generated based on the criteria suggested by Gopalakrishnan (1997). The average solution quality (SQ) of the new heuristic approaches is 1.86%, against 8.32% for the old heuristic approaches. The solution times of the new heuristic approaches are also shorter: 0.78 second versus 6.43 seconds for the old heuristic approaches, while the solution time of the mathematical model provided by Gopalakrishnan (1997) is 152 seconds.
Why less can be more: A Bayesian framework for heuristics
Parpart, Paula
2017-01-01
When making decisions under uncertainty, one common view is that people rely on simple heuristics that deliberately ignore information. One of the greatest puzzles in cognitive science concerns why heuristics can sometimes outperform full-information models, such as linear regression, which make full use of the available information. In this thesis, I will contribute the novel idea that heuristics can be thought of as embodying extreme Bayesian priors. Thereby, an explanation for less-is-more...
Discovery of IPV6 Router Interface Addresses via Heuristic Methods
2015-09-01
NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Discovery of IPv6 Router Interface Addresses via Heuristic Methods, by Matthew D. Gray, September 2015 (funding number CNS-1111445). The thesis focuses on IPv6 router infrastructure and examines the possibility of using heuristic methods to discover IPv6 router interfaces. We consider two
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are too computationally demanding to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
Sensitivity based reduced approaches for structural reliability analysis
Indian Academy of Sciences (India)
The difficulty in computing the failure probability increases rapidly with the number of variables. In this paper, a ... Based on the sensitivity of the failure surface, three new reduction methods, namely ... Department of Aerospace Engineering, School of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP, UK ...
Fecal bacteria source characterization and sensitivity analysis of SWAT 2005
The Soil and Water Assessment Tool (SWAT) version 2005 includes a microbial sub-model to simulate fecal bacteria transport at the watershed scale. The objectives of this study were to demonstrate methods to characterize fecal coliform bacteria (FCB) source loads and to assess the model sensitivity t...
Comparative Analysis of Intercultural Sensitivity among Teachers Working with Refugees
Strekalova-Hughes, Ekaterina
2017-01-01
The unprecedented global refugee crisis and the accompanying political discourse places added pressures on teachers working with children who are refugees in resettling countries. Given the increased chances of having a refugee child in one's classroom, it is critical to explore how interculturally sensitive teachers are and if working with…
Ryan, Jason C; Banerjee, Ashis Gopal; Cummings, Mary L; Roy, Nicholas
2014-06-01
Planning operations across a number of domains can be considered as resource allocation problems with timing constraints. An unexplored instance of such a problem domain is the aircraft carrier flight deck, where, in current operations, replanning is done without the aid of any computerized decision support. Rather, veteran operators employ a set of experience-based heuristics to quickly generate new operating schedules. These expert user heuristics are neither codified nor evaluated by the United States Navy; they have grown solely from the convergent experiences of supervisory staff. As unmanned aerial vehicles (UAVs) are introduced in the aircraft carrier domain, these heuristics may require alterations due to differing capabilities. The inclusion of UAVs also allows for new opportunities for on-line planning and control, providing an alternative to the current heuristic-based replanning methodology. To investigate these issues formally, we have developed a decision support system for flight deck operations that utilizes a conventional integer linear program-based planning algorithm. In this system, a human operator sets both the goals and constraints for the algorithm, which then returns a proposed schedule for operator approval. As a part of validating this system, the performance of this collaborative human-automation planner was compared with that of the expert user heuristics over a set of test scenarios. The resulting analysis shows that human heuristics often outperform the plans produced by an optimization algorithm, but are also often more conservative.
A Graph Search Heuristic for Shortest Distance Paths
Energy Technology Data Exchange (ETDEWEB)
Chow, E
2005-03-24
This paper presents a heuristic for guiding A* search for finding the shortest distance path between two vertices in a connected, undirected, and explicitly stored graph. The heuristic requires a small amount of data to be stored at each vertex. The heuristic has application to quickly detecting relationships between two vertices in a large information or knowledge network. We compare the performance of this heuristic with breadth-first search on graphs with various topological properties. The results show that one or more orders of magnitude improvement in the number of vertices expanded is possible for large graphs, including Poisson random graphs.
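The abstract does not spell out the heuristic itself; a common way to guide A* with a small amount of data stored per vertex is a landmark (triangle-inequality) lower bound, sketched below. The graph, the landmark choice, and the function names are illustrative assumptions, not necessarily the paper's method.

```python
import heapq

def bfs_dist(adj, src):
    # Unweighted BFS distances from src; this is the small per-vertex data.
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def astar(adj, s, t, landmark_dists):
    # h(v) = max over landmarks L of |d(L,t) - d(L,v)|, an admissible bound
    # on d(v,t) by the triangle inequality in an undirected graph.
    def h(v):
        return max((d[t] - d[v] if d[t] > d[v] else d[v] - d[t]
                    for d in landmark_dists), default=0)
    dist = {s: 0}
    pq = [(h(s), s)]
    expanded = 0
    while pq:
        f, u = heapq.heappop(pq)
        if u == t:
            return dist[t], expanded
        if f > dist[u] + h(u):   # stale queue entry
            continue
        expanded += 1
        for v in adj[u]:
            nd = dist[u] + 1
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return None, expanded

# Toy graph: a 6-cycle with a chord 0-3.
adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
```

Compared with breadth-first search, the landmark bound lets A* skip vertices whose lower-bounded path length already exceeds the best known distance, which is the source of the order-of-magnitude savings the abstract reports on large graphs.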
Sensitivity analysis of the GNSS derived Victoria plate motion
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the minimal threshold value considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (longer data-span for some stations) by extending the used data-span: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations have become available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al., 2013), which can use different noise models and estimate offsets and seasonal signals simultaneously. The estimation of the seasonal signal is another important issue, since the time-series are rather short or have large data gaps at some stations, which implies that the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallée (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they, detected and undetected, can influence the estimated plate motion. The times of the offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the
Efficient heuristics for the Rural Postman Problem
Directory of Open Access Journals (Sweden)
GW Groves
2005-06-01
A local search framework for the (undirected) Rural Postman Problem (RPP) is presented in this paper. The framework allows local search approaches that have been applied successfully to the well-known Travelling Salesman Problem to be applied to the RPP as well. New heuristics for the RPP, based on this framework, are introduced; these are capable of solving significantly larger instances of the RPP than have been reported in the literature. Test results are presented for a number of benchmark RPP instances in a bid to compare efficiency and solution quality against known methods.
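As an illustration of the TSP-style local search that such a framework transfers to the RPP, here is a minimal 2-opt move on a vertex tour. The unit-square instance is hypothetical, and real RPP heuristics operate on tours over required edges rather than plain vertex tours.

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # Classic TSP 2-opt: reverse a tour segment whenever doing so shortens the tour.
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
                if dist[a][b] + dist[c][d] > dist[a][c] + dist[b][d] + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Unit square; the initial tour [0, 2, 1, 3] crosses itself, and 2-opt uncrosses it.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.hypot(ax - bx, ay - by) for (bx, by) in pts] for (ax, ay) in pts]
tour = two_opt([0, 2, 1, 3], dist)
```

The 2-opt neighbourhood terminates at a local optimum, which is exactly the "local optimality" such frameworks reach before any restart or perturbation step.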
Heuristic theory of positron-helium scattering.
Drachman, R. J.
1971-01-01
An error in a previous modified adiabatic approximation (Drachman, 1966), due to a lack of generality in the form of the short-range correlation part of the wave function for L greater than zero, is corrected heuristically by allowing the monopole suppression parameter to depend on L. An L-dependent local potential is constructed to fit the well-known positron-hydrogen s, p, and d wave phase shifts below the rearrangement threshold. The same form of potential yields a positron-helium cross-section in agreement with a recent experimental measurement near threshold.
Heuristics for the Robust Coloring Problem
Directory of Open Access Journals (Sweden)
Miguel Ángel Gutiérrez Andrade
2011-03-01
Let $G$ and $\bar{G}$ be complementary graphs. Given a penalty function defined on the edges of $\bar{G}$, the rigidity of a $k$-coloring of $G$ is the sum of the penalties of the edges of $\bar{G}$ joining vertices of the same color. Based on this definition, the Robust Coloring Problem (RCP) is stated as the search for a minimum-rigidity $k$-coloring. In this work a comparison of heuristics based on simulated annealing, GRASP and scatter search is presented; these are the best results that have been obtained for the RCP.
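The rigidity objective translates directly into code. The sketch below uses a hypothetical three-vertex instance and penalty function, and assumes (as in the standard RCP formulation) that penalties live on the edges of the complement graph while the coloring must be proper on $G$.

```python
def is_proper(coloring, edges):
    # A coloring is feasible for the RCP only if it is proper on G.
    return all(coloring[u] != coloring[v] for u, v in edges)

def rigidity(coloring, comp_penalty):
    # Rigidity: total penalty of complement edges joining same-colored vertices.
    return sum(p for (u, v), p in comp_penalty.items()
               if coloring[u] == coloring[v])

# Toy instance: G is the path 0-1-2, so the complement has the single edge (0, 2).
g_edges = [(0, 1), (1, 2)]
penalties = {(0, 2): 5.0}
coloring = {0: 0, 1: 1, 2: 0}   # proper 2-coloring, but 0 and 2 share a color
```

With $k = 2$ the coloring above is forced to pay the penalty 5.0, whereas a third color would bring the rigidity to zero; the heuristics in the paper search this trade-off for large instances.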
3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA
Energy Technology Data Exchange (ETDEWEB)
Flach, Greg [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Wohlwend, Jen [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-10-02
This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.
Sensitivity Analysis of Transonic Flow over J-78 Wings
Directory of Open Access Journals (Sweden)
Alexander Kuzmin
2015-01-01
3D transonic flow over swept and unswept wings with a J-78 airfoil at spanwise sections is studied numerically at negative and vanishing angles of attack. Solutions of the unsteady Reynolds-averaged Navier-Stokes equations are obtained with a finite-volume solver on unstructured meshes. The numerical simulation shows that the adverse Mach numbers, at which the lift coefficient is highly sensitive to small perturbations, are larger than those obtained earlier for 2D flow. At these larger Mach numbers, an onset of self-exciting oscillations of shock waves on the wings is observed. The swept wing exhibits a higher sensitivity to variations of the Mach number than the unswept one.
Sensitive Detection of Deliquescent Bacterial Capsules through Nanomechanical Analysis.
Nguyen, Song Ha; Webb, Hayden K
2015-10-20
Encapsulated bacteria usually exhibit strong resistance to a wide range of sterilization methods, and are often virulent. Early detection of encapsulation can be crucial in microbial pathology. This work demonstrates a fast and sensitive method for the detection of encapsulated bacterial cells. Nanoindentation force measurements were used to confirm the presence of deliquescent bacterial capsules surrounding bacterial cells. Force/distance approach curves contained characteristic linear-nonlinear-linear domains, indicating co-compression of the capsular layer and cell, indentation of the capsule, and compression of the cell alone. This is a sensitive method for the detection and verification of the encapsulation status of bacterial cells. Given that this method was successful in detecting the nanomechanical properties of two different layers of cell material, i.e. distinguishing between the capsule and the remainder of the cell, further development may potentially lead to the ability to analyze even thinner cellular layers, e.g. lipid bilayers.
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
2015-05-18
Computational Sensitivity Analysis for the Aerodynamic Design of Supersonic and Hypersonic Air Vehicles (Trident Scholar project report no. 442, 2015). ...surfaces. These features differentiate the GHV from previous coarse hypersonic vehicle models, where the integration of the propulsion system and...
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
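Among the techniques listed, rank transformation followed by correlation analysis is easy to sketch: propagate a sample of uncertain inputs through the model, rank-transform inputs and output, and correlate. The toy model below (the output depends strongly on x1 and weakly on x2) is an assumption for illustration only.

```python
import random

def ranks(xs):
    # Rank-transform values (1 = smallest); ties are not handled specially here.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Rank correlation: Pearson correlation of the rank-transformed samples.
    return pearson(ranks(x), ranks(y))

# Step 1-3 of the survey: sample inputs, propagate through an (assumed) model.
random.seed(0)
x1 = [random.uniform(0, 1) for _ in range(500)]
x2 = [random.uniform(0, 1) for _ in range(500)]
y = [10 * a + 0.1 * b for a, b in zip(x1, x2)]  # y depends mostly on x1
```

Large |spearman(x_i, y)| flags x_i as an influential input; the survey's other procedures (partial correlation, variance decomposition, gridded pattern tests) refine the same sampled data.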
Analysis of Consumers' Preferences and Price Sensitivity to Native Chickens.
Lee, Min-A; Jung, Yoojin; Jo, Cheorun; Park, Ji-Young; Nam, Ki-Chang
2017-01-01
This study analyzed consumers' preferences and price sensitivity to native chickens. A survey was conducted from Jan 6 to 17, 2014, and data were collected from consumers (n=500) living in Korea. Statistical analyses evaluated the consumption patterns of native chickens, preference marketing for native chicken breeds which will be newly developed, and price sensitivity measurement (PSM). Of the subjects who preferred broilers, 24.3% do not purchase native chickens because of the dryness and tough texture, while those who preferred native chickens liked their chewy texture (38.2%). Of the total subjects, 38.2% preferred fried native chickens (38.2%) for processed food, 38.4% preferred direct sales for native chicken distribution, 51.0% preferred native chickens to be slaughtered in specialty stores, and 32.4% wanted easy access to native chickens. Additionally, the price stress range (PSR) was 50 won and the point of marginal cheapness (PMC) and point of marginal expensiveness (PME) were 6,980 won and 12,300 won, respectively. Evaluation of the segmentation market revealed that consumers who prefer broiler to native chicken breeds were more sensitive to the chicken price. To accelerate the consumption of newly developed native chicken meat, it is necessary to develop a texture that each consumer needs, to increase the accessibility of native chickens, and to have diverse menus and recipes as well as reasonable pricing for native chickens.
Sensitivity analysis and uncertainty quantification for environmental models
Directory of Open Access Journals (Sweden)
Cartailler Thomas
2014-01-01
Environmental models often involve complex dynamic and spatial inputs and outputs. This raises specific issues when performing uncertainty and sensitivity analyses (SA). Based on applications in flood risk assessment and agro-ecology, we present current research to adapt the methods of variance-based SA to such models. After recalling the basic principles, we propose a metamodelling approach for dynamic models based on a reduced-basis approximation of PDEs, and we show how the error on the subsequent sensitivity indices can be quantified. We then present a mix of pragmatic and methodological solutions to perform the SA of a dynamic agro-climatic model with non-standard input factors. SA is then applied to a flood risk model with spatially distributed inputs and outputs. Block sensitivity indices are defined and a precise relationship between these indices and their support size is established. Finally, we show how the whole support landscape and its key features can be incorporated in the SA of a spatial model.
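The variance-based indices behind this line of work can be illustrated with a crude binning estimator of the first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y). The additive toy model below is an assumption for illustration, not one of the paper's applications, and serious studies use dedicated estimators rather than binning.

```python
import random

def first_order_index(x, y, bins=20):
    # Crude estimate of S_i = Var(E[Y | X_i]) / Var(Y): bin X_i, take the
    # variance of the per-bin output means relative to the total variance.
    n = len(y)
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    lo, hi = min(x), max(x)
    sums = [0.0] * bins
    counts = [0] * bins
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / (hi - lo) * bins), bins - 1)
        sums[b] += yi
        counts[b] += 1
    var_cond = sum(c * (s / c - mean_y) ** 2
                   for s, c in zip(sums, counts) if c > 0) / n
    return var_cond / var_y

random.seed(1)
n = 20000
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [a + 0.2 * b for a, b in zip(x1, x2)]  # analytically S1 ~ 0.96, S2 ~ 0.04
```

For this additive model the indices sum to one; interactions between inputs would leave a gap, which total-effect indices (not shown) would pick up.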
Case-Based Reasoning as a Heuristic Selector in a Hyper-Heuristic for Course Timetabling Problems
Petrovic, Sanja; Qu, Rong
2002-01-01
This paper studies Knowledge Discovery (KD) using Tabu Search and Hill Climbing within Case-Based Reasoning (CBR) as a hyper-heuristic method for course timetabling problems. The aim of the hyper-heuristic is to choose the best heuristic(s) for given timetabling problems according to the knowledge stored in the case base. KD in CBR is a 2-stage iterative process on both case representation and the case base. Experimental results are analysed and related research issues for future work are dis...
Intelligent switching between different noise propagation algorithms: analysis and sensitivity
2012-08-10
When modeling aircraft noise on a large scale (such as an analysis of annual aircraft operations at an airport), it is important that the noise propagation model used for the analysis be both efficient and accurate. In this analysis, three differ...
How do people judge risks: availability heuristic, affect heuristic, or both?
Pachur, Thorsten; Hertwig, Ralph; Steinmann, Florian
2012-09-01
How does the public reckon which risks to be concerned about? The availability heuristic and the affect heuristic are key accounts of how laypeople judge risks. Yet, these two accounts have never been systematically tested against each other, nor have their predictive powers been examined across different measures of the public's risk perception. In two studies, we gauged risk perception in student samples by employing three measures (frequency, value of a statistical life, and perceived risk) and by using a homogeneous (cancer) and a classic set of heterogeneous causes of death. Based on these judgments of risk, we tested precise models of the availability heuristic and the affect heuristic and different definitions of availability and affect. Overall, availability-by-recall, a heuristic that exploits people's direct experience of occurrences of risks in their social network, conformed to people's responses best. We also found direct experience to carry a high degree of ecological validity (and one that clearly surpasses that of affective information). However, the relative impact of affective information (as compared to availability) proved more pronounced in value-of-a-statistical-life and perceived-risk judgments than in risk-frequency judgments. Encounters with risks in the media, in contrast, played a negligible role in people's judgments. Going beyond the assumption of exclusive reliance on either availability or affect, we also found evidence for mechanisms that combine both, either sequentially or in a composite fashion. We conclude with a discussion of policy implications of our results, including how to foster people's risk calibration and the success of education campaigns.
Combined Heuristic Attack Strategy on Complex Networks
Directory of Open Access Journals (Sweden)
Marek Šimon
2017-01-01
Usually, the existence of a complex network is considered an advantage, and efforts are made to increase its robustness against attack. However, there also exist harmful and/or malicious networks, from social ones (spreading hoaxes, corruption, phishing, extremist ideology, and terrorist support) to computer networks spreading computer viruses or DDoS attack software, and even biological networks of carriers or transport centers spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case-scenario test for the robustness of a useful network. A common measure of the robustness of a network is its disintegration level after removal of a fraction of its nodes. This robustness can be calculated as the ratio of the number of nodes in the largest remaining network component to the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. The efficiency of classical attack approaches is compared to that of the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks, as well as on real-world networks. Our attack strategy results in faster disintegration, which is counterbalanced by its slightly increased computational demands.
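The robustness measure described (fraction of nodes in the largest remaining component) and a classical degree-based sequential attack, one of the baselines such combined heuristics are compared against, can be sketched as follows. The star graph is a toy example, not the paper's combined strategy.

```python
from collections import deque

def largest_component_fraction(adj, removed):
    # Fraction of original nodes in the largest connected component
    # after the given nodes are removed (BFS over the surviving graph).
    seen = set(removed)
    best = 0
    for s in adj:
        if s in seen:
            continue
        size, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best / len(adj)

def degree_attack(adj, k):
    # Sequentially delete the k currently highest-degree nodes,
    # recomputing degrees after each deletion.
    removed = set()
    for _ in range(k):
        deg = {u: sum(v not in removed for v in adj[u])
               for u in adj if u not in removed}
        removed.add(max(deg, key=deg.get))
    return removed

# Star graph: removing the single hub disconnects everything.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

Plotting this fraction against the number of removed nodes gives the disintegration curve on which attack strategies are compared.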
Analysis of Groundwater level Changes in Wells Sensitive to Earthquakes
Liu, C.; Lee, C.; Chia, Y.; Hsiao, C.; Kuo, K.
2009-12-01
Earthquake-related groundwater level changes have often been observed in many places in Taiwan, which is located at the boundary between the Eurasian plate and the Philippine Sea plate. For instance, more than 160 monitoring well stations recorded coseismic changes during the 1999 Chi-Chi earthquake. These stations, which consist of one to five wells of different depths, were installed in the coastal plain or hillsides. In this study, we analyze monitoring data from four well stations (Pingding, Chukou, Yuanlin and Donher) to investigate the sensitivity of well water level to earthquakes. The variation of groundwater level with natural and human factors, such as rainfall, barometric pressure, earth tides and pumping, was studied to understand the background changes in these wells. We found various relations between the magnitude and epicentral distance of earthquakes and the coseismic groundwater level changes at different wells. The sensitivity of the monitoring wells was estimated from the ratio of the number of coseismic groundwater level changes to the number of large earthquakes during the recording period. Earthquake-related coseismic groundwater level changes may reflect the redistribution of crustal stress and strain. However, coseismic changes in multiple-well monitoring stations may vary with depth. Also, water level data from wells with a higher sampling rate show more detail in coseismic and background changes. Therefore, high-resolution and high-frequency data are essential for future study of groundwater level changes in response to earthquakes or fault movement.
Iris Recognition for Partially Occluded Images: Methodology and Sensitivity Analysis
Directory of Open Access Journals (Sweden)
A. Poursaberi
2007-01-01
Accurate iris detection is a crucial part of an iris recognition system. One of the main issues in iris segmentation is coping with the occlusion that occurs due to eyelids and eyelashes. In the literature, various methods have been suggested to solve the occlusion problem. In this paper, two different segmentations of the iris are presented. In the first algorithm, a circle with an appropriate diameter is located around the pupil; the iris area encircled by this circular boundary is then used for recognition. In the second method, a circle with a larger diameter is again located around the pupil; this time, however, only the lower part of the encircled iris area is utilized for individual recognition. Wavelet-based texture features are used in the process, and Hamming and harmonic mean distance classifiers are exploited as a mixed classifier in the suggested algorithm. It is observed that relying on a smaller but more reliable part of the iris, though reducing the net amount of information, improves the overall performance. Experimental results on the CASIA database show that our method has a promising performance, with an accuracy of 99.31%. The sensitivity of the proposed method is also analyzed with respect to contrast, illumination, and noise; lower sensitivity to all factors is observed when the lower half of the iris is used for recognition.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
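ADIFOR generates derivative code for Fortran, but the forward-mode idea it builds on can be shown with a minimal dual-number class: each arithmetic operation carries a value together with its exact derivative. This is an illustration of automatic differentiation in general, not of ADIFOR's forward/reverse machinery or the paper's incremental iterative scheme.

```python
class Dual:
    # Forward-mode AD: propagate (value, derivative) through arithmetic.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed the derivative dx/dx = 1
y = f(x)             # y carries both f(2) and f'(2), exact to machine precision
```

Forward mode costs one extra sweep per input variable, which is why the paper pairs it with the reverse (adjoint) mode, cheap per output, to assemble full Hessians efficiently.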
Analysis of the stability and sensitivity of jets in crossflow
Regan, Marc; Mahesh, Krishnan
2016-11-01
Jets in crossflow (transverse jets) are a canonical fluid flow in which a jet of fluid is injected normal to a crossflow. A high-fidelity, unstructured, incompressible, DNS solver is shown (Iyer & Mahesh 2016) to reproduce the complex shear layer instability seen in low-speed jets in crossflow experiments. Vertical velocity spectra taken along the shear layer show good agreement between simulation and experiment. An analogy to countercurrent mixing layers has been proposed to explain the transition from absolute to convective stability with increasing jet to crossflow ratios. Global linear stability and adjoint sensitivity techniques are developed within the unstructured DNS solver in an effort to further understand the stability and sensitivity of jets in crossflow. An Arnoldi iterative approach is used to solve for the most unstable eigenvalues and their associated eigenmodes for the direct and adjoint formulations. Frequencies from the direct and adjoint modal analyses show good agreement with simulation and experiment. Development, validation, and results for the transverse jet will be presented. Supported by AFOSR.
Sensitivity Analysis on Flexible Road Pavement Life Cycle Cost Model
African Journals Online (AJOL)
A Flow-Sensitive Analysis of Privacy Properties
DEFF Research Database (Denmark)
Nielson, Hanne Riis; Nielson, Flemming
2007-01-01
that information I send to some service is never leaked to another service - unless I give my permission? We shall develop a static program analysis for the pi-calculus and show how it can be used to give privacy guarantees like the ones requested above. The analysis records the explicit information flow...
IMPACT OF HEURISTIC STRATEGIES ON PUPILS’ ATTITUDES TO PROBLEM SOLVING
Directory of Open Access Journals (Sweden)
NOVOTNÁ, Jarmila
2015-03-01
The paper is a sequel to the article (Novotná et al., 2014), where the authors present the results of a 4-month experiment whose main aim was to change pupils' culture of problem solving by using heuristic strategies suitable for problem solving in mathematics education. Novotná et al. (2014) focused on the strategies Analogy, Guess – check – revise, Systematic experimentation, Problem reformulation, Solution drawing, Working backwards and Use of graphs of functions. This paper focuses on two other heuristic strategies convenient for improving pupils' culture of problem solving: Introduction of an auxiliary element and Omitting a condition. In the first part, the strategies Guess – check – revise, Working backwards, Introduction of an auxiliary element and Omitting a condition are characterized in detail and illustrated by examples of their use in order to capture their characteristics. In the second part we focus on the newly introduced strategies and analyse work with them in lessons using the tools from Novotná et al. (2014). The analysis of the results of the experiment indicates that, unlike the strategy Introduction of an auxiliary element, successful use of the strategy Omitting a condition requires longer teacher's work with the pupils. The following analysis works with the strategy Systematic experimentation, which seemed the easiest to master in Novotná et al. (2014); we focus on the dangers it bears when used by pupils. The conclusion from Novotná et al. (2014), which showed that if pupils are introduced to an environment that supports their creativity, their attitude towards problem solving changes in a positive way already after a period of four months, is confirmed.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing the datasets simulated from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
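The K-nearest-neighbour comparison described above can be sketched in miniature. The Gaussian location-shift mechanism, the candidate parameter values and the one-sided average nearest-neighbour distance below are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

def nn_distance(sample_a, sample_b):
    """Average distance from each point in sample_a to its nearest
    neighbour in sample_b (a crude one-sided closeness measure)."""
    d = np.abs(sample_a[:, None] - sample_b[None, :])
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
observed = rng.normal(loc=0.0, scale=1.0, size=200)

# Hypothetical sensitivity parameter: a location shift in the assumed
# missing-data mechanism. Simulate data under each candidate value and
# score how close the simulated sample lies to the observed one.
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]
scores = {delta: nn_distance(observed, rng.normal(delta, 1.0, 200))
          for delta in candidates}
best = min(scores, key=scores.get)  # most plausible candidate value
```

Candidates with large distances would be rejected as implausible, in the spirit of the plausibility evaluation system described above.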
Using Tabu Search Heuristics in Solving the Vehicle Routing ...
African Journals Online (AJOL)
Nafiisah
Tabu Search, according to Glover, is a meta-heuristic that guides a local heuristic to explore the solution space beyond local optimality. Tabu Search starts just as an ordinary local search, proceeding iteratively from one solution to the next until some stopping criterion is satisfied, while making use of strategies to avoid getting trapped ...
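The loop described in this abstract can be sketched on a toy 0-1 problem. The objective function, tenure value and aspiration rule below are illustrative choices, not Glover's full scheme:

```python
import random

def tabu_search(f, n_bits, iters=200, tenure=5, seed=1):
    """Minimal tabu search sketch: flip one bit per move, forbid
    re-flipping recently used bits unless the move beats the best
    solution found so far (aspiration criterion)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_val = x[:], f(x)
    tabu = {}  # bit index -> iteration until which the flip is tabu
    for it in range(iters):
        moves = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            v = f(y)
            if tabu.get(i, -1) < it or v > best_val:  # aspiration
                moves.append((v, i, y))
        if not moves:
            continue
        v, i, y = max(moves)      # best admissible move, even if worsening
        x = y
        tabu[i] = it + tenure
        if v > best_val:
            best, best_val = y[:], v
    return best, best_val

# Toy objective: one-max, except that bit 0 is penalized.
obj = lambda bits: sum(bits[1:]) - 3 * bits[0]
sol, val = tabu_search(obj, 10)   # optimum: bit 0 off, all others on
```

Accepting the best admissible move even when it worsens the solution is what lets the search leave local optima, while the tabu list keeps it from cycling straight back.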
A genetic algorithm selection perturbative hyper-heuristic for solving ...
African Journals Online (AJOL)
The benefit of incorporating hill-climbing into operators for school timetabling is evident from previous research in this domain [3, 13, 15, 33]. Versions of these ...... One of the disadvantages of hyper-heuristics is the higher runtimes as a result of having to construct a solution to evaluate each heuristic combination. This is ...
Efficient Heuristics for Simulating Population Overflow in Parallel Networks
Zaburnenko, T.S.; Nicola, V.F.
2006-01-01
In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in networks of parallel queues. This heuristic approximates the "optimal" state-dependent change of measure without the need for the costly optimization involved in other
Heuristic methods for shared backup path protection planning
DEFF Research Database (Denmark)
Haahr, Jørgen Thorlund; Stidsen, Thomas Riis; Zachariasen, Martin
2012-01-01
We present heuristic algorithms and lower bound methods for the SBPP planning problem. Experimental results show that the heuristic algorithms are able to find good quality solutions in minutes. A solution gap of less than 3.5% was achieved for more than half of the benchmark instances (and a gap of less...
Proximity search heuristics for wind farm optimal layout
DEFF Research Database (Denmark)
Fischetti, Martina; Monaci, Michele
2016-01-01
A heuristic framework for turbine layout optimization in a wind farm is proposed that combines ad-hoc heuristics and mixed-integer linear programming. In our framework, large-scale mixed-integer programming models are used to iteratively refine the current best solution according to the recently...
Heuristic Inquiry: A Personal Journey of Acculturation and Identity Reconstruction
Djuraskovic, Ivana; Arthur, Nancy
2010-01-01
Heuristic methodology attempts to discover the nature and meaning of phenomenon through internal self-search, exploration, and discovery. Heuristic methodology encourages the researcher to explore and pursue the creative journey that begins inside one's being and ultimately uncovers its direction and meaning through internal discovery (Douglass &…
Hyper-heuristics with low level parameter adaptation.
Ren, Zhilei; Jiang, He; Xuan, Jifeng; Luo, Zhongxuan
2012-01-01
Recent years have witnessed the great success of hyper-heuristics applied to numerous real-world applications. Hyper-heuristics raise the generality of search methodologies by manipulating a set of low level heuristics (LLHs) to solve problems, and aim to automate the algorithm design process. However, those LLHs are usually parameterized, which may contradict the domain-independent motivation of hyper-heuristics. In this paper, we show how to automatically maintain low level parameters (LLPs) using a hyper-heuristic with LLP adaptation (AD-HH), and exemplify the feasibility of AD-HH by adaptively maintaining the LLPs for two hyper-heuristic models. Furthermore, aiming at tackling the search space expansion due to the LLP adaptation, we apply a heuristic space reduction (SAR) mechanism to improve the AD-HH framework. The integration of the LLP adaptation and the SAR mechanism is able to explore the heuristic space more effectively and efficiently. To evaluate the performance of the proposed algorithms, we choose the p-median problem as a case study. The empirical results show that, with the adaptation of the LLPs and the SAR mechanism, the proposed algorithms are able to achieve competitive results over the three heterogeneous classes of benchmark instances.
Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.
Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique
2016-01-01
Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions of the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contributes to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
A genetic algorithm selection perturbative hyper-heuristic for solving ...
African Journals Online (AJOL)
Hyper-heuristics, on the other hand, search a heuristic space with the aim of providing a more generalized solution to the particular optimisation problem. This is a fairly new technique that has proven to be successful in solving various combinatorial optimisation problems. There has not been much research into the use of ...
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
On the empirical performance of (T,s,S) heuristics
Babai, M. Zied; Syntetos, Aris A.; Teunter, Ruud
2010-01-01
The periodic (T,s,S) policies have received considerable attention in the academic literature. Determination of the optimal parameters is computationally prohibitive, and a number of heuristic procedures have been put forward. However, these heuristics have never been compared in an extensive
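As a rough illustration of the policy class being compared, a (T,s,S) rule reviews inventory every T periods and orders up to S whenever the position has fallen to s or below. The demand distribution, zero lead time and cost-free setting below are simplifying assumptions for the sketch:

```python
import random

def simulate_TsS(T, s, S, demand, horizon=365, seed=2):
    """Crude simulation of a periodic-review (T,s,S) policy with zero
    lead time: every T periods the inventory position is reviewed and,
    if it has dropped to s or below, an order raises it back to S."""
    rng = random.Random(seed)
    inventory, orders, stockouts = S, 0, 0
    for t in range(horizon):
        if t % T == 0 and inventory <= s:   # periodic review
            orders += 1
            inventory = S                   # order-up-to level
        d = demand(rng)
        if d > inventory:
            stockouts += 1
            inventory = 0                   # unmet demand is lost
        else:
            inventory -= d
    return orders, stockouts

orders, stockouts = simulate_TsS(T=7, s=20, S=60,
                                 demand=lambda rng: rng.randint(0, 8))
```

Heuristics for this policy class differ in how they pick T, s and S; a simulation like this is one way their cost performance can be compared empirically.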
Providing Automatic Support for Heuristic Rules of Methods
Tekinerdogan, B.; Aksit, Mehmet; Demeyer, Serge; Bosch, H.G.P.; Bosch, Jan
In method-based software development, software engineers create artifacts based on the heuristic rules of the adopted method. Most CASE tools, however, do not actively assist software engineers in applying the heuristic rules. To provide an active support, the rules must be formalized, implemented
Heuristic methods for single link shared backup path protection
DEFF Research Database (Denmark)
Haahr, Jørgen Thorlund; Stidsen, Thomas Riis; Zachariasen, Martin
2014-01-01
We present heuristic algorithms and lower bound methods for the SBPP planning problem. Experimental results show that the heuristic algorithms are able to find good quality solutions in minutes. A solution gap of less than 3.5% was achieved for 5 of 7 benchmark instances (and a gap of less than 11% for the remaining...
A Heuristic Hierarchical Scheme for Academic Search and Retrieval
DEFF Research Database (Denmark)
Amolochitis, Emmanouil; Christou, Ioannis T.; Tan, Zheng-Hua
2013-01-01
We present PubSearch, a hybrid heuristic scheme for re-ranking academic papers retrieved from standard digital libraries such as the ACM Portal. The scheme is based on the hierarchical combination of a custom implementation of the term frequency heuristic, a time-depreciated citation score...
Heuristic approach to the passive optical network with fibre duct ...
African Journals Online (AJOL)
Integer programming, network flow optimisation, passive optical network, ... algorithm before providing a greedy planning heuristic [11]. The multi- ... A wide range of meta-heuristics have also been employed to solve PONPP, with genetic ... In the case of PONPP, the objective is to find a subset of open facilities F, with every.
Heuristics: foundations for a novel approach to medical decision making.
Bodemer, Nicolai; Hanoch, Yaniv; Katsikopoulos, Konstantinos V
2015-03-01
Medical decision-making is a complex process that often takes place under uncertainty, that is, when knowledge, time, and resources are limited. How can we ensure good decisions? We present research on heuristics (simple rules of thumb) and discuss how medical decision-making can benefit from these tools. We challenge the common view that heuristics are only second-best solutions by showing that they can be more accurate, faster, and easier to apply than more complex strategies. Using the example of fast-and-frugal decision trees, we illustrate how heuristics can be studied and implemented in the medical context. Finally, we suggest how a heuristic-friendly culture supports the study and application of heuristics as complementary strategies to existing decision rules.
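A fast-and-frugal tree asks about one cue at a time and allows an exit at every level. The cue names and their ordering below are hypothetical, chosen only to show the structure; this is not a validated clinical tree:

```python
def fast_frugal_tree(cues):
    """Illustrative fast-and-frugal tree: cues are checked one at a
    time and every cue offers an immediate exit. Cue names and their
    ordering are hypothetical, not a validated clinical tree."""
    if cues["st_segment_elevated"]:
        return "high risk"   # first cue: immediate exit
    if not cues["chest_pain_primary"]:
        return "low risk"    # second cue: immediate exit
    # last cue: both remaining answers exit
    return "high risk" if cues["other_symptom_present"] else "low risk"

patient = {"st_segment_elevated": False,
           "chest_pain_primary": True,
           "other_symptom_present": False}
decision = fast_frugal_tree(patient)  # -> "low risk"
```

The appeal in a clinical setting is that the rule is transparent and needs at most three questions, which is what makes such heuristics fast and frugal.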
Gene expression analysis identifies global gene dosage sensitivity in cancer
DEFF Research Database (Denmark)
Fehrmann, Rudolf S. N.; Karjalainen, Juha M.; Krajewska, Malgorzata
2015-01-01
Many cancer-associated somatic copy number alterations (SCNAs) are known. Currently, one of the challenges is to identify the molecular downstream effects of these variants. Although several SCNAs are known to change gene expression levels, it is not clear whether each individual SCNA affects gene expression. We reanalyzed 77,840 expression profiles and observed a limited set of 'transcriptional components' that describe well-known biology, explain the vast majority of variation in gene expression and enable us to predict the biological function of genes. On correcting expression profiles for these components, we observed that the residual expression levels (in 'functional genomic mRNA' profiling) correlated strongly with copy number. DNA copy number correlated positively with expression levels for 99% of all abundantly expressed human genes, indicating global gene dosage sensitivity. By applying...
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Dryer, F.L.; Yetter, R.A. [Princeton Univ., NJ (United States)
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10{sup {minus}2} to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and non-dispersive infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional pre-mixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling studies employ well defined and validated mechanisms for the CO/H{sub 2}/oxidant systems.
Parameters Sensitivity Analysis of Position-Based Impedance Control for Bionic Legged Robots’ HDU
Directory of Open Access Journals (Sweden)
Kaixian Ba
2017-10-01
Full Text Available For the hydraulic drive unit (HDU) on the joints of bionic legged robots, this paper proposes a position-based impedance control method. The impedance control performance is then tested on an HDU performance test platform. Further, a first-order sensitivity matrix method is proposed to analyze the dynamic sensitivity of four main control parameters under four working conditions. To research the parameter sensitivity quantitatively, two sensitivity indexes are defined, and the sensitivity analysis results are verified by experiments. The results of the experiments show that, when combined with corresponding optimization strategies, the dynamic compliance composition theory and the results from sensitivity analysis can compensate for the control parameters and optimize the control performance in different working conditions.
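The first-order sensitivity matrix idea can, in its simplest form, be approximated by finite differences. The toy response function and step-size rule below are assumptions for illustration, not the paper's HDU model or its sensitivity indexes:

```python
import numpy as np

def sensitivity_matrix(response, params, eps=1e-6):
    """First-order sensitivities of a vector-valued response to each
    parameter, estimated by central finite differences."""
    p = np.asarray(params, dtype=float)
    y0 = np.asarray(response(p), dtype=float)
    S = np.zeros((y0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps * max(1.0, abs(p[j]))   # scale step to parameter size
        S[:, j] = (response(p + dp) - response(p - dp)) / (2 * dp[j])
    return S

# Toy response standing in for the plant dynamics: y = (2*k, 4*c).
resp = lambda p: np.array([p[0] * 2.0, p[1] * 4.0])
S = sensitivity_matrix(resp, [3.0, 0.5])  # -> [[2, 0], [0, 4]]
```

Large entries of S flag the control parameters whose uncertainty matters most under a given working condition, which is the role the sensitivity indexes play in the abstract.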
Energy Technology Data Exchange (ETDEWEB)
Spiessl, Sabine; Becker, Dirk-Alexander
2017-06-15
Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes, and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation
Carmichael, Marc G; Liu, Dikai
2015-01-01
Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.
Likhachev, D. V.
2017-11-01
In the field of optical metrology, the selection of the best model to fit experimental data is an absolutely nontrivial problem. In practice, this is a very subjective and formidable task which depends highly on the metrology expert's opinion. In this paper, we propose a systematic approach to model selection in ellipsometric data analysis. We apply two well-established statistical methods for model selection, namely the Akaike (AIC) and Bayesian (BIC) Information Criteria, to compare different dispersion models of various complexities and objectively determine the "best" one from a set of candidate models. The information criteria suggest the most optimal way to quantify the balance between goodness of fit and model complexity. In combination with screening-type parametric sensitivity analysis based on so-called "elementary effects" (the Morris method), this approach allows one to compare and rate various models, identify key model parameters and significantly enhance the process of evaluating ellipsometric measurements.
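For a least-squares fit with Gaussian errors, AIC and BIC can be computed directly from the residual sum of squares. The polynomial "dispersion models" and synthetic data below are stand-ins for illustration, not actual ellipsometric models:

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC for a least-squares fit with n data points, k free
    parameters and residual sum of squares rss, assuming Gaussian errors."""
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # max log-likelihood
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

# Compare a simple and an over-parameterized model on synthetic linear data.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.05, x.size)
results = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    results[degree] = aic_bic(rss, x.size, degree + 1)
```

The model with the lower criterion value is preferred; BIC's log(n) penalty punishes the extra parameters of the degree-5 model more heavily than AIC does.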
Sensitivity Analysis of the Bone Fracture Risk Model
Lewandowski, Beth; Myers, Jerry; Sibonga, Jean Diane
2017-01-01
Introduction: The probability of bone fracture during and after spaceflight is quantified to aid in mission planning, to determine required astronaut fitness standards and training requirements and to inform countermeasure research and design. Probability is quantified with a probabilistic modeling approach where distributions of model parameter values, instead of single deterministic values, capture the parameter variability within the astronaut population, and fracture predictions are probability distributions with a mean value and an associated uncertainty. Because of this uncertainty, the model in its current state cannot discern an effect of countermeasures on fracture probability, for example between use and non-use of bisphosphonates or between spaceflight exercise performed with the Advanced Resistive Exercise Device (ARED) or on devices prior to installation of ARED on the International Space Station. This is thought to be due to the inability to measure key contributors to bone strength, for example, geometry and volumetric distributions of bone mass, with areal bone mineral density (BMD) measurement techniques. To further the applicability of the model, we performed a parameter sensitivity study aimed at identifying the parameter uncertainties that most affect the model forecasts, in order to determine which areas of the model needed enhancement to reduce uncertainty. Methods: The bone fracture risk model (BFxRM), originally published in (Nelson et al), is a probabilistic model that can assess the risk of astronaut bone fracture. This is accomplished by utilizing biomechanical models to assess the applied loads; utilizing models of spaceflight BMD loss in at-risk skeletal locations; quantifying bone strength through a relationship between areal BMD and bone failure load; and relating fracture risk index (FRI), the ratio of applied load to bone strength, to fracture probability. There are many factors associated with these calculations including
Sensitive KIT D816V mutation analysis of blood as a diagnostic test in mastocytosis
DEFF Research Database (Denmark)
Kielsgaard Kristensen, Thomas; Vestergaard, Hanne; Bindslev-Jensen, Carsten
2014-01-01
The recent progress in sensitive KIT D816V mutation analysis suggests that mutation analysis of peripheral blood (PB) represents a promising diagnostic test in mastocytosis. However, there is a need for systematic assessment of the analytical sensitivity and specificity of the approach in order to establish its value in clinical use. We therefore evaluated sensitive KIT D816V mutation analysis of PB as a diagnostic test in an entire case-series of adults with mastocytosis. We demonstrate for the first time that by using a sufficiently sensitive KIT D816V mutation analysis, it is possible to detect the mutation in PB in nearly all adult mastocytosis patients. The mutation was detected in PB in 78 of 83 systemic mastocytosis (94%) and 3 of 4 cutaneous mastocytosis patients (75%). The test was 100% specific as determined by analysis of clinically relevant control patients who all tested negative. Mutation...
Shotgun lipidomic analysis of chemically sulfated sterols compromises analytical sensitivity
DEFF Research Database (Denmark)
Casanovas, Albert; Hannibal-Bach, Hans Kristian; Jensen, Ole Nørregaard
2014-01-01
Shotgun lipidomics affords comprehensive and quantitative analysis of lipid species in cells and tissues at high throughput [1-5]. The methodology is based on direct infusion of lipid extracts by electrospray ionization (ESI) combined with tandem mass spectrometry (MS/MS) and/or high resolution F... Sterols exhibit low ionization efficiency in ESI [7]. For this reason, chemical derivatization procedures including acetylation [8] or sulfation [9] are commonly implemented to facilitate ionization, detection and quantification of sterols for global lipidome analysis [1-3, 10].
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
Energy Technology Data Exchange (ETDEWEB)
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
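The variance-based first-order index at the heart of such methods can be estimated with a simple pick-freeze Monte Carlo scheme. The linear toy model below (whose first-order index for x1 is analytically 9/10) is an illustrative assumption, not the Hanford flow-and-transport model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

def model(x1, x2):
    return 3.0 * x1 + 1.0 * x2   # toy model with known variance shares

# Pick-freeze estimator of the first-order Sobol index of x1:
# correlate outputs that share x1 but use independent draws of x2.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x2b = rng.normal(size=n)
y, yb = model(x1, x2), model(x1, x2b)
s1 = np.cov(y, yb)[0, 1] / np.var(y)   # approx. 9 / (9 + 1) = 0.9
```

Grouping inputs, as the hierarchical framework above does, amounts to computing such indices for blocks of variables at once, which is what keeps the number of required model runs manageable.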
Zhang, Hong-Xuan; Goutsias, John
2011-03-21
Sensitivity analysis is a valuable task for assessing the effects of biological variability on cellular behavior. Available techniques require knowledge of nominal parameter values, which cannot be determined accurately due to experimental uncertainty typical to problems of systems biology. As a consequence, the practical use of existing sensitivity analysis techniques may be seriously hampered by the effects of unpredictable experimental variability. To address this problem, we propose here a probabilistic approach to sensitivity analysis of biochemical reaction systems that explicitly models experimental variability and effectively reduces the impact of this type of uncertainty on the results. The proposed approach employs a recently introduced variance-based method to sensitivity analysis of biochemical reaction systems [Zhang et al., J. Chem. Phys. 134, 094101 (2009)] and leads to a technique that can be effectively used to accommodate appreciable levels of experimental variability. We discuss three numerical techniques for evaluating the sensitivity indices associated with the new method, which include Monte Carlo estimation, derivative approximation, and dimensionality reduction based on orthonormal Hermite approximation. By employing a computational model of the epidermal growth factor receptor signaling pathway, we demonstrate that the proposed technique can greatly reduce the effect of experimental variability on variance-based sensitivity analysis results. We expect that, in cases of appreciable experimental variability, the new method can lead to substantial improvements over existing sensitivity analysis techniques.
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy field-of-views. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method using the new channel ranking system on CO2 retrieval.
Turbine blade temperature calculation and life estimation - a sensitivity analysis
Directory of Open Access Journals (Sweden)
Majid Rezazadeh Reyhani
2013-06-01
Full Text Available The overall operating cost of modern gas turbines is greatly influenced by the durability of hot section components operating at high temperatures. In turbine operating conditions, some defects may occur which can decrease hot section life. In the present paper, methods used for calculating blade temperature and life are demonstrated and validated. Using these methods, a set of sensitivity analyses on the parameters affecting the temperature and life of a high pressure, high temperature turbine first stage blade is carried out. The investigated uncertainties are: (1) blade coating thickness, (2) coolant inlet pressure and temperature (as a result of the secondary air system), and (3) gas turbine load variation. Results show that increasing thermal barrier coating thickness by 3 times leads to a rise in blade life by 9 times. In addition, considering inlet cooling temperature and pressure, deviation in temperature has a greater effect on blade life. One of the interesting points that can be realized from the results is that 300 hours of operation at 70% load can be equal to one hour of operation at base load.
A simple, sensitive graphical method of treating thermogravimetric analysis data
Abraham Broido
1969-01-01
Thermogravimetric Analysis (TGA) is finding increasing utility in investigations of the pyrolysis and combustion behavior of materials. Although a theoretical treatment of the TGA behavior of an idealized reaction is relatively straightforward, major complications can be introduced when the reactions are complex, e.g., in the pyrolysis of cellulose, and when...
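Broido's graphical method plots ln(ln(1/y)) against 1/T, where y is the fraction of weight remaining, so that for a first-order decomposition the slope is -Ea/R. In the sketch below the synthetic data are generated directly from that linearised relation with an assumed activation energy, so the fit simply recovers it:

```python
import numpy as np

R = 8.314        # gas constant, J/(mol*K)
Ea_true = 120e3  # assumed activation energy for the synthetic data, J/mol

# Synthetic fraction-remaining data generated from the linearised
# relation  ln(ln(1/y)) = -(Ea/R)*(1/T) + const.
T = np.linspace(550, 650, 21)        # temperature, K
lnln = -(Ea_true / R) / T + 24.0
y = np.exp(-np.exp(lnln))            # fraction of weight remaining

# Broido plot: regress ln(ln(1/y)) against 1/T; the slope gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(np.log(1.0 / y)), 1)
Ea_est = -slope * R                  # recovers ~120 kJ/mol
```

With real TGA data the plot would show scatter and possible curvature, and departures from linearity are one sign of the complex, multi-step reactions the abstract mentions.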
Sensitivity based reduced approaches for structural reliability analysis
Indian Academy of Sciences (India)
captured by a safety-factor based approach due to the intricate nonlinear relationships between the system parameters and the natural frequencies. For these reasons a scientific and systematic approach is required to predict the probability of failure of a structure at the design stage. Probabilistic structural reliability analysis ...
Sensitivity Analysis of Down Woody Material Data Processing Routines
Christopher W. Woodall; Duncan C. Lutes
2005-01-01
Weight per unit area (load) estimates of Down Woody Material (DWM) are the most common requests from users of the USDA Forest Service's Forest Inventory and Analysis (FIA) program's DWM inventory. Estimating DWM loads requires the uniform compilation of DWM transect data for the entire United States. DWM weights may vary by species, level of decay, woody...
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems. The method is applied to a half car with a two...
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2013-01-01
applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems. The method is applied to a half car with a two...
Heuristic Kalman algorithm for solving optimization problems.
Toscano, Rosario; Lyonnet, Patrick
2009-10-01
The main objective of this paper is to present a new optimization approach, which we call the heuristic Kalman algorithm (HKA). We propose it as a viable approach for solving continuous nonconvex optimization problems. The principle of the proposed approach is to consider the optimization problem explicitly as a measurement process designed to produce an estimate of the optimum. A specific procedure, based on the Kalman method, was developed to improve the quality of the estimate obtained through the measurement process. The efficiency of HKA is evaluated in detail through several nonconvex test problems, both unconstrained and constrained. The results are then compared to those obtained via other metaheuristics. These numerical experiments show that HKA has considerable potential for solving nonconvex optimization problems, notably with respect to computation time and success ratio.
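The measurement-process idea described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' exact formulation: a Gaussian "estimator" generates candidates, the best candidates serve as a noisy "measurement" of the optimum, and a Kalman-like gain blends them into the current estimate. All parameter names and the uncertainty-shrinkage rule are assumptions made for brevity.

```python
import random

def hka_minimize(f, dim, iters=200, n_samples=30, n_best=5, alpha=0.7, seed=0):
    """Minimal HKA-style search (illustrative simplification).

    A Gaussian estimator (mean m, std s) samples candidate points; the best
    candidates act as a noisy measurement of the optimum, and a Kalman-like
    gain pulls the estimator toward them.
    """
    rng = random.Random(seed)
    m = [0.0] * dim          # current estimate of the optimum
    s = [2.0] * dim          # current uncertainty (std dev) per coordinate
    for _ in range(iters):
        pts = [[rng.gauss(m[d], s[d]) for d in range(dim)]
               for _ in range(n_samples)]
        pts.sort(key=f)                                  # rank by objective
        best = pts[:n_best]
        for d in range(dim):
            meas = sum(p[d] for p in best) / n_best      # "measurement"
            var = sum((p[d] - meas) ** 2 for p in best) / n_best
            k = s[d] ** 2 / (s[d] ** 2 + var + 1e-12)    # Kalman-like gain
            m[d] += k * (meas - m[d])                    # state update
            s[d] = max(alpha * s[d], 1e-6)               # shrink uncertainty
    return m, f(m)
```

On a simple shifted sphere function the estimator contracts onto the minimizer; the geometric shrinkage of `s` stands in for the paper's variance update.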
A Heuristic for Improving Transmedia Exhibition Experience
DEFF Research Database (Denmark)
Selvadurai, Vashanth; Rosenstand, Claus Andreas Foss
2017-01-01
The area of interest is transmedia experiences in exhibitions. The research question is: How to involve visitors in a transmedia experience for an existing exhibition, which bridges the pre-, during- and post-experience? Research through design and action research are the methods used to design and reflect on a transmedia experience for an existing exhibition. This is framed with literature about exhibitions and transmedia, and analyzed with quantitative data from a case study of visitors in the exhibition; this is organizationally contextualized. The contribution covers a significant gap in the scientific field of designing transmedia experiences in an exhibition context that links the pre- and post-activities to the actual visit (during-activities). The result of this study is a preliminary heuristic for establishing a relation between platform and content complexity in transmedia exhibitions.
Which preventive interventions effectively enhance depressed mothers' sensitivity? A meta-analysis
Kersten, L.E.; Hosman, C.M.H.; Riksen-Walraven, J.M.A.; Doesum, K.T.M. van; Hoefnagels, C.C.J.
2011-01-01
Improving depressed mothers' sensitivity is assumed to be a key element in preventing adverse outcomes for children of such mothers. This meta-analysis examines the short-term effectiveness of preventive interventions in terms of enhancing depressed mothers' sensitivity toward their child and
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Sensitivity analysis explains quasi-one-dimensional current transport in two-dimensional materials
DEFF Research Database (Denmark)
Boll, Mads; Lotz, Mikkel Rønne; Hansen, Ole
2014-01-01
The sensitivity analysis presents a formal definition of quasi-1D current transport, which was recently observed experimentally in chemical-vapor-deposition graphene. Our numerical model for calculating sensitivity is verified by comparing the model to analytical calculations based on conformal mapping...
Shull, Forrest; Seaman, Carolyn; Feldman, Raimund; Haingaertner, Ralf; Regardie, Myrna
2008-01-01
In 2008, we continued analyzing the inspection data in an effort to better understand the applicability and effect of the inspection heuristics on inspection outcomes. Our research goals during this period are:
1. Investigate the effect of anomalies in the dataset (e.g., the very large meeting-length values for some inspections) on our results.
2. Investigate the effect of the heuristics on other inspection outcome variables (e.g., effort).
3. Investigate whether the recommended ranges can be modified to give inspection planners more flexibility without sacrificing effectiveness.
4. Investigate possible refinements or modifications to the heuristics for specific subdomains (partitioned, e.g., by size, domain, or Center).
This memo reports our results to date towards addressing these goals. In the next section, the first goal is addressed by describing the types of anomalies we have found in our dataset, how we have addressed them, and the effect of these changes on our previously reported results. In the following section, on "methodology", we describe the analyses we have conducted to address the other three goals; the results of these analyses are described in the "results" section. Finally, we conclude with future plans for continuing our investigation.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model results, while RMN, RS, and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer, and vegetation parameters, CCC, CRM, and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with a deviation of less than 10% during 2005-2010. These results provide a direct reference for the AnnAGNPS model's parameter selection and calibration. The runoff simulation results of the study area also proved that the sensitivity analysis was practicable for parameter adjustment, showed the adaptability of the model to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
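The one-at-a-time perturbation method used in the study above can be sketched generically: perturb each parameter by a fixed relative amount and record the relative change in a scalar model output. The index definition below, S_i = (dY/Y)/(dp/p), is a common normalized form and an assumption here, not necessarily the exact index used by the authors.

```python
def perturbation_sensitivity(model, params, delta=0.1):
    """One-at-a-time relative sensitivity indices (sketch).

    `model` maps a dict of parameters to a scalar output. Each parameter is
    perturbed by relative amount `delta` in turn and the normalized index
    S_i = (dY/Y) / (dp/p) is returned per parameter. Assumes the baseline
    output and parameter values are nonzero.
    """
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)       # +10% perturbation
        sens[name] = ((model(perturbed) - base) / base) / delta
    return sens
```

For a model Y = a^2 * b, this yields indices near 2 for `a` and exactly 1 for `b`, matching the power-law exponents.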
Development of Ultra-sensitive Laser Spectroscopic Analysis Technology
Energy Technology Data Exchange (ETDEWEB)
Cha, H. K.; Kim, D. H.; Song, K. S. (and others)
2007-04-15
Laser spectroscopic analysis technology has three distinct merits in detecting various nuclides found in nuclear fields. High selectivity, originating from the small bandwidth of tunable lasers, makes it possible to distinguish various kinds of isotopes and isomers. The high intensity of a focused laser beam makes it possible to analyze ultratrace amounts. Remote delivery of the laser beam improves the safety of workers exposed to dangerous environments. It can also be applied to remote sensing of environmental pollution.
Sensitivity Analysis and Insights into Hydrological Processes and Uncertainty at Different Scales
Haghnegahdar, A.; Razavi, S.; Wheater, H. S.; Gupta, H. V.
2015-12-01
Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, and conducting model calibration and uncertainty assessment. Numerous techniques have been used in environmental modelling studies for sensitivity analysis. However, it is often overlooked that the scale of the modelling study and the metric choice can significantly change the assessment of model sensitivity and uncertainty. In order to identify important hydrological processes across various scales, we conducted a multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three different hydrological models, HydroGeoSphere (HGS), Soil and Water Assessment Tool (SWAT), and Modélisation Environmentale-Surface et Hydrologie (MESH). Models were applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected based on various hydrograph characteristics such as high flows, low flows, and volume. We demonstrate how the scale of the case study and the choice of sensitivity metric(s) can change our assessment of sensitivity and uncertainty. We present some guidelines to better align the metric choice with the objective and scale of a modelling study.
A Simplified Matrix Formulation for Sensitivity Analysis of Hidden Markov Models
Directory of Open Access Journals (Sweden)
Seifemichael B. Amsalu
2017-08-01
In this paper, a new algorithm for sensitivity analysis of discrete hidden Markov models (HMMs) is proposed. Sensitivity analysis is a general technique for investigating the robustness of the output of a system model. Sensitivity analysis of probabilistic networks has recently been studied extensively. This has resulted in the development of mathematical relations between a parameter and an output probability of interest and also methods for establishing the effects of parameter variations on decisions. Sensitivity analysis in HMMs has usually been performed by taking small perturbations in parameter values and re-computing the output probability of interest. As recent studies show, the sensitivity analysis of an HMM can be performed using a functional relationship that describes how an output probability varies as the network’s parameters of interest change. To derive this sensitivity function, existing Bayesian network algorithms have been employed for HMMs. These algorithms are computationally inefficient as the length of the observation sequence and the number of parameters increase. In this study, a simplified efficient matrix-based algorithm for computing the coefficients of the sensitivity function for all hidden states and all time steps is proposed and an example is presented.
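The baseline approach the paper above improves upon, perturbing a parameter and re-computing the output probability, is easy to sketch with the standard forward algorithm. The finite-difference derivative below is illustrative; it perturbs a single transition entry without co-varying the rest of its row to keep it stochastic, a simplification noted in the comments.

```python
def forward_prob(A, B, pi, obs):
    """Forward-algorithm probability P(obs) for a discrete HMM with
    transition matrix A, emission matrix B, and initial distribution pi."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def output_sensitivity(A, B, pi, obs, i, j, eps=1e-6):
    """Perturbation-style sensitivity dP(obs)/dA[i][j]: perturb one
    transition parameter and recompute P(obs) (row i is not renormalized
    here, a simplification of a real HMM sensitivity analysis)."""
    base = forward_prob(A, B, pi, obs)
    A2 = [row[:] for row in A]
    A2[i][j] += eps
    return (forward_prob(A2, B, pi, obs) - base) / eps
```

Because P(obs) is multilinear in each transition entry, the finite difference here recovers the exact coefficient of the sensitivity function for a single parameter.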
Directory of Open Access Journals (Sweden)
Skvortsova Svetlana Valerevna
2013-04-01
Purpose: to identify key elements of employing heuristic teaching methods in foreign theories of education. Methodology: a theoretical analysis of psychological and pedagogical literature on the issues of teaching creativity. Results: on the basis of the theoretical analysis, the main criteria of heuristic teaching methods are revealed: repeatability; absence of a single decision; need for group discussions; flexibility; wide variability; promotion of creative teaching; and influence on the character of knowledge and the relationships between the teacher and the students. Heuristic teaching methods are divided into: methods of working with information, methods of working with problems, methods of research, and methods of analyzing decisions. The main characteristics of each group of methods are presented and recommendations for applying them in teaching creativity are given. Practical implications: the system of education.
The application of sensitivity analysis to models of large scale physiological systems
Leonard, J. I.
1974-01-01
A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
DEFF Research Database (Denmark)
Prunescu, Remus Mihail; Sin, Gürkan
2014-01-01
This study presents the uncertainty and sensitivity analysis of a lignocellulosic enzymatic hydrolysis model considering both model and feed parameters as sources of uncertainty. The dynamic model is parametrized to accommodate various types of biomass, and different enzymatic complexes...
Janssen, R.; Rietveld, P.
1989-01-01
Inclusion of evaluation methods in decision support systems opens the way to extensive sensitivity analysis. In this article, new methods for sensitivity analysis are developed and applied to the siting of nuclear power plants in the Netherlands.
Cognitive Load During Route Selection Increases Reliance on Spatial Heuristics.
Brunyé, Tad T; Martis, Shaina B; Taylor, Holly A
2017-03-22
Planning routes from maps involves perceiving the symbolic environment, identifying alternate routes, and applying explicit strategies and implicit heuristics to select an option. Two implicit heuristics have received considerable attention: the southern route preference and the initial segment strategy. The current study tested a prediction from decision-making theory: that increasing cognitive load during route planning will increase reliance on these heuristics. In two experiments, participants planned routes while under conditions of minimal (0-back) or high (2-back) working memory load. In Experiment 1, we examined how memory load impacts the southern route heuristic. In Experiment 2, we examined how memory load impacts the initial segment heuristic. Results replicated earlier findings demonstrating a southern route preference (Experiment 1) and an initial segment strategy (Experiment 2), and further demonstrated that evidence for heuristic reliance is more likely under conditions of concurrent working memory load. Furthermore, the extent to which participants maintained efficient route selection latencies in the 2-back condition predicted the magnitude of this effect. Together, the results demonstrate that working memory load increases the application of heuristics during spatial decision making, particularly when participants attempt to maintain quick decisions while managing concurrent task demands.
The Recognition Heuristic: A Review of Theory and Tests
Directory of Open Access Journals (Sweden)
Thorsten ePachur
2011-07-01
The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; (d) that its use can produce the less-is-more effect: the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference).
The Recognition Heuristic: A Review of Theory and Tests
Pachur, Thorsten; Todd, Peter M.; Gigerenzer, Gerd; Schooler, Lael J.; Goldstein, Daniel G.
2011-01-01
The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; (d) that its use can produce the less-is-more effect – the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference). PMID:21779266
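The decision rule reviewed above is simple enough to state as code: when exactly one of two objects is recognized, infer that the recognized one scores higher on the criterion; otherwise fall back on further knowledge. This sketch is a textbook rendering for illustration, with the fallback behavior an assumption of this example.

```python
def recognition_heuristic(a, b, recognized, knowledge=None):
    """Two-alternative inference via the recognition heuristic (sketch).

    `recognized` is the set of recognized objects; `knowledge` is an
    optional fallback comparator used when recognition does not
    discriminate (both or neither object recognized)."""
    a_rec, b_rec = a in recognized, b in recognized
    if a_rec and not b_rec:
        return a          # recognition discriminates: choose recognized object
    if b_rec and not a_rec:
        return b
    if knowledge is not None:
        return knowledge(a, b)   # fall back on further cue knowledge
    return a              # deterministic placeholder for a guess
```

The less-is-more effect arises because this rule can only be applied when some objects are unrecognized; with full recognition it never discriminates.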
Reliability and Sensitivity Analysis of Cast Iron Water Pipes for Agricultural Food Irrigation
Yanling Ni
2014-01-01
This study aims to investigate the reliability and sensitivity of cast iron water pipes for agricultural food irrigation. The Monte Carlo simulation method is used for fracture assessment and reliability analysis of cast iron pipes for agricultural food irrigation. Fracture toughness is considered as a limit state function for corrosion-affected cast iron pipes. The influence of failure mode on the probability of pipe failure is then discussed. Sensitivity analysis is also carried out...
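The Monte Carlo reliability approach described above can be sketched as follows. The limit state g = K_c - K_applied (failure when g < 0) follows the fracture-toughness formulation, but all distributions and the stress-intensity expression below are illustrative placeholders, not the actual model or data of the study.

```python
import random

def failure_probability(n=20000, seed=1):
    """Monte Carlo estimate of failure probability for a corroded pipe
    (sketch). Failure when the limit state g = K_c - K_applied < 0.
    Distributions and the stress-intensity form are illustrative only."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        k_c = rng.gauss(12.0, 1.5)        # fracture toughness (illustrative)
        stress = rng.gauss(40.0, 6.0)     # hoop stress (illustrative)
        depth = rng.gauss(0.01, 0.003)    # corrosion pit depth (illustrative)
        # simplified stress intensity at the pit: K = Y * sigma * sqrt(pi*a)
        k_applied = 1.12 * stress * (3.14159 * max(depth, 0.0)) ** 0.5
        if k_c - k_applied < 0.0:
            failures += 1
    return failures / n
```

A one-at-a-time perturbation of the input distributions around this estimator is then the natural next step for the sensitivity analysis the abstract mentions.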
Directory of Open Access Journals (Sweden)
Emre Sert
2017-06-01
In summary, within the scope of this work, unlike previous studies, experiments involving physical tests (i.e., tilt table, fishhook, and cornering) and numerical calculations are included. In addition, verification of the virtual model, parametric sensitivity analysis, and comparison of the virtual test with the physical test are performed. Because of the rigorous verification, sensitivity analysis, and validation process, the results can be considered more reliable than those of previous studies.
Sensitivity analysis of FDS 6 results for nuclear power plants
Energy Technology Data Exchange (ETDEWEB)
Alvear, Daniel; Puente, Eduardo; Abreu, Orlando [Cantabria Univ., Santander (Spain). Group GIDAI - Fire Safety-Research and Technology; Peco, Julian [Consejo de Seguridad Nuclear, Madrid (Spain)
2015-12-15
The Spanish standard ''Instruction IS-30, Rev. 1'' (February 21, 2013) allows new risk-informed, performance-based design (PBD) approaches for demonstrating the safe shutdown capability in case of fire in nuclear power plants. In this sense, fire computer models have become an interesting tool to study real fire scenarios. Such models use a set of input parameters that define the features of the physical domain, materials, radiation, turbulence, etc. This paper analyses the impact of the grid size and of different sub-models of the fire simulation code FDS, version 6, with the objective of evaluating and defining their relative weight in the final simulation results. For the grid-size analysis, two scenarios of different scale were selected: the bench-scale test PENLIGHT and a large-scale test similar to Appendix B of NUREG-1934 (17 m x 10 m x 4.6 m, with an ignition source of 2 MW and 16 cable trays). For the sub-model analysis, the PRS-INT4 real-scale configuration of the INTEGRAL experimental campaign of the international OECD PRISME Project was used. The results offer relevant data for users and show the critical parameters that must be selected properly to guarantee the quality of the simulations.
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
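The 'delta' (correction) form mentioned above can be illustrated with a toy linear solver: instead of iterating on A x = b directly, one repeatedly solves M dx = r for a correction dx, where r = b - A x is the current residual and M approximates A. Here M is just diag(A) (a Jacobi-style preconditioner), an illustrative stand-in for the paper's approximate-factorization operator.

```python
def incremental_solve(A, b, iters=100):
    """Toy 'delta'/correction-form iteration for A x = b (sketch).

    Each pass computes the residual r = b - A x and solves M * dx = r with
    M = diag(A), then updates x += dx. Converges for diagonally dominant A;
    the real scheme uses an approximate factorization of A instead."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = [r[i] / A[i][i] for i in range(n)]   # M = diag(A)
        x = [x[i] + dx[i] for i in range(n)]
    return x
```

The advantage of the correction form is that any convenient, well-conditioned approximation M can drive the iteration while the residual is always computed with the exact operator A.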
Im, Hyungbin; Bae, Dae Sung; Chung, Jintai
2012-04-01
This paper presents a design sensitivity analysis of the dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion, which consider the mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differentiation method. From the sensitivity equations along with the equations of motion, the time responses for the sensitivity analysis were obtained by using the Newmark time integration method. The sensitivities of the motor performances such as the electromagnetic torque, rotating speed, and vibration level were analyzed for the six design parameters of rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of the coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness. It was also found that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor resulted in better performance. The newly designed motor showed an improved torque, rotating speed, and vibration level.
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
A quantum heuristic algorithm for the traveling salesman problem
Bang, Jeongho; Ryu, Junghee; Lee, Changhyoup; Yoo, Seokwon; Lim, James; Lee, Jinhyoung
2012-12-01
We propose a quantum heuristic algorithm to solve the traveling salesman problem by generalizing the Grover search. Sufficient conditions are derived to greatly enhance the probability of finding the tours with the cheapest costs, reaching almost unity. These conditions are characterized by the statistical properties of the tour costs and are shown to be automatically satisfied in the large-number limit of cities. In particular, for a continuous distribution of tours along the cost, we show that the quantum heuristic algorithm exhibits a quadratic speedup compared to its classical counterpart.
Solving Large Clustering Problems with Meta-Heuristic Search
DEFF Research Database (Denmark)
Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen
In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta-heuristic...
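A simulated annealing approach to the minimum sum-of-squares criterion, as compared in the paper above, can be sketched as follows. The move (reassigning one point to another cluster), the cooling schedule, and all parameter values are illustrative assumptions, not the authors' implementation; 1-D points keep the example short.

```python
import math
import random

def sse(points, assign, k):
    """Minimum sum-of-squares criterion: total squared distance of each
    point to its cluster centroid (1-D points for brevity)."""
    total = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assign) if a == c]
        if members:
            mu = sum(members) / len(members)
            total += sum((p - mu) ** 2 for p in members)
    return total

def sa_cluster(points, k, iters=5000, t0=1.0, cooling=0.999, seed=0):
    """Simulated annealing sketch: propose moving one point to another
    cluster; accept worse moves with temperature-dependent probability."""
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in points]
    cost, t = sse(points, assign, k), t0
    for _ in range(iters):
        i, c = rng.randrange(len(points)), rng.randrange(k)
        old = assign[i]
        assign[i] = c
        new_cost = sse(points, assign, k)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost               # accept the move
        else:
            assign[i] = old               # reject and restore
        t *= cooling                      # geometric cooling
    return assign, cost
```

On two well-separated groups the search settles into the natural partition; a tabu search variant would instead forbid recently reversed reassignments rather than accept worsening moves probabilistically.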
On the Importance of Elimination Heuristics in Lazy Propagation
DEFF Research Database (Denmark)
Madsen, Anders Læsø; Butz, Cory J.
2012-01-01
elimination orders on-line. This paper considers the importance of elimination heuristics in LP when using Variable Elimination (VE) as the message and single-marginal computation algorithm. It considers well-known cost measures for selecting the next variable to eliminate and a new cost measure. The empirical evaluation examines different heuristics as well as sequences of cost measures, and was conducted on real-world and randomly generated Bayesian networks. The results show that in most cases performance is robust relative to the cost measure used and in some cases the elimination heuristic can have...
Motor heuristics and embodied choices: how to choose and act.
Raab, Markus
2017-08-01
Human performance requires choosing what to do and how to do it. The goal of this theoretical contribution is to advance understanding of how the motor and cognitive components of choices are intertwined. From a holistic perspective, I extend simple heuristics that have been tested in cognitive tasks to motor tasks, coining the term motor heuristics. Similarly, I extend the concept of embodied cognition, which has been tested in simple sensorimotor processes that change decisions, to complex sport behavior, coining the term embodied choices. Thus both motor heuristics and embodied choices explain complex behavior such as that studied in sport and exercise psychology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Heuristic Portfolio Trading Rules with Capital Gain Taxes
DEFF Research Database (Denmark)
Fischer, Marcel; Gallmeyer, Michael
2016-01-01
We study the out-of-sample performance of portfolio trading strategies used when an investor faces capital gain taxation and proportional transaction costs. Overlaying simple tax trading heuristics on trading strategies improves out-of-sample performance. For medium to large transaction costs, no trading strategy can outperform a 1/N trading strategy augmented with a tax heuristic, not even the most tax- and transaction-cost-efficient buy-and-hold strategy. Overall, the best strategy is 1/N augmented with a heuristic that allows for a fixed deviation in absolute portfolio weights. Our results thus show that the best trading strategies balance diversification considerations and tax considerations.
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
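One of the "global sensitivity" formalizations surveyed above is the variance-based (Sobol) first-order index, which can be estimated by the pick-freeze Monte Carlo scheme sketched below. The i.i.d. Uniform(0,1) inputs and sample size are assumptions of this example; it is meant only to make one competing definition of sensitivity concrete.

```python
import random

def sobol_first_order(f, dim, i, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index of
    input i: S_i = Cov(f(X), f(X')) / Var(f(X)), where X' shares only
    coordinate i with X and inputs are i.i.d. Uniform(0,1)."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        xp = [rng.random() for _ in range(dim)]
        xp[i] = x[i]                      # "freeze" coordinate i
        ya.append(f(x))
        yb.append(f(xp))
    ma, mb = sum(ya) / n, sum(yb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ya, yb)) / n
    var = sum((a - ma) ** 2 for a in ya) / n
    return cov / var
```

For an additive model f = 3*x1 + x2 the index of x1 is exactly 9/10; note that, as the paper argues, a derivative-based local index at a single point would rank the same inputs differently for nonlinear responses.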
Abusam, A.A.A.; Keesman, K.J.; Straten, van G.; Spanjers, H.; Meinema, K.
2001-01-01
This paper demonstrates the application of the factorial sensitivity analysis methodology in studying the influence of variations in stoichiometric, kinetic and operating parameters on the performance indices of an oxidation ditch simulation model (benchmark). Factorial sensitivity analysis
DEFF Research Database (Denmark)
Person, O.; Daalhuizen, Jaap; Gattol, V.
2013-01-01
In the present paper, we study the reported use of systematic and heuristic methods for 304 students enrolled in a master-level course on design theory and methodology. What to teach design and engineering students about methods is an important topic for discussion. One reason for this is that the experiences of design educators when using methods in their teaching do not always sit well with how methods are portrayed in the literature. Based on self-reports of the students, we study the use of systematic and heuristic methods for the five activities in the basic design cycle: (1) analysis, (2) synthesis, (3) simulation, (4) evaluation, and (5) decision-making. The results of our study suggest that systematic and heuristic methods fulfil different roles for the students when designing. The students reported to use heuristic methods significantly more for synthesis, while they reported to use...
Bashiri, Mahdi; Karimi, Hossein
2012-07-01
Quadratic assignment problem (QAP) is a well-known problem in facility location and layout. It belongs to the NP-complete class. There are many heuristic and meta-heuristic methods presented for QAP in the literature. In this paper, we applied 2-opt, greedy 2-opt, 3-opt, greedy 3-opt, and VNZ as heuristic methods and tabu search (TS), simulated annealing, and particle swarm optimization as meta-heuristic methods for the QAP. This research compares the relative percentage deviation of these solution qualities from the best known solution introduced in QAPLIB. Furthermore, a tuning method is applied for meta-heuristic parameters. Results indicate that TS is the best in 31% of QAPs, the IFLS method from the literature is the best in 58% of QAPs, and the two methods are the same in 11% of the test problems. Also, TS has a better computational time among heuristic and meta-heuristic methods.
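The 2-opt heuristic named in the abstract is a pairwise-exchange local search; a minimal sketch on a made-up 3-facility instance (not a QAPLIB problem) might look like this:

```python
import itertools

def qap_cost(perm, flow, dist):
    """QAP cost: sum of flow[i][j] * dist[location(i)][location(j)]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def two_opt_qap(perm, flow, dist):
    """Greedy pairwise-exchange local search: swap two facility
    assignments whenever that lowers total cost, until no swap helps."""
    perm = list(perm)
    best = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            cost = qap_cost(perm, flow, dist)
            if cost < best:
                best, improved = cost, True
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo the swap
    return perm, best

# Tiny illustrative instance (made up, not from QAPLIB).
flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]
print(two_opt_qap([0, 1, 2], flow, dist))  # ([0, 1, 2], 22): already locally optimal
```

The "greedy 2-opt" variant in the abstract would accept the first improving swap and restart; 3-opt extends the neighborhood to triple exchanges.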
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model
Energy Technology Data Exchange (ETDEWEB)
Scott, M.; Green, P.L. [Dynamics Research Group, Department of Mechanical Engineering, University of Sheffield, Mappin Street, Sheffield S1 3JD (United Kingdom); O’Driscoll, D. [EDF Energy, Barnett Way, Barnwood, Gloucester GL4 3RS (United Kingdom); Worden, K.; Sims, N.D. [Dynamics Research Group, Department of Mechanical Engineering, University of Sheffield, Mappin Street, Sheffield S1 3JD (United Kingdom)
2016-08-15
Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.
Hanley, James A
2008-01-01
Most survival analysis textbooks explain how the hazard ratio parameters in Cox's life table regression model are estimated. Fewer explain how the components of the nonparametric baseline survivor function are derived. Those that do often relegate the explanation to an "advanced" section and merely present the components as algebraic or iterative solutions to estimating equations. None comment on the structure of these estimators. This note brings out a heuristic representation that may help to de-mystify the structure.
Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses Experiences
Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid
2017-01-01
Background: Considering that many nursing actions affect other people’s health and life, sensitivity to ethics in nursing practice is highly important to ethical leaders as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis in 2015. Data were collected using deep and semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. Main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564
[Local sensitivity and its stationarity analysis for urban rainfall runoff modelling].
Lin, Jie; Huang, Jin-Liang; Du, Peng-Fei; Tu, Zhen-Shun; Li, Qing-Sheng
2010-09-01
Sensitivity analysis of urban-runoff simulation is a crucial procedure for parameter identification and uncertainty analysis. Local sensitivity analysis using the Morris screening method was carried out for urban rainfall-runoff modelling based on the Storm Water Management Model (SWMM). The results showed that Area, %Imperv and Dstore-Imperv are the most sensitive parameters for both total runoff volume and peak flow. For total runoff volume, the sensitivity indices of Area, %Imperv and Dstore-Imperv were 0.46 to 1.0, 0.61 to 1.0 and -0.050 to -5.9, respectively; for peak runoff, they were 0.48 to 0.89, 0.59 to 0.83 and 0 to -9.6, respectively. The largest Morris sensitivity indices for all parameters, with regard to both total runoff volume and peak flow, appeared in the rainfall event with the least rainfall, and smaller indices occurred in the events with heavier rainfall. Furthermore, there is considerable variability in the sensitivity indices across rainfall events: the coefficients of variation of %Zero-Imperv are the largest among all parameters for total runoff volume and peak flow, namely 221.24% and 228.10%, whereas those of conductivity are the smallest, namely 0.
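The Morris screening method used above ranks parameters by their "elementary effects"; a simplified sketch, with a made-up linear "runoff" model standing in for SWMM, might look like:

```python
import random

def elementary_effects(f, bounds, delta=0.1, trajectories=50, seed=1):
    """Morris-style screening sketch: for each parameter i, collect
    elementary effects (f(x + step*e_i) - f(x)) / step at random base
    points, then summarize by the mean absolute effect (mu*)."""
    rng = random.Random(seed)
    k = len(bounds)
    effects = [[] for _ in range(k)]
    for _ in range(trajectories):
        x = [rng.uniform(lo, hi - delta * (hi - lo)) for lo, hi in bounds]
        fx = f(x)
        for i in range(k):
            step = delta * (bounds[i][1] - bounds[i][0])
            xi = list(x); xi[i] += step
            effects[i].append((f(xi) - fx) / step)
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Toy "runoff" model (illustrative only): output dominated by x[0].
runoff = lambda x: 5.0 * x[0] + 0.2 * x[1] + 0.01 * x[2]
mu_star = elementary_effects(runoff, [(0, 1)] * 3)
print(mu_star)  # roughly [5.0, 0.2, 0.01]: x[0] flagged as most sensitive
```

Real Morris designs use structured trajectories rather than independent base points, but the screening logic, ranking parameters by the spread and magnitude of elementary effects, is the same.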
Economic tour package model using heuristic
Rahman, Syariza Abdul; Benjamin, Aida Mauziah; Bakar, Engku Muhammad Nazri Engku Abu
2014-07-01
A tour-package is a prearranged tour that includes products and services such as food, activities, accommodation, and transportation, which are sold at a single price. Since competitiveness within the tourism industry is very high, many tour agents try to provide attractive tour-packages in order to meet tourist satisfaction as much as possible. Among the criteria considered by tourists are the number of places to be visited and the cost of the tour-package. Previous studies indicate that tourists tend to choose economical tour-packages and aim to visit as many places as they can. Thus, this study proposes a tour-package model using a heuristic approach. The aim is to find economical tour-packages and at the same time to propose as many places as possible to be visited by the tourist in a given geographical area, particularly in Langkawi Island. The proposed model considers only one starting point, where the tour starts and ends at an identified hotel. This study covers the 31 most attractive places in Langkawi Island from various categories of tourist attractions. Besides, periods for lunch and dinner are allocated in the proposed itineraries, covering 11 popular restaurants around Langkawi Island. In developing the itinerary, the proposed heuristic approach considers a time window for each site (hotel/restaurant/place) so that it represents a real-world implementation. We present three itineraries with different time constraints (1-day, 2-day and 3-day tour-packages). The aim of the economic model is to minimize the tour-package cost as much as possible by considering the entrance fee of each visited place. We compare the proposed model with the uneconomic model from our previous study, which places no limit on cost and aims to maximize the number of places visited. Comparison between the uneconomic and economic itineraries shows that the proposed model has successfully achieved the objective that...
Superiorization: an optimization heuristic for medical physics.
Herman, Gabor T; Garduno, Edgar; Davidi, Ran; Censor, Yair
2012-09-01
To describe and mathematically validate the superiorization methodology, which is a recently developed heuristic approach to optimization, and to discuss its applicability to medical physics problem formulations that specify the desired solution (of physically given or otherwise obtained constraints) by an optimization criterion. The superiorization methodology is presented as a heuristic solver for a large class of constrained optimization problems. The constraints come from the desire to produce a solution that is constraints-compatible, in the sense of meeting requirements provided by physically or otherwise obtained constraints. The underlying idea is that many iterative algorithms for finding such a solution are perturbation resilient in the sense that, even if certain kinds of changes are made at the end of each iterative step, the algorithm still produces a constraints-compatible solution. This property is exploited by using permitted changes to steer the algorithm to a solution that is not only constraints-compatible, but is also desirable according to a specified optimization criterion. The approach is very general, it is applicable to many iterative procedures and optimization criteria used in medical physics. The main practical contribution is a procedure for automatically producing from any given iterative algorithm its superiorized version, which will supply solutions that are superior according to a given optimization criterion. It is shown that if the original iterative algorithm satisfies certain mathematical conditions, then the output of its superiorized version is guaranteed to be as constraints-compatible as the output of the original algorithm, but it is superior to the latter according to the optimization criterion. This intuitive description is made precise in the paper and the stated claims are rigorously proved. Superiorization is illustrated on simulated computerized tomography data of a head cross section and, in spite of its generality
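The superiorization idea (interleaving summable, objective-reducing perturbation steps with a perturbation-resilient feasibility-seeking operator) can be illustrated on a deliberately tiny problem: one linear constraint in the plane, with the squared norm as the optimization criterion. Both the constraint and the geometric step schedule below are invented for illustration:

```python
def project(x, a, b):
    """Orthogonal projection of x onto the hyperplane a.x = b."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    return [xi + (b - dot) * ai / norm2 for ai, xi in zip(a, x)]

def superiorized_feasibility(x, constraints, beta=1.0, kappa=0.99, iters=2000):
    """Feasibility-seeking projections, superiorized for the criterion
    ||x||^2: each iteration first takes a shrinking (summable) step
    toward the origin, a nonascent direction for the norm, then
    restores constraint compatibility by projecting."""
    for k in range(iters):
        step = beta * kappa ** k
        x = [xi - step * xi for xi in x]      # perturbation: reduce the norm
        for a, b in constraints:              # basic feasibility operator
            x = project(x, a, b)
    return x

line = [([1.0, 1.0], 2.0)]                # feasible set: x1 + x2 = 2
plain = project([3.0, -1.0], *line[0])    # already feasible: no movement
sup = superiorized_feasibility([3.0, -1.0], line)
print(plain, sum(v * v for v in plain))   # [3.0, -1.0], norm^2 = 10
print(sup, sum(v * v for v in sup))       # near [1.0, 1.0], norm^2 ~ 2
```

Both outputs are constraints-compatible, but the superiorized run ends at a point far superior under the norm criterion, which is exactly the claim the methodology formalizes for much larger iterative algorithms.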
Zhang, Ruihua; Guo, Hongwei; Asundi, Anand K
2016-09-20
In fringe projection profilometry, phase sensitivity is one of the important factors affecting measurement accuracy. A typical fringe projection system consists of one camera and one projector. To gain insight into its phase sensitivity, we perform a rigorous theoretical analysis of the dependence of phase sensitivities on fringe directions. We use epipolar geometry as a tool to derive the relationship between fringe distortions and depth variations of the measured surface, and further formularize phase sensitivity as a function of the angle between fringe direction and the epipolar line. The results reveal that using the fringes perpendicular to the epipolar lines enables us to achieve the maximum phase sensitivities, whereas if the fringes have directions along the epipolar lines, the phase sensitivities decline to zero. Based on these results, we suggest the optimal fringes being circular-arc-shaped and centered at the epipole, which enables us to give the best phase sensitivities over the whole fringe pattern, and the quasi-optimal fringes, being straight and perpendicular to the connecting line between the fringe pattern center and the epipole, which achieve satisfactorily high phase sensitivities over whole fringe patterns in the situation that the epipole is located far away from the fringe pattern center. The experimental results demonstrate that our analyses are practical and correct, and that our optimized fringes are effective in improving the phase sensitivities and, further, the measurement accuracies.
Monkman, Helen; Griffith, Janessa; Kushniruk, Andre W
2015-01-01
Heuristic evaluations have proven to be valuable for identifying usability issues in systems. Commonly used sets of heuristics exist; however, they may not always be the most suitable, given the specific goal of the analysis. One such example is seeking to evaluate the demands on eHealth literacy and usability of consumer health information systems. In this study, eight essential heuristics and three optional heuristics subsumed from the evidence on eHealth/health literacy and usability were tested for their utility in assessing a mobile blood pressure tracking application (app). This evaluation revealed a variety of ways the design of the app could both benefit and impede users with limited eHealth literacy. This study demonstrated the utility of a low-cost, single evaluation approach for identifying both eHealth literacy and usability issues based on existing evidence in the literature.
Deterministic oscillatory search: a new meta-heuristic optimization ...
Indian Academy of Sciences (India)
heuristic optimization; power system problem. Abstract. The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical ...
Adapting the Locales Framework for Heuristic Evaluation of Groupware
Directory of Open Access Journals (Sweden)
Saul Greenberg
2000-05-01
Full Text Available Heuristic evaluation is a rapid, cheap and effective way for identifying usability problems in single user systems. However, current heuristics do not provide guidance for discovering problems specific to groupware usability. In this paper, we take the Locales Framework and restate it as heuristics appropriate for evaluating groupware. These are: (1) Provide locales; (2) Provide awareness within locales; (3) Allow individual views; (4) Allow people to manage and stay aware of their evolving interactions; and (5) Provide a way to organize and relate locales to one another. To see if these new heuristics are useful in practice, we used them to inspect the interface of Teamwave Workplace, a commercial groupware product. We were successful in identifying the strengths of Teamwave as well as both major and minor interface problems.
Heuristic Method for Decision-Making in Common Scheduling Problems
National Research Council Canada - National Science Library
Edyta Kucharska
2017-01-01
The aim of the paper is to present a heuristic method for decision-making regarding an NP-hard scheduling problem with limitations related to tasks and the resources dependent on the current state of the process...
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
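Boundary-following is a perceptual strategy, but a loosely analogous algorithmic idea, building the tour outward from the convex-hull boundary, is easy to sketch. The city coordinates and the cheapest-insertion rule below are illustrative assumptions, not the authors' experimental model:

```python
from math import dist

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return list(pts)
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def hull_insertion_tour(pts):
    """Start from the boundary (convex hull), then splice in each
    interior city where it lengthens the tour the least."""
    tour = convex_hull(pts)
    remaining = [p for p in pts if p not in tour]
    while remaining:
        best = None
        for p in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                extra = dist(a, p) + dist(p, b) - dist(a, b)
                if best is None or extra < best[0]:
                    best = (extra, p, i)
        _, p, i = best
        tour.insert(i + 1, p)
        remaining.remove(p)
    return tour

cities = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
tour = hull_insertion_tour(cities)
print(tour)  # boundary rectangle with (2, 1) spliced into the bottom edge
```

Like human boundary-following, the sketch exploits the fact that in an optimal Euclidean tour the hull cities always appear in hull order.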
Expanding the Possibilities of AIS Data with Heuristics
Directory of Open Access Journals (Sweden)
Bjørnar Brende Smestad
2017-06-01
Full Text Available Automatic Identification System (AIS is primarily used as a tracking system for ships, but with the launch of satellites to collect these data, new and previously untested possibilities are emerging. This paper presents the development of heuristics for establishing the specific ship type using information retrieved from AIS data alone. These heuristics expand the possibilities of AIS data, as the specific ship type is vital for several transportation research cases, such as emission analyses of ship traffic and studies on slow steaming. The presented method for developing heuristics can be used for a wider range of vessels. These heuristics may form the basis of large-scale studies on ship traffic using AIS data when it is not feasible or desirable to use commercial ship data registers.
Impact of heuristics in clustering large biological networks.
Shafin, Md Kishwar; Kabir, Kazi Lutful; Ridwan, Iffatur; Anannya, Tasmiah Tamzid; Karim, Rashid Saadman; Hoque, Mohammad Mozammel; Rahman, M Sohel
2015-12-01
Traditional clustering algorithms often exhibit poor performance for large networks. On the contrary, greedy algorithms are found to be relatively efficient while uncovering functional modules from large biological networks. The quality of the clusters produced by these greedy techniques largely depends on the underlying heuristics employed. Different heuristics based on different attributes and properties perform differently in terms of the quality of the clusters produced. This motivates us to design new heuristics for clustering large networks. In this paper, we have proposed two new heuristics and analyzed the performance thereof after incorporating those with three different combinations in a recently celebrated greedy clustering algorithm named SPICi. We have extensively analyzed the effectiveness of these new variants. The results are found to be promising.
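A seed-and-grow greedy clustering of the kind SPICi popularized can be sketched roughly as follows. This is a simplified, unweighted caricature; the density threshold and tie-breaking below are illustrative choices, not SPICi's actual heuristics:

```python
def greedy_cluster(adj, density_min=0.7):
    """Greedy seed-expansion sketch: repeatedly seed from the
    best-connected unclustered node and grow the cluster with the
    neighbor most connected to it, while the cluster's edge density
    stays at or above `density_min`."""
    unclustered = set(adj)
    clusters = []
    while unclustered:
        seed = max(unclustered, key=lambda v: len(adj[v] & unclustered))
        cluster = {seed}
        while True:
            frontier = {v for u in cluster for v in adj[u]} & unclustered - cluster
            if not frontier:
                break
            best = max(frontier, key=lambda v: len(adj[v] & cluster))
            trial = cluster | {best}
            edges = sum(len(adj[v] & trial) for v in trial) / 2
            n = len(trial)
            if edges / (n * (n - 1) / 2) < density_min:
                break  # adding `best` would dilute the module
            cluster = trial
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# Two dense triangles joined by a single bridge edge (toy network).
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(greedy_cluster(adj))  # -> [{1, 2, 3}, {4, 5, 6}]
```

Swapping in a different expansion heuristic (weighted support, degree-normalized density, and so on) is exactly the kind of variation whose impact the paper measures.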
Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.
Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán
2014-03-11
While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.
The Priority Heuristic: Making Choices Without Trade-Offs
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2010-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
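The priority heuristic's stopping rule for two gain gambles can be written out directly. The outcome encoding and the degenerate representation of a sure option below are our own illustrative conventions:

```python
def priority_heuristic(a, b):
    """Priority heuristic for two gain gambles, each given as
    ((min_gain, p_min), (max_gain, p_max)). Reasons are examined in a
    fixed order and search stops at the first decisive one; the
    aspiration level is one tenth of the largest gain (Brandstatter,
    Gigerenzer & Hertwig)."""
    (amin, ap), (amax, _) = a
    (bmin, bp), (bmax, _) = b
    aspiration = 0.1 * max(amax, bmax)
    if abs(amin - bmin) >= aspiration:      # reason 1: minimum gains
        return a if amin > bmin else b
    if abs(ap - bp) >= 0.1:                 # reason 2: probabilities of the minima
        return a if ap < bp else b
    return a if amax > bmax else b          # reason 3: maximum gains

# A sure 3000 (encoded degenerately) versus an 80% chance of 4000:
# the minimum gains differ by 3000 >= 400, so the heuristic stops at
# the first reason and takes the sure option, with no trade-offs.
sure = ((3000, 1.0), (3000, 0.0))
risky = ((0, 0.2), (4000, 0.8))
print(priority_heuristic(sure, risky))  # -> the sure gamble
```

Because no reason is ever weighted against another, the heuristic naturally produces the certainty effect and the intransitivities listed in the abstract.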
Heuristic Portfolio Trading Rules with Capital Gain Taxes
DEFF Research Database (Denmark)
Fischer, Marcel; Gallmeyer, Michael
Our results show that the best trading strategy is not dominated out-of-sample by a variety of optimizing trading strategies, except the parametric portfolios of Brandt, Santa-Clara, and Valkanov (2009). With dividend and realization-based capital gain taxes, the welfare costs of the taxes are large, with the cost being as large as 30% of wealth in some cases. Overlaying simple tax trading heuristics on these trading strategies improves out-of-sample performance. In particular, the 1/N trading strategy's welfare gains improve when a variety of tax trading heuristics are also imposed. For medium to large transaction costs, no trading strategy can outperform a 1/N trading strategy augmented with a tax heuristic, not even the most tax- and transaction-cost efficient buy-and-hold strategy. Overall, the best strategy is 1/N augmented with a heuristic that allows for a fixed deviation in absolute portfolio weights.
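The fixed-deviation heuristic described in the abstract can be sketched as a no-trade band around the 1/N target. The 5% band and the full reset to equal weights below are illustrative assumptions, not the paper's calibration:

```python
def rebalance_1_over_n(weights, band=0.05):
    """Tax/transaction-cost heuristic sketch: hold the current
    portfolio unless some asset's weight drifts more than `band`
    (in absolute terms) from the 1/N target; only then trade back
    to 1/N. Staying inside the band avoids both transaction costs
    and the realization of taxable capital gains."""
    n = len(weights)
    target = 1.0 / n
    if max(abs(w - target) for w in weights) <= band:
        return list(weights)   # within tolerance: no trades
    return [target] * n        # drifted too far: rebalance fully

print(rebalance_1_over_n([0.27, 0.24, 0.26, 0.23]))  # unchanged
print(rebalance_1_over_n([0.40, 0.20, 0.20, 0.20]))  # reset to [0.25] * 4
```

The trade-off the paper studies is visible even in this sketch: a wider band saves taxes and costs but lets the portfolio drift further from the diversification benefits of 1/N.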
Heuristic and algorithmic processing in English, mathematics, and science education.
Sharps, Matthew J; Hess, Adam B; Price-Sharps, Jana L; Teh, Jane
2008-01-01
Many college students experience difficulties in basic academic skills. Recent research suggests that much of this difficulty may lie in heuristic competency--the ability to use and successfully manage general cognitive strategies. In the present study, the authors evaluated this possibility. They compared participants' performance on a practice California Basic Educational Skills Test and on a series of questions in the natural sciences with heuristic and algorithmic performance on a series of mathematics and reading comprehension exercises. Heuristic competency in mathematics was associated with better scores in science and mathematics. Verbal and algorithmic skills were associated with better reading comprehension. These results indicate the importance of including heuristic training in educational contexts and highlight the importance of a relatively domain-specific approach to questions of cognition in higher education.
Neural basis of scientific innovation induced by heuristic prototype.
Luo, Junlong; Li, Wenfu; Qiu, Jiang; Wei, Dongtao; Liu, Yijun; Zhang, Qinlin
2013-01-01
A number of major inventions in history have been based on bionic imitation. Heuristics, by applying biological systems to the creation of artificial devices and machines, might be one of the most critical processes in scientific innovation. In particular, prototype heuristics proposes that innovation may engage automatic activation of a prototype, such as a biological system, to form novel associations between a prototype's function and problem-solving. We speculated that the cortical dissociation between the automatic activation and the forming of novel associations in innovation is a critical point for heuristic creativity. In the present study, novel and old scientific innovations (NSI and OSI) were selected as experimental materials in a learning-testing paradigm to explore the neural basis of scientific innovation induced by heuristic prototypes. College students were required to resolve NSI problems (to which they did not know the answers) and OSI problems (to which they knew the answers). From two fMRI experiments, our results showed that the subjects could resolve NSI when provided with heuristic prototypes. In Experiment 1, it was found that the lingual gyrus (LG; BA18) might be related to prototype heuristics in college students resolving NSI after learning a relative prototype. In Experiment 2, the LG (BA18) and precuneus (BA31) were significantly activated for NSI compared to OSI when college students learned all prototypes one day before the test. In addition, the mean beta-values of these brain regions for NSI were all correlated with the behavioral accuracy of NSI. As our hypothesis indicated, the findings suggested that the LG might be involved in forming novel associations using heuristic information, while the precuneus might be involved in the automatic activation of heuristic prototypes during scientific innovation.
Heuristic Evaluation of eNote: an Electronic Notes System
Bright, Tiffani J.; Bakken, Suzanne; Johnson, Stephen B
2006-01-01
eNote is an electronic health record (EHR) system based on semi-structured narrative documents. A heuristic evaluation was conducted with a sample of five usability experts. eNote performed highly in: 1) consistency with standards and 2) recognition rather than recall. eNote needs improvement in: 1) help and documentation, 2) aesthetic and minimalist design, 3) error prevention, 4) helping users recognize, diagnose, and recover from errors, and 5) flexibility and efficiency of use. The heuristic ev...
Human Heuristics: Understanding the Impacts for Pharmaceutical Quality Risk Management
Calnan, Nuala
2011-01-01
This paper addresses the influence of human heuristics (biases) on the assessment of risk. A core principle underpinning effective Risk Management is that Risk Management explicitly addresses uncertainty, i.e., that it explicitly takes account of uncertainty, the nature of that uncertainty, and how it can be addressed. Heuristics are cognitive behaviours which come into play when we make judgements in the presence of uncertainty. How these behaviours are manifested is still the s...
Sensitivity and detection limit analysis of silicon nanowire bio(chemical) sensors
Chen, S.; van den Berg, Albert; Carlen, Edwin
2015-01-01
This paper presents an analysis of the sensitivity and detection limit of silicon nanowire biosensors using an analytical model in combination with I-V and current noise measurements. The analysis shows that the limit of detection (LOD) and signal to noise ratio (SNR) can be optimized by determining
DEFF Research Database (Denmark)
Price, Jason Anthony; Nordblad, Mathias; Woodley, John
2014-01-01
This paper demonstrates the added benefits of using uncertainty and sensitivity analysis in the kinetics of enzymatic biodiesel production. For this study, a kinetic model by Fedosov and co-workers is used. For the uncertainty analysis the Monte Carlo procedure was used to statistically quantify...
An interdisciplinary heuristic evaluation method for universal building design.
Afacan, Yasemin; Erbug, Cigdem
2009-07-01
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.
Interliminal Design: Understanding cognitive heuristics to mitigate design distortion
Directory of Open Access Journals (Sweden)
Andrew McCollough
2014-12-01
Full Text Available Cognitive heuristics are mental shortcuts adapted over time to enable rapid interpretation of our complex environment. They are intrinsic to human cognition and resist modification. Heuristics applied outside the context to which they are best suited are termed cognitive biases, and are the cause of systematic errors in judgment and reasoning. As both a cognitive and intuitive discipline, design by individuals is vulnerable to context-inappropriate heuristic usage. Designing in groups can act positively to counterbalance these tendencies, but is subject to heuristic misuse and biases particular to social environments. Mismatch between desired and actual outcomes, termed here design distortion, occurs when such usage goes unnoticed and unaddressed, and can affect multiple dimensions of a system. We propose a methodology, interliminal design, emerging from the Program in Collaborative Design at Pacific Northwest College of Art, to specifically address the influence of cognitive heuristics in design. This adaptive approach involves reflective, dialogic, inquiry-driven practices intended to increase awareness of heuristic usage and to identify aspects of the design process vulnerable to misuse at both individual and group levels. By facilitating the detection and mitigation of potentially costly errors in judgment and decision-making that create distortion, such metacognitive techniques can meaningfully improve design.
Sensitivity analysis of a branching process evolving on a network with application in epidemiology
Hautphenne, Sophie; Delvenne, Jean-Charles; Blondel, Vincent D
2015-01-01
We perform an analytical sensitivity analysis for a model of a continuous-time branching process evolving on a fixed network. This allows us to determine the relative importance of the model parameters to the growth of the population on the network. We then apply our results to the early stages of an influenza-like epidemic spreading among a set of cities connected by air routes in the United States. We also consider vaccination and analyze the sensitivity of the total size of the epidemic with respect to the fraction of vaccinated people. Our analysis shows that the epidemic growth is more sensitive with respect to transmission rates within cities than travel rates between cities. More generally, we highlight the fact that branching processes offer a powerful stochastic modeling tool with analytical formulas for sensitivity which are easy to use in practice.
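The kind of parametric sensitivity described above can be sketched numerically: for a multitype branching process, the long-run growth rate is the dominant eigenvalue of the transition-rate matrix, and its partial derivatives with respect to the parameters can be approximated by finite differences. The toy two-city model below is purely illustrative (the paper's actual network and analytical formulas are not reproduced); all parameter values are assumptions.

```python
import numpy as np

GAMMA = 1.0  # assumed recovery rate

def growth_rate(beta, travel):
    # Linearised two-city epidemic: the epidemic growth rate is the
    # dominant eigenvalue of the transition-rate matrix.
    omega = np.array([
        [beta[0] - GAMMA - travel, travel],
        [travel, beta[1] - GAMMA - travel],
    ])
    return np.linalg.eigvals(omega).real.max()

def central_diff(f, x, h=1e-6):
    # Scalar central finite difference: df/dx at x
    return (f(x + h) - f(x - h)) / (2.0 * h)

beta = np.array([1.5, 1.2])   # within-city transmission rates (assumed)
travel = 0.05                 # inter-city travel rate (assumed)

d_beta0 = central_diff(
    lambda b0: growth_rate(np.array([b0, beta[1]]), travel), beta[0])
d_travel = central_diff(lambda t: growth_rate(beta, t), travel)
```

In this toy configuration the growth rate responds more strongly to the within-city transmission rate than to the travel rate, consistent with the paper's qualitative finding; increasing travel between a high- and a low-transmission city can even lower the dominant eigenvalue.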
A Rasch analysis of nurses' ethical sensitivity to the norms of the code of conduct.
González-de Paz, Luis; Kostov, Belchin; Sisó-Almirall, Antoni; Zabalegui-Yárnoz, Adela
2012-10-01
To develop an instrument to measure nurses' ethical sensitivity and, secondarily, to use this instrument to compare nurses' ethical sensitivity between groups. Professional codes of conduct are widely accepted guidelines. However, their efficacy in daily nursing practice and influence on ethical sensitivity is controversial. A descriptive cross-sectional study was conducted. One hundred and forty-three registered nurses from Barcelona (Spain) participated in the study, of whom 45.83% were working in primary health care and 53.84% in hospital wards. A specifically designed confidential, self-administered questionnaire assessing ethical sensitivity was developed. Knowledge of the nursing code and data on ethical sensitivity were summarised, with the quality of the questionnaire assessed using Rasch analysis. The item on knowledge of the code showed that one-third of nurses knew the contents of the code and two-thirds had limited knowledge. To fit the Rasch model, it was necessary to rescore the rating scale from five to three categories. Residual principal components analysis confirmed the unidimensionality of the scale. Three items of the questionnaire presented fit problems with the Rasch model. Although nurses generally have high ethical sensitivity to their code of conduct, differences were found according to years of professional practice, place of work and knowledge of the code (p < 0.05). Overall, ethical sensitivity to the code was high. However, many factors might influence the degree of ethical sensitivity. Further research to measure ethical sensitivity using invariant measures such as Rasch units would be valuable. Other factors, such as assertiveness or courage, should be considered to improve ethical sensitivity to the code of conduct. Rigorous measurement studies and analysis in applied ethics are needed to assess ethical performance in practice. © 2012 Blackwell Publishing Ltd.
A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems
Abtahi, Amir-Reza; Bijari, Afsane
2017-09-01
In this paper, a hybrid meta-heuristic algorithm, based on imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The proposed hybrid algorithm inherits the advantages of the process of harmony creation in HS algorithm to improve the exploitation phase of the ICA algorithm. In addition, the proposed hybrid algorithm uses SA to make a balance between exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including genetic algorithm (GA), HS, and ICA on several well-known benchmark instances. The comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising and can be used in several real-life engineering and management problems.
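Of the three components combined in the hybrid above, simulated annealing is the easiest to show in isolation: it accepts worse candidate solutions with a temperature-controlled probability, trading exploration for exploitation as the temperature falls. A minimal sketch on the sphere benchmark function, with an assumed linear cooling schedule and Gaussian step size (not the paper's hybrid):

```python
import math
import random

def sphere(x):
    # Benchmark objective: global minimum 0 at the origin
    return sum(v * v for v in x)

def simulated_annealing(f, dim=5, iters=20_000, t0=1.0, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    best, fbest = list(x), fx
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9          # linear cooling schedule
        cand = [v + rng.gauss(0, 0.5) for v in x]
        fc = f(cand)
        # Metropolis acceptance: always take improvements, and take
        # worse moves with probability exp(-delta / T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

best, fbest = simulated_annealing(sphere)
```

The ICA and HS components of the paper's hybrid would replace the single random walk here with a population of "countries" and a harmony-memory-based candidate generator, respectively.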
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.
Carlis, John; Bruso, Kelsey
2012-03-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, RSQRT best-predicted K and the Bayesian information criterion (BIC) predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing.
Norris, Gareth
2015-01-01
The increasing use of multi-media applications, trial presentation software and computer generated exhibits (CGE) has raised questions as to the potential impact of presentation technology on juror decision making. Much of the commentary on the manner in which CGE exerts legal influence is anecdotal; empirical examinations, too, are often devoid of established theoretical rationalisations. This paper will examine a range of established judgement heuristics (for example, the attribution error, representativeness, simulation), in order to establish their appropriate application for comprehending legal decisions. Analysis of both past cases and empirical studies will highlight the potential for heuristics and biases to be restricted or confounded by the use of CGE. The paper will conclude with some wider discussion on admissibility, access to justice, and emerging issues in the use of multi-media in court. Copyright © 2015 Elsevier Ltd. All rights reserved.
A heuristic expert system for forest fire guidance in Greece.
Iliadis, Lazaros S; Papastavrou, Anastasios K; Lefakis, Panagiotis D
2002-07-01
Forests and forestlands are a common inheritance for all Greeks and a piece of the national wealth that must be handed over to the next generations in the best possible condition. Since 1974, Greece has faced a severe forest fire problem, and forest fire forecasting is the process that will enable the Greek Ministry of Agriculture to reduce the destruction. This paper describes the basic design principles of an expert system that performs forest fire forecasting (for the following fire season) and classification of the prefectures of Greece into forest fire risk zones. The expert system handles uncertainty and uses heuristics in order to produce scenarios based on the presence or absence of various qualitative factors. The initial research focused on the construction of a mathematical model which attempted to describe the annual number of forest fires and burnt area in Greece based on historical data. However, this proved to be impossible using regression analysis and time series. A closer analysis of the fire data revealed that two qualitative factors dramatically affect the number of forest fires and the hectares of burnt areas annually: the first is political stability and national elections, and the other is drought cycles. Heuristics were constructed that use political stability and drought cycles to provide forest fire guidance. Fuzzy logic was applied to produce a fuzzy expected interval for each prefecture of Greece; a fuzzy expected interval is a narrow interval of values that best describes the situation in the country, or a part of the country, for a certain time period. A successful classification of the prefectures of Greece into forest fire risk zones was achieved by comparing the fuzzy expected intervals to each other. The system was tested for the years 1994 and 1995. The testing has clearly shown that the system can accurately predict the number of forest fires for each prefecture for the following year, with an average accuracy as high as 85%.
Kamasani, Swapna; Akula, Sravani; Sivan, Sree Kanth; Manga, Vijjulatha; Duyster, Justus; Vudem, Dashavantha Reddy; Kancha, Rama Krishna
2017-05-01
The ABL kinase inhibitor imatinib has been used as front-line therapy for Philadelphia-positive chronic myeloid leukemia. However, a significant proportion of imatinib-treated patients relapse due to the occurrence of mutations in the ABL kinase domain. Although inhibitor sensitivity for a set of mutations was reported, the role of less frequent ABL kinase mutations in drug sensitivity/resistance is not known. Moreover, recent reports indicate distinct resistance profiles for second-generation ABL inhibitors. We thus employed a computational approach to predict the drug sensitivity of 234 point mutations that were reported in chronic myeloid leukemia patients. Initial validation analysis of our approach using a panel of previously studied frequent mutations indicated that the computational data generated in this study correlated well with the published experimental/clinical data. In addition, we present drug sensitivity profiles for the remaining point mutations by computational docking analysis using imatinib as well as the next-generation ABL inhibitors nilotinib, dasatinib, bosutinib, axitinib, and ponatinib. Our results indicate distinct drug sensitivity profiles for ABL mutants toward kinase inhibitors. In addition, drug sensitivity profiles of a set of compound mutations in ABL kinase are also presented in this study. Thus, our large-scale computational study provides comprehensive sensitivity/resistance profiles of ABL mutations toward specific kinase inhibitors.
Backman, John; Wood, Curtis R.; Auvinen, Mikko; Kangas, Leena; Hannuniemi, Hanna; Karppinen, Ari; Kukkonen, Jaakko
2017-10-01
The meteorological input parameters for urban- and local-scale dispersion models can be evaluated by preprocessing meteorological observations, using a boundary-layer parameterisation model. This study presents a sensitivity analysis of a meteorological preprocessor model (MPP-FMI) that utilises readily available meteorological data as input. The sensitivity of the preprocessor to meteorological input was analysed using algorithmic differentiation (AD). The AD tool used was TAPENADE. The AD method numerically evaluates the partial derivatives of functions that are implemented in a computer program. In this study, we focus on the evaluation of vertical fluxes in the atmosphere and in particular on the sensitivity of the predicted inverse Obukhov length and friction velocity to the model input parameters. The study shows that the estimated inverse Obukhov length and friction velocity are most sensitive to wind speed and second most sensitive to solar irradiation. The dependency on wind speed is most pronounced at low wind speeds. The presented results have implications for improving meteorological preprocessing models. AD is shown to be an efficient tool for quantitatively studying the ranges of sensitivities of the predicted parameters to the model input values. A wider use of such advanced sensitivity analysis methods could potentially be very useful in analysing and improving the models used in atmospheric sciences.
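Forward-mode algorithmic differentiation of the kind TAPENADE performs can be illustrated with dual numbers: each value carries its derivative with respect to a chosen input, and every arithmetic operation propagates both together. The sketch below differentiates a toy neutral-stability log-law for friction velocity with respect to wind speed; the formula and constants are illustrative stand-ins, not the MPP-FMI parameterisation.

```python
import math

class Dual:
    """Forward-mode AD value: function value and derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = Dual._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        # Product rule: (uv)' = u'v + uv'
        o = Dual._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__
    def __truediv__(self, o):
        # Quotient rule: (u/v)' = (u'v - uv') / v^2
        o = Dual._lift(o)
        return Dual(self.val / o.val,
                    (self.dot * o.val - self.val * o.dot) / o.val ** 2)

def dlog(x):
    # log with derivative propagation: (log u)' = u' / u
    return Dual(math.log(x.val), x.dot / x.val)

def friction_velocity(u, z=10.0, z0=0.1, kappa=0.4):
    # Toy neutral log-law: u* = kappa * U / ln(z / z0)
    return kappa * u / dlog(Dual(z / z0))

u = Dual(5.0, 1.0)            # seed: differentiate wrt wind speed
ustar = friction_velocity(u)  # ustar.dot is d(u*)/dU
```

Because the toy formula is linear in U, the propagated derivative equals u*/U exactly, which is an easy sanity check on the dual-number arithmetic.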
SENSITIVITY ANALYSIS OF BUILDING STRUCTURES WITHIN THE SCOPE OF ENERGY, ENVIRONMENT AND INVESTMENT
Directory of Open Access Journals (Sweden)
František Kulhánek
2015-10-01
Full Text Available The primary objective of this paper is to demonstrate the feasibility of sensitivity analysis with the dominant weight method for structural parts of building envelopes, including energy, ecological and financial assessments, and to compare different designs for the same structural part via multi-criteria assessment, using theoretical example designs. Multi-criteria assessment (MCA) of different structural designs, or alternatives, aims to find the best available alternative. The sensitivity analysis technique applied in this paper is based on the dominant weighting method. To choose the best thermal insulation design when several criteria apply simultaneously, the criteria of total thickness (T), heat transfer coefficient through the cross-section (U), global warming potential (GWP), acidification potential (AP), non-renewable primary energy content (PEI) and cost per m2 (C) are investigated for all designs via sensitivity analysis. Three different designs for an external wall (over soil) that meet globally suggested energy targets for passive house design are investigated through the six criteria mentioned. Sensitivity analysis is carried out by creating a given set of scenarios, depending upon the importance of each criterion. In conclusion, uncertainty in the model output is attributed to different sources in the model input, and in this manner the best available design is determined. The original ranking and the ranking after the sensitivity analysis are visualized, which makes it easy to choose the optimum design with respect to the verified components.
Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models
Energy Technology Data Exchange (ETDEWEB)
Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)
2011-04-15
Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
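A generalised index of the sort described, aggregating per-time-step main-effect indices over the whole series, can be sketched by brute force on a full factorial design. This is a stand-in for the paper's PCA/ANOVA machinery; the toy dynamic model and factor levels are assumptions.

```python
import numpy as np

# Full-factorial design over two discrete uncertain factors
a_levels = np.array([0.8, 1.0, 1.2])
b_levels = np.array([0.1, 0.2, 0.3])
t = np.linspace(0.0, 10.0, 50)

def model(a, b):
    # Toy dynamic output: factor a scales an oscillation,
    # factor b adds a linear trend
    return a * np.sin(t) + b * t

# Evaluate the model on every design point (9 runs, 50 time steps each)
runs = np.array([model(a, b) for a in a_levels for b in b_levels])
A = np.repeat(a_levels, len(b_levels))   # factor-a label of each run

var_t = runs.var(axis=0)                 # total output variance per time step
# First-order (main-effect) index of factor a at each time step:
# S_a(t) = Var_a(E[y | a, t]) / Var(y(t))
cond_mean = np.array([runs[A == a].mean(axis=0) for a in a_levels])
s_a_t = cond_mean.var(axis=0) / np.where(var_t > 0, var_t, 1.0)

# Generalised index: variance-weighted aggregate over the whole series
gsi_a = (s_a_t * var_t).sum() / var_t.sum()
```

Here the trend term dominates the total variance at late times, so the generalised index for the oscillation factor is small even though its per-time-step index is near one whenever the trend variance vanishes.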
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
Active Fault Diagnosis for Hybrid Systems Based on Sensitivity Analysis and EKF
DEFF Research Database (Denmark)
Gholami, Mehdi; Schiøler, Henrik; Bak, Thomas
2011-01-01
An active fault diagnosis approach for different kinds of faults is proposed. The input of the approach is designed off-line based on sensitivity analysis such that the maximum sensitivity for each individual system parameter is obtained. Using maximum sensitivity results in a better precision in the estimation of the corresponding parameter. Fault detection and isolation is done by comparing the nominal parameters with those estimated by an Extended Kalman Filter (EKF). In this study, Gaussian noise is used as the input disturbance as well as the measurement noise for simulation. The method is implemented...
Directory of Open Access Journals (Sweden)
Lucy Ana Vilela Staut
2017-11-01
Full Text Available This article presents the results of a case study of a planned commercial center, evaluated through a modified heuristics-based method in conformance with the Universal Design principles. The main objective is to analyze the potential of the modified heuristic evaluation as a universal usability inspection method, allied to the pluralistic inspection method. The evaluation is composed of three subsequent sessions: (1) a pre-interview, in which five expert interdisciplinary evaluators analyze the commercial center's architectural drawings with regard to the Universal Design principles and spatial accessibility, and fill out a structured interview form; (2) activities in the setting, in which each expert answers the form again while carrying out predefined activities; and (3) a post-interview, in which the forms are answered once more through a reanalysis of the drawings. The results show that the modified heuristic evaluation can be applied effectively both in project evaluation before construction and in post-occupancy evaluation, providing interaction between analysis of the architectural drawings and use of the space. Further studies on universal usability and its application to inclusive spaces are therefore of great importance. This research is an initial step toward universal usability evaluation in architecture, and recommends tests applied in the architectural context, because they highlight the importance of analyzing interdependent elements within the built environment and should be incorporated into the universal design process.