WorldWideScience

Sample records for pareto optimal set

  1. Prederivatives of gamma paraconvex set-valued maps and Pareto optimality conditions for set optimization problems.

    Science.gov (United States)

    Huang, Hui; Ning, Jixian

    2017-01-01

Prederivatives play an important role in the research of set optimization problems. First, we establish several existence theorems of prederivatives for γ-paraconvex set-valued mappings in Banach spaces with [Formula: see text]. Then, in terms of prederivatives, we establish both necessary and sufficient conditions for the existence of Pareto minimal solutions of set optimization problems.

  2. COMPROMISE-OPTIMAL TRACTION CALCULATIONS ON THE PARETO SET

    Directory of Open Access Journals (Sweden)

    V. V. Lahuta

    2010-11-01

The problem of optimum traction calculations is considered as a problem of optimum distribution of a resource. The dynamic programming solution is based on a step-by-step calculation of the set of Pareto-optimal values of a criterion function (energy expenses) and a resource (time).

  3. Approximating the Pareto set of multiobjective linear programs via robust optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a

  4. Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives can not be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust
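The definition in this abstract (solutions for which no objective can be improved without deteriorating another) can be made concrete with a small illustrative sketch, not taken from the paper; function names and data are hypothetical, and all objectives are assumed to be minimized:

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Return the non-dominated subset of a finite list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_set(points))  # → [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), so neither belongs to the Pareto set.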

  5. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from this Pareto set. Thus, a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier in accordance with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. The methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria.

  6. Pareto optimal pairwise sequence alignment.

    Science.gov (United States)

    DeRonne, Kevin W; Karypis, George

    2013-01-01

    Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.

  7. Pareto Optimization Identifies Diverse Set of Phosphorylation Signatures Predicting Response to Treatment with Dasatinib.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2015-01-01

    Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.

  8. Pareto-optimal alloys

    DEFF Research Database (Denmark)

    Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei

    2003-01-01

Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.

  9. Pareto optimization in algebraic dynamic programming.

    Science.gov (United States)

    Saule, Cédric; Giegerich, Robert

    2015-01-01

    Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator [Formula: see text] on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme [Formula: see text] correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
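The core of the Pareto product described here, combining two scoring schemes and retaining only non-dominated score pairs at each dynamic-programming step, can be sketched as follows. This is an illustrative toy (hypothetical names, both objectives maximized), not the paper's operator itself:

```python
def pareto_front(pairs):
    """Non-dominated subset for two maximized objectives via
    sort-and-sweep: sort by the first score descending and keep
    points whose second score strictly improves on the best so far."""
    front, best_b = [], float("-inf")
    for a, b in sorted(set(pairs), key=lambda p: (-p[0], -p[1])):
        if b > best_b:
            front.append((a, b))
            best_b = b
    return front

def pareto_product_step(left, right):
    """One DP combination step under a Pareto product: add scores
    component-wise over all candidate combinations, then keep only
    the non-dominated results."""
    return pareto_front([(a1 + a2, b1 + b2) for a1, b1 in left for a2, b2 in right])

print(pareto_product_step([(3, 1), (1, 4)], [(2, 2), (0, 5)]))
# → [(5, 3), (3, 6), (1, 9)]
```

Pruning dominated pairs at every step is what keeps the combined search space manageable, consistent with the empirical front sizes the abstract reports.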

  10. The Incompatibility of Pareto Optimality and Dominant-Strategy Incentive Compatibility in Sufficiently-Anonymous Budget-Constrained Quasilinear Settings

    Directory of Open Access Journals (Sweden)

    Rica Gonen

    2013-11-01

We analyze the space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal combinatorial auctions. We examine a model with multidimensional types, nonidentical items, private values and quasilinear preferences for the players with one relaxation; the players are subject to publicly-known budget constraints. We show that the space includes dictatorial mechanisms and that if dictatorial mechanisms are ruled out by a natural anonymity property, then an impossibility of design is revealed. The same impossibility naturally extends to other abstract mechanisms with an arbitrary outcome set if one maintains the original assumptions of players with quasilinear utilities, public budgets and nonnegative prices.

  11. Pareto optimality in infinite horizon linear quadratic differential games

    NARCIS (Netherlands)

    Reddy, P.V.; Engwerda, J.C.

    2013-01-01

    In this article we derive conditions for the existence of Pareto optimal solutions for linear quadratic infinite horizon cooperative differential games. First, we present a necessary and sufficient characterization for Pareto optimality which translates to solving a set of constrained optimal

  12. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.

  13. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
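The iterated Weakest Link Optimization idea above, improving the worst preference value, then the next worst, and so on, corresponds to selecting a leximin-optimal solution, which is Pareto optimal when candidates are compared this way. A minimal sketch over a finite candidate set (hypothetical names and data, not the paper's temporal-CSP algorithm):

```python
def leximin_key(values):
    """Sorted-ascending view of a preference vector: comparing these
    tuples lexicographically implements iterated weakest-link
    optimization (improve the worst value, then the next worst, ...)."""
    return tuple(sorted(values))

def iterated_wlo(candidates):
    """Pick the candidate that is best under the leximin order."""
    return max(candidates, key=leximin_key)

candidates = [(2, 9, 5), (4, 4, 8), (4, 6, 7)]
print(iterated_wlo(candidates))  # → (4, 6, 7)
```

Plain WLO would consider (4, 4, 8) and (4, 6, 7) equally good (both have worst value 4); the iterated criterion breaks the tie in favour of (4, 6, 7), whose second-worst value is better.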

  14. RNA-Pareto: interactive analysis of Pareto-optimal RNA sequence-structure alignments.

    Science.gov (United States)

    Schnattinger, Thomas; Schöning, Uwe; Marchfelder, Anita; Kestler, Hans A

    2013-12-01

    Incorporating secondary structure information into the alignment process improves the quality of RNA sequence alignments. Instead of using fixed weighting parameters, sequence and structure components can be treated as different objectives and optimized simultaneously. The result is not a single, but a Pareto-set of equally optimal solutions, which all represent different possible weighting parameters. We now provide the interactive graphical software tool RNA-Pareto, which allows a direct inspection of all feasible results to the pairwise RNA sequence-structure alignment problem and greatly facilitates the exploration of the optimal solution set.

  15. Post Pareto optimization-A case

    Science.gov (United States)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

Simulation performance may be evaluated according to multiple quality measures that compete with one another, so their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them, and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of the proposed approach by applying it with multiple stochastic quality measures. We formulate the performance criteria of this use case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto frontier, we analyze it and prescribe preference-dependent configurations for optimal simulation training.

  16. Efficient approximation of black-box functions and Pareto sets

    NARCIS (Netherlands)

    Rennen, G.

    2009-01-01

    In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the

  17. Pareto optimality in organelle energy metabolism analysis.

    Science.gov (United States)

    Angione, Claudio; Carapezza, Giovanni; Costanza, Jole; Lió, Pietro; Nicosia, Giuseppe

    2013-01-01

    In low and high eukaryotes, energy is collected or transformed in compartments, the organelles. The rich variety of size, characteristics, and density of the organelles makes it difficult to build a general picture. In this paper, we make use of the Pareto-front analysis to investigate the optimization of energy metabolism in mitochondria and chloroplasts. Using the Pareto optimality principle, we compare models of organelle metabolism on the basis of single- and multiobjective optimization, approximation techniques (the Bayesian Automatic Relevance Determination), robustness, and pathway sensitivity analysis. Finally, we report the first analysis of the metabolic model for the hydrogenosome of Trichomonas vaginalis, which is found in several protozoan parasites. Our analysis has shown the importance of the Pareto optimality for such comparison and for insights into the evolution of the metabolism from cytoplasmic to organelle bound, involving a model order reduction. We report that Pareto fronts represent an asymptotic analysis useful to describe the metabolism of an organism aimed at maximizing concurrently two or more metabolite concentrations.

  18. How Well Do We Know Pareto Optimality?

    Science.gov (United States)

    Mathur, Vijay K.

    1991-01-01

    Identifies sources of ambiguity in economics textbooks' discussion of the condition for efficient output mix. Points out that diverse statements without accompanying explanations create confusion among students. Argues that conflicting views concerning the concept of Pareto optimality as one source of ambiguity. Suggests clarifying additions to…

  19. Performance-based Pareto optimal design

    NARCIS (Netherlands)

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or make consistent their impartial estimations from case to case. Fuzzy logic and soft computing are

  20. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning

    International Nuclear Information System (INIS)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-01-01

    promise in optimizing the number of beams. Conclusions: This initial evaluation of the evolutionary optimization software tool pareto for IMRT treatment planning demonstrates feasibility and provides motivation for continued development. Advantages of this approach over current commercial methods for treatment planning are many, including: (1) fully automated optimization that avoids human controlled iterative optimization and potentially improves overall process efficiency, (2) formulation of the problem as a true multiobjective one, which provides an optimized set of Pareto nondominated solutions refined over hundreds of generations and compiled from thousands of parameter sets explored during the run, and (3) rapid exploration of the final nondominated set accomplished by a graphical interface used to select the best treatment option for the patient.

  1. PARETO: A novel evolutionary optimization approach to multiobjective IMRT planning.

    Science.gov (United States)

    Fiege, Jason; McCurdy, Boyd; Potrebko, Peter; Champion, Heather; Cull, Andrew

    2011-09-01

    of beams. This initial evaluation of the evolutionary optimization software tool pareto for IMRT treatment planning demonstrates feasibility and provides motivation for continued development. Advantages of this approach over current commercial methods for treatment planning are many, including: (1) fully automated optimization that avoids human controlled iterative optimization and potentially improves overall process efficiency, (2) formulation of the problem as a true multiobjective one, which provides an optimized set of Pareto nondominated solutions refined over hundreds of generations and compiled from thousands of parameter sets explored during the run, and (3) rapid exploration of the final nondominated set accomplished by a graphical interface used to select the best treatment option for the patient.

  2. Pareto-optimal phylogenetic tree reconciliation.

    Science.gov (United States)

    Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S; Kellis, Manolis

    2014-06-15

Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Our Python tools are freely available at www.cs.hmc.edu/~hadas/xscape. © The Author 2014. Published by Oxford University Press.

  3. Optimization of externalities using DTM measures: a Pareto optimal multi objective optimization using the evolutionary algorithm SPEA2+

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart

    2010-01-01

    Multi objective optimization of externalities of traffic is performed solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto optimal set is determined by employing the SPEA2+ evolutionary algorithm.

  4. Pareto Optimal Design for Synthetic Biology.

    Science.gov (United States)

    Patanè, Andrea; Santoro, Andrea; Costanza, Jole; Carapezza, Giovanni; Nicosia, Giuseppe

    2015-08-01

Recent advances in synthetic biology call for robust, flexible and efficient in silico optimization methodologies. We present a Pareto design approach for the bi-level optimization problem associated with the overproduction of specific metabolites in Escherichia coli. Our method efficiently explores the high-dimensional genetic manipulation space, finding a number of trade-offs between synthetic and biological objectives, hence furnishing deeper biological insight into the addressed problem and important results for industrial purposes. We demonstrate the computational capabilities of our Pareto-oriented approach by comparing it with state-of-the-art heuristics on the overproduction problems of i) 1,4-butanediol, ii) myristoyl-CoA, iii) malonyl-CoA, iv) acetate and v) succinate. We show that our algorithms are able to gracefully adapt and scale to more complex models and more biologically-relevant simulations of the allowed genetic manipulations. The results obtained for 1,4-butanediol overproduction significantly outperform previously obtained results in terms of 1,4-butanediol to biomass formation ratio and knock-out costs. In particular, the overproduction percentage is +662.7%, from 1.425 mmolh⁻¹gDW⁻¹ (wild type) to 10.869 mmolh⁻¹gDW⁻¹, with a knockout cost of 6. Moreover, the Pareto-optimal designs we found in the fatty acid optimizations strictly dominate those obtained by the other methodologies, e.g., biomass and myristoyl-CoA exportation improvements of +21.43% (0.17 h⁻¹) and +5.19% (1.62 mmolh⁻¹gDW⁻¹), respectively. Furthermore, the CPU time required by our heuristic approach is more than halved. Finally, we implement pathway-oriented sensitivity analysis, epsilon-dominance analysis and robustness analysis to enhance our biological understanding of the problem and to improve the optimization algorithm's capabilities.

  5. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    Science.gov (United States)

    Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2006-12-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
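Because the Pareto efficient frontier is convex here, each point of it can be reached by minimizing a weighted sum of the objectives, which is the basic fact that Sandwich-type algorithms build their piecewise-linear bounds on. A toy illustration under that assumption (not the paper's algorithm; the problem and names are hypothetical):

```python
def pareto_point(w):
    """For the toy convex bi-objective problem
        minimize f1(x) = x**2  and  f2(x) = (x - 1)**2,
    the minimizer of the weighted sum w*f1 + (1 - w)*f2 is x = 1 - w
    (set the derivative 2*w*x + 2*(1 - w)*(x - 1) to zero).
    Each weight in [0, 1] yields one point on the convex Pareto
    efficient frontier."""
    x = 1 - w
    return (x**2, (x - 1)**2)

frontier = [pareto_point(w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(frontier)
```

Sweeping the weight traces the trade-off curve; a Sandwich scheme would instead pick the next weight adaptively, where the gap between the upper and lower piecewise-linear bounds is largest.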

  6. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

    International Nuclear Information System (INIS)

    Hoffmann, Aswin L; Siem, Alex Y D; Hertog, Dick den; Kaanders, Johannes H A M; Huizenga, Henk

    2006-01-01

In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.

  7. Optimization of Wind Turbine Airfoil Using Nondominated Sorting Genetic Algorithm and Pareto Optimal Front

    Directory of Open Access Journals (Sweden)

    Ziaul Huque

    2012-01-01

A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization were performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at a range of 0° to 12° angles of attack (α) and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) were obtained for all six airfoils. A realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least squares method was used to generate response surfaces with the statistical code JMP. The Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer a choice to select, from a set of optimal solutions, the design compromise that best suits the requirements. The Pareto solution set is presented in the form of a Pareto optimal front.

  8. Phase transitions in Pareto optimal complex networks.

    Science.gov (United States)

    Seoane, Luís F; Solé, Ricard

    2015-09-01

    The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.

  9. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that simultaneously optimize more than one predefined objective. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  10. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, considering two objectives: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the two objectives, Pareto optimality is utilized to solve the model by generating its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support; it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and of Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be further applied in practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. A Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
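    One simple way to trace such a cost/time frontier is an ε-constraint sweep: minimize cost while progressively relaxing a bound on time. The sketch below is a generic illustration, not the normalized normal constraint method used in the paper, and the route names, costs, and times are hypothetical.

```python
# Hypothetical multi-modal routes: (name, transportation cost, transportation time).
routes = [("rail-road", 120, 50), ("road-only", 90, 70),
          ("rail-sea", 150, 35), ("sea-road", 100, 65), ("air-road", 200, 20)]

def epsilon_constraint_frontier(routes):
    """Trace the cost/time Pareto frontier by an epsilon-constraint sweep:
    for each candidate time bound, keep the cheapest feasible route."""
    frontier = []
    for _, _, eps in sorted(routes, key=lambda r: r[2]):
        feasible = [r for r in routes if r[2] <= eps]
        best = min(feasible, key=lambda r: r[1])
        if best not in frontier:
            frontier.append(best)
    return frontier

for name, cost, time in epsilon_constraint_frontier(routes):
    print(f"{name}: cost={cost}, time={time}")
```

    In a real instance the inner minimization would be the MILP solved by Lingo rather than a scan over an explicit route list.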

  11. A Pareto Optimal Auction Mechanism for Carbon Emission Rights

    Directory of Open Access Journals (Sweden)

    Mingxi Wang

    2014-01-01

    Full Text Available The carbon emission rights do not fit well into the framework of existing multi-item auction mechanisms because of their own unique features. This paper proposes a new auction mechanism which converges to a unique Pareto optimal equilibrium in a finite number of periods. In the proposed auction mechanism, the assignment outcome is Pareto efficient and the carbon emission rights’ resources are efficiently used. For commercial application and theoretical completeness, both discrete and continuous markets—represented by discrete and continuous bid prices, respectively—are examined, and the results show the existence of a Pareto optimal equilibrium under the constraint of individual rationality. With no ties, the Pareto optimal equilibrium can be further proven to be unique.

  12. Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.

    Science.gov (United States)

    Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip

    2016-01-01

    In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.

  13. Determination of Pareto frontier in multi-objective maintenance optimization

    International Nuclear Information System (INIS)

    Certa, Antonella; Galante, Giacomo; Lupo, Toni; Passannanti, Gianfranco

    2011-01-01

    The objective of a maintenance policy generally is the minimization of the global maintenance cost, which involves not only the direct costs of both the maintenance actions and the spare parts, but also those due to the system stops for preventive maintenance and the downtime on failure. For some operating systems the failure event can be dangerous, so they are required to operate while assuring a very high reliability level between two consecutive fixed stops. The present paper attempts to identify the set of elements on which to perform maintenance actions so that the system can assure the required reliability level until the next fixed stop for maintenance, minimizing both the global maintenance cost and the total maintenance time. In order to solve this constrained multi-objective optimization problem, an effective approach is proposed to obtain the best solutions (that is, the Pareto optimal frontier) among which the decision maker will choose the most suitable one. As is well known, describing the whole Pareto optimal frontier is generally a troublesome task. The paper proposes an algorithm able to rapidly overcome this problem, and its effectiveness is shown by an application to a case study regarding a complex series-parallel system.

  14. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
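    The non-dominated filtering at the heart of any exact Pareto front computation can be sketched as follows. This is a generic brute-force illustration over an enumerated discrete solution set, not the ripple-spreading algorithm from the paper; the objective values are made up.

```python
def pareto_front(solutions):
    """Return the exact set of non-dominated solutions (minimization).

    A solution is dominated if some other solution is no worse in every
    objective and differs in at least one (i.e., strictly better somewhere).
    """
    front = []
    for s in solutions:
        dominated = any(
            other != s and all(o <= v for o, v in zip(other, s))
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Enumerated objective vectors of a small discrete bi-objective problem.
candidates = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
print(pareto_front(candidates))  # [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]
```

    Exactness here comes from full enumeration; the paper's contribution is obtaining the same guarantee without enumerating everything, via the k best solutions of each single-objective problem.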

  15. Projections onto the Pareto surface in multicriteria radiation therapy optimization.

    Science.gov (United States)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-10-01

    To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose-volume histogram constraints are used to prevent the projection from causing a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  16. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    International Nuclear Information System (INIS)

    Bokrantz, Rasmus; Miettinen, Kaisa

    2015-01-01

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose–volume histogram constraints are used to prevent the projection from causing a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.

  17. PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP) with two objectives, one linear and the other non-linear, is considered, with the constraint that a job is assigned to only one worker – though a worker may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers under the two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

    AFRIKAANSE OPSOMMING (translated): A multi-objective generalised assignment problem (MGAP) with two objectives, one linear and the other non-linear, is studied, under the constraint that a job is assigned to only one worker – although more than one job may be assigned to him if the time is available. An algorithm is proposed to find the set of Pareto-optimal solutions that performs the assignment of jobs to workers subject to the two objectives without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time needed to complete all the jobs.

  18. Pareto-Optimal Estimates of California Precipitation Change

    Science.gov (United States)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  19. Diversity comparison of Pareto front approximations in many-objective optimization.

    Science.gov (United States)

    Li, Miqing; Yang, Shengxiang; Liu, Xiaohui

    2014-12-01

    Diversity assessment of Pareto front approximations is an important issue in the stochastic multiobjective optimization community. Most of the diversity indicators in the literature were designed, in principle, to work for any number of objectives, but in practice many of them are infeasible or unworkable when the number of objectives is large. In this paper, we propose a diversity comparison indicator (DCI) to assess the diversity of Pareto front approximations in many-objective optimization. DCI evaluates the relative quality of different Pareto front approximations rather than providing an absolute measure of distribution for a single approximation. In DCI, all the concerned approximations are put into a grid environment so that there are some hyperboxes containing one or more solutions. The proposed indicator only considers the contribution of different approximations to nonempty hyperboxes. Therefore, the computational cost does not increase exponentially with the number of objectives. In fact, the implementation of DCI is of quadratic time complexity, which is fully independent of the number of divisions used in the grid. Systematic experiments are conducted using three groups of artificial Pareto front approximations and seven groups of real Pareto front approximations with different numbers of objectives to verify the effectiveness of DCI. Moreover, a comparison with two diversity indicators used widely in many-objective optimization is made analytically and empirically. Finally, a parametric investigation reveals interesting insights into the choice of division number in the grid and also offers some suggested settings for users with different preferences.
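    The hyperbox bookkeeping that keeps such a grid indicator cheap can be sketched as follows. This is an illustrative reduction, not the DCI implementation; the approximation sets, objective bounds, and division number are made up.

```python
def hyperbox(point, lows, highs, divisions):
    """Map an objective vector to the coordinates of its grid hyperbox."""
    coords = []
    for p, lo, hi in zip(point, lows, highs):
        width = (hi - lo) / divisions
        # Clamp so points on the upper boundary fall into the last box.
        coords.append(min(int((p - lo) / width), divisions - 1))
    return tuple(coords)

# Two Pareto front approximations mapped onto a shared 4x4 grid over [0, 1]^2.
set_a = [(0.05, 0.9), (0.3, 0.6), (0.8, 0.1)]
set_b = [(0.1, 0.85), (0.55, 0.35)]
lows, highs = (0.0, 0.0), (1.0, 1.0)
boxes_a = {hyperbox(p, lows, highs, 4) for p in set_a}
boxes_b = {hyperbox(p, lows, highs, 4) for p in set_b}
# Only non-empty hyperboxes are ever stored, so memory and time scale with
# the number of solutions rather than with divisions ** num_objectives.
print(boxes_a, boxes_b, boxes_a & boxes_b)
```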

  20. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    Science.gov (United States)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can be either analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets, and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.

  1. Can we reach Pareto optimal outcomes using bottom-up approaches?

    NARCIS (Netherlands)

    V. Sanchez-Anguix (Victor); R. Aydoğan (Reyhan); T. Baarslag (Tim); C.M. Jonker (Catholijn)

    2016-01-01

    Classically, disciplines like negotiation and decision making have focused on reaching Pareto optimal solutions due to their stability and efficiency properties. Despite the fact that many practical and theoretical algorithms have successfully attempted to provide Pareto optimal solutions,

  2. A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems.

    Science.gov (United States)

    Brusco, Michael J; Steinley, Douglas

    2012-02-01

    There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n × n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set.
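    The limitation can be seen on a three-point toy example (hypothetical objective values, both criteria to be minimized): the middle point is Pareto efficient but unsupported, lying above the convex hull of the outcome set, so no weighting of the two objectives ever selects it.

```python
# Objective vectors of three candidate permutations (minimize both criteria).
points = [(0, 4), (2, 3), (4, 0)]

def weighted_sum_optima(points, steps=101):
    """Collect every point that minimizes w*f1 + (1 - w)*f2 for some w in [0, 1]."""
    winners = set()
    for i in range(steps):
        w = i / (steps - 1)
        winners.add(min(points, key=lambda p: w * p[0] + (1 - w) * p[1]))
    return winners

supported = weighted_sum_optima(points)
print(supported)            # only (0, 4) and (4, 0) are ever optimal
print((2, 3) in supported)  # False, although (2, 3) is non-dominated
```

    A heuristic that searches the non-dominated set directly, as the paper proposes, can also return (2, 3).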

  3. Decomposition and Simplification of Multivariate Data using Pareto Sets.

    Science.gov (United States)

    Huettenberger, Lars; Heine, Christian; Garth, Christoph

    2014-12-01

    Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.

  4. Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization.

    Science.gov (United States)

    Craft, David

    2010-10-01

    A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets.

  5. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    Energy Technology Data Exchange (ETDEWEB)

    Bhunia, Uttam, E-mail: ubhunia@vecc.gov.in; Saha, Subimal; Chakrabarti, Alok

    2014-10-15

    Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low temperature superconducting cable for a medium size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and of the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low temperature niobium–titanium based Rutherford type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with a reasonably acceptable accuracy.

  6. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    International Nuclear Information System (INIS)

    Bhunia, Uttam; Saha, Subimal; Chakrabarti, Alok

    2014-01-01

    Highlights: • The optimization approach minimizes both the magnet size and the necessary cable length of a sectored toroidal SMES unit. • The design approach is suitable for low temperature superconducting cable for a medium size SMES unit. • It investigates coil parameters with respect to practical engineering aspects. - Abstract: A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and of the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low temperature niobium–titanium based Rutherford type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with a reasonably acceptable accuracy.

  7. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.

  8. Pareto optimal design of sectored toroidal superconducting magnet for SMES

    Science.gov (United States)

    Bhunia, Uttam; Saha, Subimal; Chakrabarti, Alok

    2014-10-01

    A novel multi-objective optimization design approach for a sectored toroidal superconducting magnetic energy storage coil has been developed considering practical engineering constraints. The objectives include the minimization of the necessary superconductor length and of the torus overall size or volume, which determines a significant part of the cost towards realization of SMES. The best trade-off between the necessary conductor length for winding and the magnet overall size is achieved in the Pareto-optimal solutions: a compact magnet size leads to an increase in the required superconducting cable length, or vice versa. The final choice among Pareto-optimal configurations can be made in relation to other issues such as AC loss during transient operation, stray magnetic field outside the coil assembly, and available discharge period, which are not considered in the optimization process. The proposed design approach is adapted for a 4.5 MJ/1 MW SMES system using low temperature niobium-titanium based Rutherford type cable. Furthermore, the validity of the representative Pareto solutions is confirmed by finite-element analysis (FEA) with a reasonably acceptable accuracy.

  9. Kantian Optimization, Social Ethos, and Pareto Efficiency

    OpenAIRE

    John E. Roemer

    2012-01-01

    Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate an interdependent kind of optimizing behavior, called Kantian. It is shown that in simple economic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive ...

  10. Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.

    Science.gov (United States)

    Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric

    2010-07-20

    Accurate protein loop structure models are important to understand the functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, a consensus model ranking approach that integrates multiple knowledge- or physics-based scoring functions. The procedure for identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops of 4 to 12 residues in length, using a functional space composed of several carefully selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding the best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation and in identifying near-native models (in terms of RMSD). By combining Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.

  11. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    Full Text Available This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is selected to be the number of unobservable buses, determined based on Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated. To solve it, a DE algorithm based on the Pareto optimum method is proposed here. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.

  12. Evolutionary tradeoffs, Pareto optimality and the morphology of ammonite shells.

    Science.gov (United States)

    Tendler, Avichai; Mayo, Avraham; Alon, Uri

    2015-03-07

    Organisms that need to perform multiple tasks face a fundamental tradeoff: no design can be optimal at all tasks at once. Recent theory based on Pareto optimality showed that such tradeoffs lead to a highly defined range of phenotypes, which lie in low-dimensional polyhedra in the space of traits. The vertices of these polyhedra are called archetypes: the phenotypes that are optimal at a single task. Rigorously testing this theory requires measurements of thousands of species over hundreds of millions of years of evolution. Ammonoid fossil shells provide an excellent model system for this purpose. Ammonoids have a well-defined geometry that can be parameterized using three dimensionless features of their logarithmic-spiral-shaped shells. Their evolutionary history includes repeated mass extinctions. We find that ammonoids fill out a pyramid in morphospace, suggesting five specific tasks, one for each vertex of the pyramid. After mass extinctions, surviving species evolve to refill essentially the same pyramid, suggesting that the tasks are unchanging. We infer putative tasks for each archetype, related to economy of shell material, rapid shell growth, hydrodynamics and compactness. These results support Pareto optimality theory as an approach to study evolutionary tradeoffs, and demonstrate how this approach can be used to infer the putative tasks that may shape the natural selection of phenotypes.

  13. Pareto optimization of an industrial ecosystem: sustainability maximization

    Directory of Open Access Journals (Sweden)

    J. G. M.-S. Monteiro

    2010-09-01

    Full Text Available This work investigates a procedure to design an Industrial Ecosystem for sequestrating CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while Profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is to achieve maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.

  14. The application of analytical methods to the study of Pareto - optimal control systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2014-01-01

    Full Text Available The subject of this article is multicriteria optimization methods and their application to the parametric synthesis of double-circuit control systems when the individual criteria conflict. The basis for solving multicriteria problems is the fundamental principle of multicriteria choice, the Edgeworth-Pareto principle. Because the individual criteria conflict, obtaining the Pareto-optimal variants does not by itself constitute a final decision; the set of these variants is only offered to the designer (the decision maker). An important issue with traditional numerical methods is their computational cost. Examples include methods that probe the parameter space, including those based on uniform grids and uniformly distributed sequences; the computational approximation of the Pareto boundary is a particularly demanding task. The purpose of this work is to develop fairly simple methods for finding Pareto-optimal solutions when the criteria are given in analytical form. The proposed approach is based on studying the properties of the analytical dependences of the criteria and covers a case not treated in the literature so far, namely a problem topology characterized by the tangency behaviour of the indifference curves (level lines). It is shown that compromise solutions can be identified for such problems. The angular position of the antigradient to the indifference curves in the parameter space, relative to the coordinate axes, is proposed as a tool. Propositions on the comonotonicity and contramonotonicity properties and on the angular characteristics of the antigradient are formulated for determining Pareto-optimal solutions.
The general calculation algorithm is as follows: determine the admissible range of parameter values; investigate the comonotonicity and contramonotonicity properties; construct the level lines (indifference curves); and determine the type of tangency: one-sided (the problem is not strictly multicriteria) or two-sided (the problem is a genuine Pareto multicriteria problem).

  15. Pareto-optimal multi-objective design of airplane control systems

    Science.gov (United States)

    Schy, A. A.; Johnson, K. G.; Giesy, D. P.

    1980-01-01

    A constrained minimization algorithm for the computer aided design of airplane control systems to meet many requirements over a set of flight conditions is generalized using the concept of Pareto-optimization. The new algorithm yields solutions on the boundary of the achievable domain in objective space in a single run, whereas the older method required a sequence of runs to approximate such a limiting solution. However, Pareto-optimality does not guarantee a satisfactory design, since such solutions may emphasize some objectives at the expense of others. The designer must still interact with the program to obtain a well-balanced set of objectives. Using the example of a fighter lateral stability augmentation system (SAS) design over five flight conditions, several effective techniques are developed for obtaining well-balanced Pareto-optimal solutions. For comparison, one of these techniques is also used in a recently developed algorithm of Kreisselmeier and Steinhauser, which replaces the hard constraints with soft constraints, using a special penalty function. It is shown that comparable results can be obtained.

  16. A Knowledge-Informed and Pareto-Based Artificial Bee Colony Optimization Algorithm for Multi-Objective Land-Use Allocation

    Directory of Open Access Journals (Sweden)

    Lina Yang

    2018-02-01

    Full Text Available Land-use allocation is of great significance in urban development. This type of allocation is usually considered to be a complex multi-objective spatial optimization problem, whose optimized result is a set of Pareto-optimal solutions (the Pareto front) reflecting different tradeoffs in several objectives. However, obtaining a Pareto front is a challenging task, and the Pareto fronts obtained by state-of-the-art algorithms are still not sufficient. To achieve better Pareto solutions, taking the grid-representative land-use allocation problem with two objectives as an example, an artificial bee colony optimization algorithm for multi-objective land-use allocation (ABC-MOLA) is proposed. In this algorithm, the traditional ABC's search direction guiding scheme and solution maintaining process are modified. In addition, a knowledge-informed neighborhood search strategy, which utilizes the auxiliary knowledge of natural geography and spatial structures to facilitate the neighborhood spatial search around each solution, is developed to further improve the Pareto front's quality. A series of comparison experiments (a simulated experiment with a small data volume and a real-world data experiment for a large area) shows that all the Pareto fronts obtained by ABC-MOLA totally dominate the Pareto fronts of the other algorithms, which demonstrates ABC-MOLA's effectiveness in achieving Pareto fronts of high quality.

  17. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    Science.gov (United States)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Since neighboring basins are often physically similar, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.

  18. Comprehensive preference optimization of an irreversible thermal engine using pareto based mutable smart bee algorithm and generalized regression neural network

    DEFF Research Database (Denmark)

    Mozaffari, Ahmad; Gorji-Bandpy, Mofid; Samadian, Pendar

    2013-01-01

    Optimizing and controlling complex engineering systems is a phenomenon that has attracted the growing interest of numerous scientists. Until now, a variety of intelligent optimizing and controlling techniques such as neural networks, fuzzy logic, game theory, support vector machines… and stochastic algorithms were proposed to facilitate controlling of the engineering systems. In this study, an extended version of the mutable smart bee algorithm (MSBA) called Pareto based mutable smart bee (PBMSB) is proposed to cope with multi-objective problems. Besides, a set of benchmark problems and four… well-known Pareto based optimizing algorithms, i.e. the multi-objective bee algorithm (MOBA), the multi-objective particle swarm optimization (MOPSO) algorithm, the non-dominated sorting genetic algorithm (NSGA-II), and the strength Pareto evolutionary algorithm (SPEA 2), are utilized to confirm the acceptable…

  19. Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

    Directory of Open Access Journals (Sweden)

    Vimal Savsani

    2017-01-01

    Full Text Available Most of the modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named the multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approach of the elitist based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and to preserve the diversity among the optimal set of solutions, respectively. The capability of MOHTS in yielding a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves the better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and further the comparison of the extreme points of the obtained Pareto front reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
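The crowding distance machinery that MOHTS borrows from NSGA-II can be made concrete with a short sketch. The following is a minimal, self-contained version of the standard NSGA-II crowding distance computation; the function name and the tuple representation of objective vectors are illustrative, not taken from the paper:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (NSGA-II style).

    Boundary solutions along each objective get infinite distance so they are
    always retained; interior solutions accumulate the normalized gap between
    their neighbors, objective by objective.
    """
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Sort solution indices by the k-th objective.
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue  # objective is constant on this front
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (fmax - fmin)
    return dist
```

Solutions with larger crowding distance sit in sparser regions of the front, which is why the selection step prefers them when truncating a nondomination level.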

  20. Hybridization of Strength Pareto Multiobjective Optimization with Modified Cuckoo Search Algorithm for Rectangular Array.

    Science.gov (United States)

    Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah

    2017-04-20

    This research proposes several versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and dynamic discovery rate to manage the fraction probability of finding the best host nests in 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming the MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined to perform MO optimization on Zitzler-Deb-Thiele's (ZDT's) test functions. Pareto-optimal trade-offs are performed to generate a set of three non-dominated solutions, which are locations, excitation amplitudes, and excitation phases of array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms other compatible competitors in gaining a high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL) suppression, and/or significant predefined nulls mitigation, simultaneously.
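The ZDT test functions used for benchmarking above are standard and fully specified in the literature. As a reference point, here is a minimal sketch of ZDT1, the first problem of the suite, assuming its usual definition with decision variables in [0, 1]:

```python
import math

def zdt1(x):
    """ZDT1 benchmark (minimize f1 and f2), x a list of values in [0, 1].

    g(x) == 1 exactly when x[1:] are all zero; on that set the Pareto front
    is f2 = 1 - sqrt(f1).
    """
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

Because the true Pareto front is known in closed form, ZDT1 lets convergence and spread metrics (such as the hypervolume used elsewhere in these records) be computed against an exact reference.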

  1. Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization.

    Science.gov (United States)

    Elhossini, Ahmed; Areibi, Shawki; Dony, Robert

    2010-01-01

    This paper proposes an efficient particle swarm optimization (PSO) technique that can handle multi-objective optimization problems. It is based on the strength Pareto approach originally used in evolutionary algorithms (EA). The proposed modified particle swarm algorithm is used to build three hybrid EA-PSO algorithms to solve different multi-objective optimization problems. This algorithm and its hybrid forms are tested using seven benchmarks from the literature and the results are compared to the strength Pareto evolutionary algorithm (SPEA2) and a competitive multi-objective PSO using several metrics. The proposed algorithm shows a slower convergence, compared to the other algorithms, but requires less CPU time. Combining PSO and evolutionary algorithms leads to superior hybrid algorithms that outperform SPEA2, the competitive multi-objective PSO (MO-PSO), and the proposed strength Pareto PSO based on different metrics.

  2. Choosing the optimal Pareto composition of the charge material for the manufacture of composite blanks

    Science.gov (United States)

    Zalazinsky, A. G.; Kryuchkov, D. I.; Nesterenko, A. V.; Titov, V. G.

    2017-12-01

    The results of an experimental study of the mechanical properties of pressed and sintered briquettes consisting of powders obtained from a high-strength VT-22 titanium alloy by plasma spraying, with additives of PTM-1 titanium powder obtained by the hydride-calcium method and powder of PV-N70Yu30 nickel-aluminum alloy, are presented. The task is to choose an optimal charge composition of the composite material that provides the required mechanical characteristics and cost of semi-finished products and items. Pareto optimal values for the composition of the composite material charge have been obtained.

  3. Finding the Pareto Optimal Equitable Allocation of Homogeneous Divisible Goods Among Three Players

    Directory of Open Access Journals (Sweden)

    Marco Dall'Aglio

    2017-01-01

    Full Text Available We consider the allocation of a finite number of homogeneous divisible items among three players. Under the assumption that each player assigns a positive value to every item, we develop a simple algorithm that returns a Pareto optimal and equitable allocation. This is based on the tight relationship between two geometric objects of fair division: the Individual Pieces Set (IPS) and the Radon-Nikodym Set (RNS). The algorithm can be considered as an extension of the Adjusted Winner procedure by Brams and Taylor to the three-player case, without the guarantee of envy-freeness. (original abstract)

  4. Discrepancies between selected Pareto optimal plans and final deliverable plans in radiotherapy multi-criteria optimization.

    Science.gov (United States)

    Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-08-01

    Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. A Pareto-based multi-objective optimization algorithm to design energy-efficient shading devices

    International Nuclear Information System (INIS)

    Khoroshiltseva, Marina; Slanzi, Debora; Poli, Irene

    2016-01-01

    Highlights: • We present a multi-objective optimization algorithm for shading design. • We combine Harmony search and Pareto-based procedures. • Thermal and daylighting performances of external shading were considered. • We applied the optimization process to a residential social housing in Madrid. - Abstract: In this paper we address the problem of designing new energy-efficient static daylight devices that will surround the external windows of a residential building in Madrid. Shading devices can in fact largely influence solar gains in a building and improve thermal and lighting comforts by selectively intercepting the solar radiation and by reducing the undesirable glare. A proper shading device can therefore significantly increase the thermal performance of a building by reducing its energy demand in different climate conditions. In order to identify the set of optimal shading devices that allow a low energy consumption of the dwelling while maintaining high levels of thermal and lighting comfort for the inhabitants we derive a multi-objective optimization methodology based on Harmony Search and Pareto front approaches. The results show that the multi-objective approach here proposed is an effective procedure in designing energy efficient shading devices when a large set of conflicting objectives characterizes the performance of the proposed solutions.

  6. Computing the Pareto-Nash equilibrium set in finite multi-objective mixed-strategy games

    Directory of Open Access Journals (Sweden)

    Victoria Lozan

    2013-10-01

    Full Text Available The Pareto-Nash equilibrium set (PNES is described as intersection of graphs of efficient response mappings. The problem of PNES computing in finite multi-objective mixed-strategy games (Pareto-Nash games is considered. A method for PNES computing is studied. Mathematics Subject Classification 2010: 91A05, 91A06, 91A10, 91A43, 91A44.

  7. Active learning of Pareto fronts.

    Science.gov (United States)

    Campigotto, Paolo; Passerini, Andrea; Battiti, Roberto

    2014-03-01

    This paper introduces the active learning of Pareto fronts (ALP) algorithm, a novel approach to recover the Pareto front of a multiobjective optimization problem. ALP casts the identification of the Pareto front into a supervised machine learning task. This approach enables an analytical model of the Pareto front to be built. The computational effort in generating the supervised information is reduced by an active learning strategy. In particular, the model is learned from a set of informative training objective vectors. The training objective vectors are approximated Pareto-optimal vectors obtained by solving different scalarized problem instances. The experimental results show that ALP achieves an accurate Pareto front approximation with a lower computational effort than state-of-the-art estimation of distribution algorithms and widely known genetic techniques.

  8. A divide and conquer approach to determine the Pareto frontier for optimization of protein engineering experiments

    Science.gov (United States)

    He, Lu; Friedman, Alan M.; Bailey-Kellogg, Chris

    2016-01-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability vs. novelty, affinity vs. specificity, activity vs. immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not “dominated”; i.e., no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), in order to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, PEPFR (Protein Engineering Pareto FRontier), that hierarchically subdivides the objective space, employing appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. PMID:22180081
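The divide-and-conquer idea described above, where the number of optimizer calls is proportional to the number of Pareto optimal designs, can be illustrated for two objectives with a brute-force "optimizer" standing in for PEPFR's dynamic or integer programming subroutines. This is a hedged sketch of the general bi-objective scheme, not the PEPFR implementation; all names are illustrative:

```python
def pareto_frontier(designs, f1, f2):
    """Divide-and-conquer Pareto frontier for two objectives (both minimized)
    over a finite design set. Each recursive call brackets a slice of
    objective space and issues one 'optimizer' query (here a brute-force min)
    inside it, in the spirit of PEPFR."""
    # Extremes: lexicographic minima of (f1, f2) and (f2, f1).
    lo = min(designs, key=lambda d: (f1(d), f2(d)))
    hi = min(designs, key=lambda d: (f2(d), f1(d)))
    front = {lo, hi}

    def recurse(a, b):
        # Designs strictly between a and b along f1.
        gap = [d for d in designs if f1(a) < f1(d) < f1(b)]
        if not gap:
            return
        d = min(gap, key=lambda d: (f2(d), f1(d)))  # one optimizer call
        if f2(d) < f2(a):  # d is nondominated: subdivide on both sides
            front.add(d)
            recurse(a, d)
            recurse(d, b)
        # otherwise every design in the gap is dominated by a; stop

    recurse(lo, hi)
    return sorted(front, key=f1)
```

Each recursion either discovers a new frontier point or closes a gap, so the number of optimizer queries grows with the size of the frontier rather than with the (typically much larger) number of dominated designs.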

  9. Feasibility of identification of gamma knife planning strategies by identification of pareto optimal gamma knife plans.

    Science.gov (United States)

    Giller, C A

    2011-12-01

    The use of conformity indices to optimize Gamma Knife planning is common, but does not address important tradeoffs between dose to tumor and normal tissue. Pareto analysis has been used for this purpose in other applications, but not for Gamma Knife (GK) planning. The goal of this work is to use computer models to show that Pareto analysis may be feasible for GK planning to identify dosimetric tradeoffs. We define a GK plan A to be Pareto dominant to B if the prescription isodose volume of A covers more tumor but not more normal tissue than B, or if A covers less normal tissue but not less tumor than B. A plan is Pareto optimal if it is not dominated by any other plan. Two different Pareto optimal plans represent different tradeoffs between dose to tumor and normal tissue, because neither plan dominates the other. 'GK simulator' software calculated dose distributions for GK plans, and was called repetitively by a genetic algorithm to calculate Pareto dominant plans. Three irregular tumor shapes were tested in 17 trials using various combinations of shots. The mean number of Pareto dominant plans/trial was 59 ± 17 (sd). Different planning strategies were identified by large differences in shot positions, and 70 of the 153 coordinate plots (46%) showed differences of 5mm or more. The Pareto dominant plans dominated other nearby plans. Pareto dominant plans represent different dosimetric tradeoffs and can be systematically calculated using genetic algorithms. Automatic identification of non-intuitive planning strategies may be feasible with these methods.
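The dominance definition used in this record (plan A dominates B if it covers more tumor but not more normal tissue, or less normal tissue but not less tumor) translates directly into code. A minimal sketch, with plans represented as hypothetical (tumor_covered, normal_tissue) pairs rather than the study's dose distributions:

```python
def dominates(a, b):
    """Plan a dominates plan b: a covers at least as much tumor and at most
    as much normal tissue, with at least one strict improvement.
    Plans are (tumor_covered, normal_tissue) pairs; tumor coverage is
    maximized, normal tissue coverage minimized."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_optimal(plans):
    """Keep only plans not dominated by any other plan."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]
```

Two surviving plans then represent genuinely different tradeoffs between target coverage and normal tissue sparing, since neither dominates the other.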

  10. Test scheduling optimization for 3D network-on-chip based on cloud evolutionary algorithm of Pareto multi-objective

    Science.gov (United States)

    Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan

    2018-03-01

    In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.

  11. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    Science.gov (United States)

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2017-01-01

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both the utopian and the nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II on two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
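The Tchebychev-based achievement scalarizing function at the heart of Global WASF-GA can be sketched in a few lines. This shows a common augmented form; the augmentation constant rho and the exact sign conventions are assumptions rather than details taken from the paper, and the algorithm evaluates the function with both the utopian and the nadir vector in the role of ref:

```python
def asf(f, ref, w, rho=1e-6):
    """Weighted Tchebychev achievement scalarizing function with a small
    augmentation term. Smaller values mean the objective vector f is closer
    to the reference point ref along the direction encoded by the weights w."""
    terms = [wi * (fi - ri) for fi, ri, wi in zip(f, ref, w)]
    # max term gives the Tchebychev part; the rho-weighted sum breaks ties
    # between weakly and properly Pareto optimal points.
    return max(terms) + rho * sum(terms)
```

Minimizing this function over a population for many well-distributed weight vectors yields one well-spread solution per weight, which is how the algorithm covers the whole front.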

  12. Diversity shrinkage: Cross-validating pareto-optimal weights to enhance diversity via hiring practices.

    Science.gov (United States)

    Song, Q Chelsea; Wee, Serena; Newman, Daniel A

    2017-12-01

    To reduce adverse impact potential and improve diversity outcomes from personnel selection, one promising technique is De Corte, Lievens, and Sackett's (2007) Pareto-optimal weighting strategy. De Corte et al.'s strategy has been demonstrated on (a) a composite of cognitive and noncognitive (e.g., personality) tests (De Corte, Lievens, & Sackett, 2008) and (b) a composite of specific cognitive ability subtests (Wee, Newman, & Joseph, 2014). Both studies illustrated how Pareto-weighting (in contrast to unit weighting) could lead to substantial improvement in diversity outcomes (i.e., diversity improvement), sometimes more than doubling the number of job offers for minority applicants. The current work addresses a key limitation of the technique-the possibility of shrinkage, especially diversity shrinkage, in the Pareto-optimal solutions. Using Monte Carlo simulations, sample size and predictor combinations were varied and cross-validated Pareto-optimal solutions were obtained. Although diversity shrinkage was sizable for a composite of cognitive and noncognitive predictors when sample size was at or below 500, diversity shrinkage was typically negligible for a composite of specific cognitive subtest predictors when sample size was at least 100. Diversity shrinkage was larger when the Pareto-optimal solution suggested substantial diversity improvement. When sample size was at least 100, cross-validated Pareto-optimal weights typically outperformed unit weights-suggesting that diversity improvement is often possible, despite diversity shrinkage. Implications for Pareto-optimal weighting, adverse impact, sample size of validation studies, and optimizing the diversity-job performance tradeoff are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. The comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order has performance comparable to that with sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.

  14. Modelling and Pareto optimization of heat transfer and flow coefficients in microchannels using GMDH type neural networks and genetic algorithms

    International Nuclear Information System (INIS)

    Amanifard, N.; Nariman-Zadeh, N.; Borji, M.; Khalkhali, A.; Habibdoust, A.

    2008-01-01

    Three-dimensional heat transfer characteristics and pressure drop of water flow in a set of rectangular microchannels are numerically investigated using Fluent and compared with those of experimental results. Two metamodels based on the evolved group method of data handling (GMDH) type neural networks are then obtained for modelling of both the pressure drop (ΔP) and the Nusselt number (Nu) with respect to design variables such as the geometrical parameters of the microchannels, the amount of heat flux and the Reynolds number. Using the obtained polynomial neural networks, a multi-objective genetic algorithm (the non-dominated sorting genetic algorithm, NSGA-II) with a new diversity-preserving mechanism is then used for Pareto based optimization of the microchannels considering the two conflicting objectives ΔP and Nu. It is shown that some interesting and important relationships, serving as useful optimal design principles involved in the performance of microchannels, can be discovered by Pareto based multi-objective optimization of the obtained polynomial metamodels representing their heat transfer and flow characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modelling and the Pareto optimization approach

  15. The Successor Function and Pareto Optimal Solutions of Cooperative Differential Systems with Concavity. I

    DEFF Research Database (Denmark)

    Andersen, Kurt Munk; Sandqvist, Allan

    1997-01-01

    We investigate the domain of definition and the domain of values for the successor function of a cooperative differential system x'=f(t,x), where the coordinate functions are concave in x for any fixed value of t. Moreover, we give a characterization of a weakly Pareto optimal solution.

  16. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  17. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    Science.gov (United States)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
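The idea can be sketched on a toy bi-objective problem (the objective functions and grid below are invented for illustration): each stage solves a single-objective problem with a threshold-of-acceptability constraint on the other objective, and tightening the threshold drives each stage to a different Pareto optimum:

```python
# Toy bi-objective problem (both functions hypothetical, both minimized):
# f1(x) = x^2 and f2(x) = (x - 2)^2 over a discrete design grid.
xs = [i / 10 for i in range(31)]

def stage(threshold):
    """One single-objective stage: minimize f1 subject to the
    threshold-of-acceptability constraint f2(x) <= threshold."""
    feasible = [x for x in xs if (x - 2) ** 2 <= threshold]
    return min(feasible, key=lambda x: x * x)

# Tightening the threshold limits the area of search, so successive
# stages land on successive Pareto-optimal solutions.
pareto_points = [stage(t) for t in (4.0, 1.0, 0.25)]  # [0.0, 1.0, 1.5]
```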

  18. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS system for assisting network administrators on decision-making, specifically, on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM scalarization approach performs better than linear scalarization or GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
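Scalarization prior to learning can be illustrated with a toy example; the action names and reward vectors below are invented, and plain linear scalarization is shown rather than STOM:

```python
# Hypothetical defense actions with two-component rewards
# (security gain, availability); the weights encode the defender's
# preferred reward distribution.
rewards = {"patch": (0.9, 0.4), "isolate": (0.7, 0.2), "monitor": (0.5, 0.9)}
weights = (0.5, 0.5)

def scalarize(vector, w):
    # Linear scalarization: collapse a reward vector to one scalar
    # so a standard single-objective learner can rank actions.
    return sum(wi * vi for wi, vi in zip(w, vector))

best = max(rewards, key=lambda a: scalarize(rewards[a], weights))  # "monitor"
```

STOM replaces the weighted sum with a distance-to-aspiration-point criterion, which (unlike the linear form) can reach Pareto-optimal points on nonconvex parts of the frontier.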

  19. Household Labour Supply in Britain and Denmark: Some Interpretations Using a Model of Pareto Optimal Behaviour

    DEFF Research Database (Denmark)

    Barmby, Tim; Smith, Nina

    1996-01-01

    This paper analyses the labour supply behaviour of households in Denmark and Britain. It employs models in which the preferences of individuals within the household are explicitly represented. The households are then assumed to decide on their labour supply in a Pareto-Optimal fashion. Describing...

  20. Necessary and Sufficient Conditions for Pareto Optimality in Infinite Horizon Cooperative Differential Games

    NARCIS (Netherlands)

    Reddy, P.V.; Engwerda, J.C.

    2011-01-01

    In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for infinite horizon cooperative differential games. We consider games defined by non-autonomous and discounted autonomous systems. The obtained results are used to analyze the regular

  1. MULTI-OBJECTIVE OPTIMAL DESIGN OF GROUNDWATER REMEDIATION SYSTEMS: APPLICATION OF THE NICHED PARETO GENETIC ALGORITHM (NPGA). (R826614)

    Science.gov (United States)

    A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...

  2. Pareto-Optimal Evaluation of Ultimate Limit States in Offshore Wind Turbine Structural Analysis

    Directory of Open Access Journals (Sweden)

    Michael Muskulus

    2015-12-01

    Full Text Available The ultimate capacity of support structures is checked with extreme loads. This is straightforward when the limit state equations depend on a single load component, and it has become common to report maxima for each load component. However, if more than one load component is influential, e.g., both axial force and bending moments, it is not straightforward how to define an extreme load. The combination of univariate maxima can be too conservative, and many different combinations of load components can result in the worst value of the limit state equations. The use of contemporaneous load vectors is typically non-conservative. Therefore, in practice, limit state checks are done for each possible load vector, from each time step of a simulation. This is not feasible when performing reliability assessments and structural optimization, where additional, time-consuming computations are involved for each load vector. We therefore propose to use Pareto-optimal loads, which are a small set of loads that together represent all possible worst case scenarios. Simulations with two reference wind turbines show that this approach can be very useful for jacket structures, whereas the design of monopiles is often governed by the bending moment only. Even in this case, the approach might be useful when approaching the structural limits during optimization.

  3. Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives

    International Nuclear Information System (INIS)

    Warmflash, Aryeh; Siggia, Eric D; Francois, Paul

    2012-01-01

    The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input–output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria. (paper)

  4. Pareto evolution of gene networks: an algorithm to optimize multiple fitness objectives.

    Science.gov (United States)

    Warmflash, Aryeh; Francois, Paul; Siggia, Eric D

    2012-10-01

    The computational evolution of gene networks functions like a forward genetic screen to generate, without preconceptions, all networks that can be assembled from a defined list of parts to implement a given function. Frequently networks are subject to multiple design criteria that cannot all be optimized simultaneously. To explore how these tradeoffs interact with evolution, we implement Pareto optimization in the context of gene network evolution. In response to a temporal pulse of a signal, we evolve networks whose output turns on slowly after the pulse begins, and shuts down rapidly when the pulse terminates. The best performing networks under our conditions do not fall into categories such as feed forward and negative feedback that also encode the input-output relation we used for selection. Pareto evolution can more efficiently search the space of networks than optimization based on a single ad hoc combination of the design criteria.

  5. Using Pareto optimality to explore the topology and dynamics of the human connectome.

    Science.gov (United States)

    Avena-Koenigsberger, Andrea; Goñi, Joaquín; Betzel, Richard F; van den Heuvel, Martijn P; Griffa, Alessandra; Hagmann, Patric; Thiran, Jean-Philippe; Sporns, Olaf

    2014-10-05

    Graph theory has provided a key mathematical framework to analyse the architecture of human brain networks. This architecture embodies an inherently complex relationship between connection topology, the spatial arrangement of network elements, and the resulting network cost and functional performance. An exploration of these interacting factors and driving forces may reveal salient network features that are critically important for shaping and constraining the brain's topological organization and its evolvability. Several studies have pointed to an economic balance between network cost and network efficiency with networks organized in an 'economical' small-world favouring high communication efficiency at a low wiring cost. In this study, we define and explore a network morphospace in order to characterize different aspects of communication efficiency in human brain networks. Using a multi-objective evolutionary approach that approximates a Pareto-optimal set within the morphospace, we investigate the capacity of anatomical brain networks to evolve towards topologies that exhibit optimal information processing features while preserving network cost. This approach allows us to investigate network topologies that emerge under specific selection pressures, thus providing some insight into the selectional forces that may have shaped the network architecture of existing human brains.

  6. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.

  7. TU-C-17A-01: A Data-Based Development for Practical Pareto Optimality Assessment and Identification

    International Nuclear Information System (INIS)

    Ruan, D; Qi, S; DeMarco, J; Kupelian, P; Low, D

    2014-01-01

    Purpose: To develop an efficient Pareto optimality assessment scheme to support plan comparison and practical determination of best-achievable treatment plan goals. Methods: Pareto efficiency reflects the tradeoffs among competing target coverage and normal tissue sparing in multi-criterion optimization (MCO) based treatment planning. Assessing and understanding Pareto optimality provides insightful guidance for future planning. However, current MCO-driven Pareto estimation makes relaxed assumptions about the Pareto structure and insufficiently accounts for practical limitations in beam complexity, leading to performance upper bounds that may be unachievable. This work proposes an alternative data-driven approach that implicitly incorporates the practical limitations and identifies the Pareto frontier subset by eliminating dominated plans incrementally using the Edgeworth Pareto hull (EPH). The exactness of this elimination process also permits the development of a hierarchical procedure for speedup when the plan cohort is large, by partitioning the cohort and performing elimination in each subset before a final aggregated elimination. The developed algorithm was first tested in 2D and 3D, where accuracy can be reliably assessed. As a specific application, the algorithm was applied to compare systematic plan quality for the lower head-and-neck amongst 4 competing treatment modalities. Results: The algorithm agrees exactly with brute-force pairwise comparison and visual inspection in low dimensions. The hierarchical algorithm shows a sqrt(k)-fold speedup, with k being the number of data points in the plan cohort, demonstrating good efficiency enhancement for heavy testing tasks. Application to plan performance comparison showed the superiority of tomotherapy plans for the lower head-and-neck and revealed a potentially nonconvex Pareto frontier structure. Conclusion: An accurate and efficient scheme to identify the Pareto frontier from a plan cohort has been developed.
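The incremental elimination of dominated plans can be sketched as a running non-dominated archive; this is a simplified stand-in for the EPH-based procedure, with all objectives treated as minimized and plan names invented:

```python
def eliminate_dominated(plans):
    """Incrementally eliminate dominated plans, keeping a running
    archive of non-dominated ones (all objectives minimized here).
    A simplified stand-in for the EPH-based procedure."""
    archive = []
    for p in plans:
        # Skip p if some archived plan is at least as good everywhere.
        if any(all(a <= x for a, x in zip(q, p)) and q != p for q in archive):
            continue
        # Drop archived plans that p dominates, then admit p.
        archive = [q for q in archive
                   if not (all(x <= a for x, a in zip(p, q)) and p != q)]
        archive.append(p)
    return archive

# Toy plan cohort scored on two minimized criteria.
plans = [(2, 3), (1, 4), (3, 1), (2, 2), (4, 4)]
frontier = eliminate_dominated(plans)  # [(1, 4), (3, 1), (2, 2)]
```

The hierarchical variant described in the abstract would partition `plans`, run this elimination within each partition, then run it once more on the union of the surviving plans.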

  8. TU-C-17A-01: A Data-Based Development for Practical Pareto Optimality Assessment and Identification

    Energy Technology Data Exchange (ETDEWEB)

    Ruan, D; Qi, S; DeMarco, J; Kupelian, P; Low, D [UCLA Department of Radiation Oncology, Los Angeles, CA (United States)

    2014-06-15

    Purpose: To develop an efficient Pareto optimality assessment scheme to support plan comparison and practical determination of best-achievable treatment plan goals. Methods: Pareto efficiency reflects the tradeoffs among competing target coverage and normal tissue sparing in multi-criterion optimization (MCO) based treatment planning. Assessing and understanding Pareto optimality provides insightful guidance for future planning. However, current MCO-driven Pareto estimation makes relaxed assumptions about the Pareto structure and insufficiently accounts for practical limitations in beam complexity, leading to performance upper bounds that may be unachievable. This work proposes an alternative data-driven approach that implicitly incorporates the practical limitations and identifies the Pareto frontier subset by eliminating dominated plans incrementally using the Edgeworth Pareto hull (EPH). The exactness of this elimination process also permits the development of a hierarchical procedure for speedup when the plan cohort is large, by partitioning the cohort and performing elimination in each subset before a final aggregated elimination. The developed algorithm was first tested in 2D and 3D, where accuracy can be reliably assessed. As a specific application, the algorithm was applied to compare systematic plan quality for the lower head-and-neck amongst 4 competing treatment modalities. Results: The algorithm agrees exactly with brute-force pairwise comparison and visual inspection in low dimensions. The hierarchical algorithm shows a sqrt(k)-fold speedup, with k being the number of data points in the plan cohort, demonstrating good efficiency enhancement for heavy testing tasks. Application to plan performance comparison showed the superiority of tomotherapy plans for the lower head-and-neck and revealed a potentially nonconvex Pareto frontier structure. Conclusion: An accurate and efficient scheme to identify the Pareto frontier from a plan cohort has been developed.

  9. A new mechanism for maintaining diversity of Pareto archive in multi-objective optimization

    Czech Academy of Sciences Publication Activity Database

    Hájek, J.; Szöllös, A.; Šístek, Jakub

    2010-01-01

    Roč. 41, 7-8 (2010), s. 1031-1057 ISSN 0965-9978 R&D Projects: GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : multi-objective optimization * micro-genetic algorithm * diversity * Pareto archive Subject RIV: BA - General Mathematics Impact factor: 1.004, year: 2010 http://www.sciencedirect.com/science/article/pii/S0965997810000451

  11. Optimal Reinsurance Design for Pareto Optimum: From the Perspective of Multiple Reinsurers

    Directory of Open Access Journals (Sweden)

    Xing Rong

    2016-01-01

    Full Text Available This paper investigates optimal reinsurance strategies for an insurer which cedes the insured risk to multiple reinsurers. Assume that the insurer and every reinsurer apply the coherent risk measures. Then, we find out the necessary and sufficient conditions for the reinsurance market to achieve Pareto optimum; that is, every ceded-loss function and the retention function are in the form of “multiple layers reinsurance.”

  12. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Science.gov (United States)

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Identifying the preferred subset of enzymatic profiles in nonlinear kinetic metabolic models via multiobjective global optimization and Pareto filters.

    Directory of Open Access Journals (Sweden)

    Carlos Pozo

    Full Text Available Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study
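A Pareto filter for narrowing a front can be as simple as a normalized distance-to-ideal ("knee") heuristic. The sketch below is illustrative only and far cruder than the filters studied in the paper:

```python
def knee_filter(front):
    """Toy Pareto filter: normalize both (minimized) objectives to
    [0, 1] and keep the point closest to the ideal point (0, 0)."""
    f1s = [p[0] for p in front]
    f2s = [p[1] for p in front]
    lo1, span1 = min(f1s), max(f1s) - min(f1s)
    lo2, span2 = min(f2s), max(f2s) - min(f2s)

    def dist2(p):
        # Squared normalized distance to the ideal point.
        n1 = (p[0] - lo1) / span1
        n2 = (p[1] - lo2) / span2
        return n1 * n1 + n2 * n2

    return min(front, key=dist2)

# A toy Pareto front; the "knee" balances the two objectives.
front = [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]
preferred = knee_filter(front)  # (4, 4)
```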

  14. Identifying the preferred subset of enzymatic profiles in nonlinear kinetic metabolic models via multiobjective global optimization and Pareto filters.

    Science.gov (United States)

    Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano

    2012-01-01

    Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes the

  15. Predicting targeted drug combinations based on Pareto optimal patterns of coexpression network connectivity.

    Science.gov (United States)

    Penrod, Nadia M; Greene, Casey S; Moore, Jason H

    2014-01-01

    Molecularly targeted drugs promise a safer and more effective treatment modality than conventional chemotherapy for cancer patients. However, tumors are dynamic systems that readily adapt to these agents activating alternative survival pathways as they evolve resistant phenotypes. Combination therapies can overcome resistance but finding the optimal combinations efficiently presents a formidable challenge. Here we introduce a new paradigm for the design of combination therapy treatment strategies that exploits the tumor adaptive process to identify context-dependent essential genes as druggable targets. We have developed a framework to mine high-throughput transcriptomic data, based on differential coexpression and Pareto optimization, to investigate drug-induced tumor adaptation. We use this approach to identify tumor-essential genes as druggable candidates. We apply our method to a set of ER(+) breast tumor samples, collected before (n = 58) and after (n = 60) neoadjuvant treatment with the aromatase inhibitor letrozole, to prioritize genes as targets for combination therapy with letrozole treatment. We validate letrozole-induced tumor adaptation through coexpression and pathway analyses in an independent data set (n = 18). We find pervasive differential coexpression between the untreated and letrozole-treated tumor samples as evidence of letrozole-induced tumor adaptation. Based on patterns of coexpression, we identify ten genes as potential candidates for combination therapy with letrozole including EPCAM, a letrozole-induced essential gene and a target to which drugs have already been developed as cancer therapeutics. Through replication, we validate six letrozole-induced coexpression relationships and confirm the epithelial-to-mesenchymal transition as a process that is upregulated in the residual tumor samples following letrozole treatment. To derive the greatest benefit from molecularly targeted drugs it is critical to design combination

  16. A divide-and-conquer approach to determine the Pareto frontier for optimization of protein engineering experiments.

    Science.gov (United States)

    He, Lu; Friedman, Alan M; Bailey-Kellogg, Chris

    2012-03-01

    In developing improved protein variants by site-directed mutagenesis or recombination, there are often competing objectives that must be considered in designing an experiment (selecting mutations or breakpoints): stability versus novelty, affinity versus specificity, activity versus immunogenicity, and so forth. Pareto optimal experimental designs make the best trade-offs between competing objectives. Such designs are not "dominated"; that is, no other design is better than a Pareto optimal design for one objective without being worse for another objective. Our goal is to produce all the Pareto optimal designs (the Pareto frontier), to characterize the trade-offs and suggest designs most worth considering, but to avoid explicitly considering the large number of dominated designs. To do so, we develop a divide-and-conquer algorithm, Protein Engineering Pareto FRontier (PEPFR), that hierarchically subdivides the objective space, using appropriate dynamic programming or integer programming methods to optimize designs in different regions. This divide-and-conquer approach is efficient in that the number of divisions (and thus calls to the optimizer) is directly proportional to the number of Pareto optimal designs. We demonstrate PEPFR with three protein engineering case studies: site-directed recombination for stability and diversity via dynamic programming, site-directed mutagenesis of interacting proteins for affinity and specificity via integer programming, and site-directed mutagenesis of a therapeutic protein for activity and immunogenicity via integer programming. We show that PEPFR is able to effectively produce all the Pareto optimal designs, discovering many more designs than previous methods. The characterization of the Pareto frontier provides additional insights into the local stability of design choices as well as global trends leading to trade-offs between competing criteria. Copyright © 2011 Wiley Periodicals, Inc.
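For two objectives, the divide-and-conquer idea can be sketched with a constrained single-objective oracle queried once per subdivision, so the number of oracle calls tracks the number of Pareto-optimal designs. The names and toy integer objective space below are invented; PEPFR itself plugs dynamic or integer programming in as the oracle:

```python
def constrained_opt(designs, f1_bound):
    # Oracle: among designs with f1 <= bound, return the one with
    # the smallest f2 (stands in for PEPFR's DP/IP optimizer call).
    feasible = [d for d in designs if d[0] <= f1_bound]
    return min(feasible, key=lambda d: d[1]) if feasible else None

def subdivide(designs, lo, hi, found):
    """Recursively split the f1 interval between two known Pareto
    points lo and hi; each oracle call either reveals a new Pareto
    point or certifies the region holds none."""
    if hi[0] - lo[0] <= 1:
        return
    mid = (lo[0] + hi[0]) // 2
    p = constrained_opt(designs, mid)
    if p is not None and p not in (lo, hi) and p[1] < lo[1]:
        found.add(p)
        subdivide(designs, lo, p, found)
        subdivide(designs, p, hi, found)

# Toy integer objective space; (3, 8) and (5, 5) are dominated.
designs = [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1), (3, 8), (5, 5)]
lo, hi = (1, 9), (9, 1)  # extreme Pareto points: best f1, best f2
found = {lo, hi}
subdivide(designs, lo, hi, found)
# found == {(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)}
```

Dominated designs are never enumerated: each recursive call spends one oracle query, and queries that return an already-known point terminate their branch.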

  17. The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space

    Science.gov (United States)

    Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri

    2015-01-01

    When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336

  19. The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space.

    Directory of Open Access Journals (Sweden)

    Pablo Szekely

    2015-10-01

    Full Text Available When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes--phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass.

  20. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2017-04-01

    This study provides guidance that enables hydrological researchers to make probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. Hydrological model residual errors are well known to be heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find that the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of the raw residuals and reproducing zero flows. Recommendations are provided for researchers and practitioners seeking robust residual error schemes for practical work.
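    The Box-Cox schemes evaluated in this record transform flows so that residual errors become roughly constant-variance before a standard error model is applied. A minimal sketch of the transform on synthetic flows (the flow values and the choice lambda = 0.2 are illustrative, not taken from the study):

```python
import numpy as np

def box_cox(q, lam):
    """Box-Cox transform of (positive) flows; lam = 0 is the log scheme."""
    q = np.asarray(q, dtype=float)
    if lam == 0.0:
        return np.log(q)
    return (q**lam - 1.0) / lam

# Residuals are formed in transformed space, z(obs) - z(sim), so that
# errors on large flows no longer dominate the error variance.
obs = np.array([1.0, 5.0, 50.0, 200.0])
sim = np.array([1.2, 4.0, 60.0, 150.0])
residuals = box_cox(obs, 0.2) - box_cox(sim, 0.2)
print(residuals)
```

    Calibrating lambda, as the study does for some schemes, amounts to treating `lam` as an extra inference parameter rather than fixing it in advance.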

  1. Multi-objective genetic algorithm optimization of 2D- and 3D-Pareto fronts for vibrational quantum processes

    International Nuclear Information System (INIS)

    Gollub, C; De Vivie-Riedle, R

    2009-01-01

    A multi-objective genetic algorithm is applied to optimize picosecond laser fields, driving vibrational quantum processes. Our examples are state-to-state transitions and unitary transformations. The approach allows features of the shaped laser fields and of the excitation mechanisms to be controlled simultaneously with the quantum yield. Within the parameter range accessible to the experiment, we focus on short pulse durations and low pulse energies to optimize preferably robust laser fields. Multidimensional Pareto fronts for these conflicting objectives could be constructed. Comparison with previous work showed that the solutions from Pareto optimizations and from optimal control theory match very well.

  2. Evaluation of the optimal combinations of modulation factor and pitch for Helical TomoTherapy plans made with TomoEdge using Pareto optimal fronts.

    Science.gov (United States)

    De Kerf, Geert; Van Gestel, Dirk; Mommaerts, Lobke; Van den Weyngaert, Danielle; Verellen, Dirk

    2015-09-17

    Modulation factor (MF) and pitch have an impact on Helical TomoTherapy (HT) plan quality, and HT users mostly use vendor-recommended settings. This study analyses the effect of these two parameters on both plan quality and treatment time for plans made with TomoEdge planning software, using the concept of Pareto optimal fronts. More than 450 plans with different combinations of pitch [0.10-0.50] and MF [1.2-3.0] were produced. These HT plans, with a field width (FW) of 5 cm, were created for five head and neck patients, and homogeneity index, conformity index, dose-near-maximum (D2), and dose-near-minimum (D98) were analysed for the planning target volumes, as well as the mean dose and D2 for the most critical organs at risk. For every dose metric, the median value is plotted against treatment time. A Pareto-like method is used in the analysis to show how pitch and MF influence both treatment time and plan quality. For small pitches (≤0.20), MF does not influence treatment time. For larger pitches (≥0.25), in contrast, lowering MF decreases both treatment time and plan quality until the maximum gantry speed is reached. Beyond that point, treatment time saturates and only plan quality decreases further. The Pareto front analysis showed optimal combinations of pitch [0.23-0.45] and MF > 2.0 for a FW of 5 cm. Outside this range, plans become less optimal. As the vendor-recommended settings fall within this range, the use of these settings is validated.

  3. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    Science.gov (United States)

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of some criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most commonly used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different viewpoints. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them Pareto-optimal in the criteria needed by the user. In addition, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, as is usual in exchange algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP and, finally, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical importance. In addition, the approach allows the user to bring human insight into the problem, examining the evolved models and picking out the best-performing programs for further analysis.
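    The moving-average pre-processing step described in this record can be sketched as a trailing window filter; the window length and the flow values below are illustrative, not the study's actual settings:

```python
def moving_average(series, window):
    """Trailing simple moving average; a shorter window is used at the
    start of the record where fewer past samples are available."""
    smoothed = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

daily_flow = [2.0, 4.0, 6.0, 8.0]
print(moving_average(daily_flow, 3))  # [2.0, 3.0, 4.0, 6.0]
```

    Smoothing the input series this way damps the one-step "lagged prediction" behaviour that stand-alone data-driven streamflow models often exhibit.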

  5. A fast method for calculating reliable event supports in tree reconciliations via Pareto optimality.

    Science.gov (United States)

    To, Thu-Hien; Jacox, Edwin; Ranwez, Vincent; Scornavacca, Celine

    2015-11-14

    Given a gene tree and a species tree, reconciliation methods attempt to retrieve the macro-evolutionary events that best explain the discrepancies between the two tree topologies. The DTL parsimonious approach searches for a most parsimonious reconciliation between a gene tree and a (dated) species tree, considering four possible macro-evolutionary events (speciation, duplication, transfer, and loss) with specific costs. Unfortunately, many events are erroneously predicted due to errors in the input trees, inappropriate input cost values or because of the existence of several equally parsimonious scenarios. It is thus crucial to provide a measure of the reliability for predicted events. It has recently been proposed that the reliability of an event can be estimated via its frequency in the set of most parsimonious reconciliations obtained using a variety of reasonable input cost vectors. To compute such a support, a straightforward but time-consuming approach is to generate costs slightly departing from the original ones, independently compute the set of all most parsimonious reconciliations for each vector, and combine these sets a posteriori. Another proposed approach uses Pareto-optimality to partition cost values into regions which induce reconciliations with the same number of DTL events. The support of an event is then defined as its frequency in the set of regions. However, often, the number of regions is not large enough to provide reliable supports. We present here a method to compute event supports efficiently via a polynomial-sized graph, which can represent all reconciliations for several different costs. Moreover, two methods are proposed to take into account alternative input costs: either explicitly providing an input cost range or allowing a tolerance on the excess cost of a reconciliation. Our methods are faster than the region-based method, substantially faster than the sampling-costs approach, and have a higher event-prediction accuracy.

  6. Pareto-optimal reversed-phase chromatography separation of three insulin variants with a solubility constraint.

    Science.gov (United States)

    Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt

    2018-01-12

    With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives productivity and yield for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration.

  7. Pareto-optimal electricity tariff rates in the Republic of Armenia

    International Nuclear Information System (INIS)

    Kaiser, M.J.

    2000-01-01

    The economic impact of electricity tariff rates on the residential sector of Yerevan, Armenia, is examined. The effect of tariff design on revenue generation and equity measures is considered, and the combination of energy pricing and compensatory social policies which provides the best mix of efficiency and protection for poor households is examined. An equity measure is defined in terms of a cumulative distribution function which describes the percent of the population that spends x percent or less of their income on electricity consumption. An optimal (Pareto-efficient) tariff is designed based on the analysis of survey data and an econometric model, and the Armenian tariff rate effective 1 January 1997 to 15 September 1997 is shown to be non-optimal relative to this rate. 22 refs

  8. Mapping the Pareto optimal design space for a functionally deimmunized biotherapeutic candidate.

    Science.gov (United States)

    Salvat, Regina S; Parker, Andrew S; Choi, Yoonjoo; Bailey-Kellogg, Chris; Griswold, Karl E

    2015-01-01

    The immunogenicity of biotherapeutics can bottleneck development pipelines and poses a barrier to widespread clinical application. As a result, there is a growing need for improved deimmunization technologies. We have recently described algorithms that simultaneously optimize proteins for both reduced T cell epitope content and high-level function. In silico analysis of this dual objective design space reveals that there is no single global optimum with respect to protein deimmunization. Instead, mutagenic epitope deletion yields a spectrum of designs that exhibit tradeoffs between immunogenic potential and molecular function. The leading edge of this design space is the Pareto frontier, i.e. the undominated variants for which no other single design exhibits better performance in both criteria. Here, the Pareto frontier of a therapeutic enzyme has been designed, constructed, and evaluated experimentally. Various measures of protein performance were found to map a functional sequence space that correlated well with computational predictions. These results represent the first systematic and rigorous assessment of the functional penalty that must be paid for pursuing progressively more deimmunized biotherapeutic candidates. Given this capacity to rapidly assess and design for tradeoffs between protein immunogenicity and functionality, these algorithms may prove useful in augmenting, accelerating, and de-risking experimental deimmunization efforts.
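    The Pareto frontier described in this record, i.e. the set of undominated designs, can be computed for two criteria with a simple dominance filter. A minimal sketch with hypothetical (epitope score, activity) pairs, minimizing the first and maximizing the second; the numbers are illustrative only, not from the study:

```python
def pareto_frontier(designs):
    """Undominated designs for two criteria: minimize the epitope
    score (first element), maximize activity (second element)."""
    return [
        d for d in designs
        if not any(o[0] <= d[0] and o[1] >= d[1] and o != d for o in designs)
    ]

# Hypothetical (epitope score, activity) pairs for candidate variants:
designs = [(10, 0.9), (8, 0.7), (12, 0.95), (8, 0.8), (9, 0.6)]
print(sorted(pareto_frontier(designs)))  # [(8, 0.8), (10, 0.9), (12, 0.95)]
```

    Every point dropped by the filter is dominated: some other variant is at least as deimmunized and at least as active, so it never belongs on the frontier the study constructs experimentally.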

  9. Necessary and Sufficient Conditions for Pareto Optimality in Infinite Horizon Cooperative Differential Games - Replaced by CentER DP 2011-041

    NARCIS (Netherlands)

    Reddy, P.V.; Engwerda, J.C.

    2010-01-01

    In this article we derive necessary and sufficient conditions for the existence of Pareto optimal solutions for an N-player cooperative infinite-horizon differential game. First, we write the problem of finding Pareto candidates as solving N constrained optimal control subproblems.

  10. Visualising Pareto-optimal trade-offs helps move beyond monetary-only criteria for water management decisions

    Science.gov (United States)

    Hurford, Anthony; Harou, Julien

    2014-05-01

    Water-related ecosystem services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches are needed to help guide water resources management decisions which account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume-dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments impacts the system's trade-offs. During the multi-criteria search (optimisation), the performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.

  11. Pareto-Optimization of HTS CICC for High-Current Applications in Self-Field

    Directory of Open Access Journals (Sweden)

    Giordano Tomassetti

    2018-01-01

    Full Text Available The ENEA superconductivity laboratory developed a novel design for Cable-in-Conduit Conductors (CICCs) comprised of stacks of 2nd-generation REBCO coated conductors. In its original version, the cable was made up of 150 HTS tapes distributed in five slots, twisted along an aluminum core. In this work, taking advantage of a 2D finite-element model able to estimate the cable's current distribution in the cross-section, a multiobjective optimization procedure was implemented. The aim of the optimization was to simultaneously maximize both the engineering current density and the total current flowing inside the tapes when operating in self-field, by varying the cross-section layout. Since the optimization process involved both integer and real geometrical variables, an evolutionary search algorithm was necessary; within this multiobjective framework, the problem was approached numerically using a nonstandard, fast-converging optimization algorithm. By means of this algorithm, the Pareto frontiers for the different configurations were calculated, providing a powerful tool for the designer to achieve the desired preliminary operating conditions in terms of engineering current density and/or total current, depending on the specific application field, that is, power transmission cables and bus-bar systems.

  12. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    Science.gov (United States)

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, and disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such kinds of complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark MOPs and newly designed MOPs with complex POF shapes in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
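    Decomposition in the MOEA/D family typically scalarizes the MOP, for instance with the weighted Tchebycheff function, so that each weight vector defines one single-objective subproblem. A minimal sketch (the weight vectors, candidate objective vectors, and ideal point below are illustrative, not taken from the paper):

```python
def tchebycheff(f, weights, ideal):
    """Weighted Tchebycheff scalarization of objective vector f
    relative to the ideal point; one weight vector = one subproblem."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

# Each weight vector selects a different part of the Pareto-optimal front.
ideal = (0.0, 0.0)
candidates = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
for weights in [(0.25, 0.75), (0.5, 0.5), (0.75, 0.25)]:
    best = min(candidates, key=lambda f: tchebycheff(f, weights, ideal))
    print(weights, "->", best)
```

    Sweeping the weight vectors traces out the front point by point, which is why irregular POF shapes (long tails, disconnected regions) can leave some subproblems poorly covered, the issue the paper's two-phase strategy targets.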

  13. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Satyabrata [ORNL

    2014-01-01

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the estimates of the target scattering coefficients. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.

  14. Studies on generalized kinetic model and Pareto optimization of a product-driven self-cycling bioprocess.

    Science.gov (United States)

    Sun, Kaibiao; Kasperski, Andrzej; Tian, Yuan

    2014-10-01

    The aim of this study is the optimization of a product-driven self-cycling bioprocess and the presentation of a way to determine the best possible decision variables out of a set of alternatives based on the designed model. Initially, a product-driven generalized kinetic model, which allows a flexible choice of the most appropriate kinetics, is designed and analysed. The optimization problem is formulated as a bi-objective one, where maximization of biomass productivity and minimization of unproductive loss of substrate are the objective functions. Then, the Pareto fronts are calculated for exemplary kinetics. It is found that in the designed bioprocess, a decrease of the emptying/refilling fraction and an increase of the substrate feeding concentration cause an increase of the biomass productivity. An increase of the emptying/refilling fraction and a decrease of the substrate feeding concentration cause a decrease of the unproductive loss of substrate. The preferred solutions are calculated using the minimum-distance-from-an-ideal-solution method, along with proposals for their modification derived from a decision maker's reactions to the generated solutions.
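    The minimum-distance selection mentioned in this record picks the front point closest to the ideal (utopia) point. A minimal sketch on a hypothetical, already-normalized bi-objective front (both objectives scaled so that 1.0 is best; the values are illustrative, not from the study):

```python
import math

def closest_to_ideal(front, ideal):
    """Compromise solution: the Pareto-front point with the minimum
    Euclidean distance to the ideal point."""
    return min(front, key=lambda point: math.dist(point, ideal))

# Hypothetical normalized (productivity, substrate-use efficiency) points:
front = [(0.2, 0.95), (0.5, 0.8), (0.8, 0.4)]
print(closest_to_ideal(front, (1.0, 1.0)))  # (0.5, 0.8)
```

    Normalization matters: if the objectives were left on their raw scales, the one with the larger numeric range would dominate the distance and bias the selected compromise.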

  15. Statement of Problem of Pareto Frontier Management and Its Solution in the Analysis and Synthesis of Optimal Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    Full Text Available This article concerns multi-criteria optimization (MCO), which assumes that the quality criteria of a system's operation are independent, and specifies ways to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO, and one of the most important areas of research is obtaining the so-called Pareto-optimal options. The subject of the research is the Pareto front, also called the Pareto frontier. The article discusses front classifications by geometric representation for the two-criterion case and presents a mathematical description of front characteristics using gradients and their projections. A review of current domestic and foreign literature reveals that existing work on constructing the Pareto frontier has focused on conditions of uncertainty, on stochastic formulations, and on the unconstrained case, with topologies considered in both the two- and three-dimensional cases. The targets of modern applications are multi-agent systems and groups of players in differential games. However, none of the works considered addresses active management of the front. The objective of this article is to pose the Pareto frontier problem in a new formulation, namely with the active participation of the system developers and/or decision makers (DM) in the management of the Pareto frontier; such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of an object's control system: direct quality criteria for a closed-system model of the general oscillatory type; a specific two-loop aircraft control system using the angular-velocity and normal-acceleration loops; and integrated quality criteria.

  16. Characterizing the Incentive Compatible and Pareto Optimal Efficiency Space for Two Players, k Items, Public Budget and Quasilinear Utilities

    Directory of Open Access Journals (Sweden)

    Anat Lerner

    2014-04-01

    Full Text Available We characterize the efficiency space of deterministic, dominant-strategy incentive compatible, individually rational and Pareto-optimal combinatorial auctions in a model with two players and k nonidentical items. We examine a model with multidimensional types, private values and quasilinear preferences for the players, with one relaxation: one of the players is subject to a publicly known budget constraint. We show that if it is publicly known that the valuation for the largest bundle is less than the budget for at least one of the players, then the Vickrey-Clarke-Groves (VCG) mechanism uniquely fulfills the basic properties of being deterministic, dominant-strategy incentive compatible, individually rational and Pareto optimal. Our characterization of the efficiency space for deterministic budget-constrained combinatorial auctions is similar in spirit to that of Maskin (2000) for Bayesian single-item constrained-efficiency auctions and comparable with Ausubel and Milgrom (2002) for non-constrained combinatorial auctions.

  17. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    Science.gov (United States)

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape

  18. Modelling and Pareto optimization of mechanical properties of friction stir welded AA7075/AA5083 butt joints using neural network and particle swarm algorithm

    International Nuclear Information System (INIS)

    Shojaeefard, Mohammad Hasan; Behnagh, Reza Abdi; Akbari, Mostafa; Givi, Mohammad Kazem Besharati; Farhani, Foad

    2013-01-01

    Highlights: ► Defect-free friction stir welds have been produced for AA5083-O/AA7075-O. ► Back-propagation was sufficient for predicting hardness and tensile strength. ► A hybrid multi-objective algorithm is proposed to deal with this MOP. ► Multi-objective particle swarm optimization was used to find the Pareto solutions. ► TOPSIS is used to rank the given alternatives of the Pareto solutions. -- Abstract: Friction Stir Welding (FSW) has been successfully used to weld similar and dissimilar cast and wrought aluminium alloys, especially aircraft aluminium alloys, which generally present low weldability with traditional fusion welding processes. This paper focuses on the microstructural and mechanical properties of the Friction Stir Welding (FSW) of AA7075-O to AA5083-O aluminium alloys. Weld microstructures, hardness and tensile properties were evaluated in the as-welded condition. Tensile tests indicated that the mechanical properties of the joint were better than those of the base metals. An Artificial Neural Network (ANN) model was developed to simulate the correlation between the Friction Stir Welding parameters and mechanical properties. Performance of the ANN model was excellent and the model was employed to predict the ultimate tensile strength and hardness of the butt joint of AA7075–AA5083 as functions of weld and rotational speeds. Multi-objective particle swarm optimization was used to obtain the Pareto-optimal set. Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was applied to determine the best compromised solution.
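The TOPSIS step described here can be sketched generically: vector-normalise the objective values of the Pareto solutions, weight them, and rank each solution by its relative closeness to the ideal point. This is a textbook TOPSIS sketch, not the authors' implementation; the weld-property values and equal weights below are invented for illustration.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, columns = criteria.
    benefit[j] is True if criterion j is to be maximised."""
    n_crit = len(weights)
    # Vector-normalise each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal and anti-ideal points, per criterion direction.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, ideal)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical Pareto solutions: (tensile strength [MPa], hardness [HV]), both maximised.
sols = [[210.0, 75.0], [225.0, 70.0], [230.0, 68.0]]
scores = topsis_rank(sols, weights=[0.5, 0.5], benefit=[True, True])
best = max(range(len(sols)), key=scores.__getitem__)
```

Under equal weights the highest-scoring alternative is the chosen compromise; shifting the weights moves the choice along the front toward strength or hardness.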

  19. Pareto utility

    NARCIS (Netherlands)

    Ikefuji, M.; Laeven, R.J.A.; Magnus, J.R.; Muris, C.H.M.

    2013-01-01

    In searching for an appropriate utility function in the expected utility framework, we formulate four properties that we want the utility function to satisfy. We conduct a search for such a function, and we identify Pareto utility as a function satisfying all four desired properties. Pareto utility

  20. Pareto printsiip

    Index Scriptorium Estoniae

    2011-01-01

    On how the Italian economist Vilfredo Pareto arrived at his famous principle, and on the influence of this principle on modern management. According to the Pareto principle, the greater part of our activity does not bring us closer to the result but is a waste of time. Diagram

  1. Approximating convex Pareto surfaces in multiobjective radiotherapy planning

    International Nuclear Information System (INIS)

    Craft, David L.; Halabi, Tarek F.; Shih, Helen A.; Bortfeld, Thomas R.

    2006-01-01

    Radiotherapy planning involves inherent tradeoffs: the primary mission, to treat the tumor with a high, uniform dose, is in conflict with normal tissue sparing. We seek to understand these tradeoffs on a case-to-case basis, by computing for each patient a database of Pareto optimal plans. A treatment plan is Pareto optimal if there does not exist another plan which is better in every measurable dimension. The set of all such plans is called the Pareto optimal surface. This article presents an algorithm for computing well distributed points on the (convex) Pareto optimal surface of a multiobjective programming problem. The algorithm is applied to intensity-modulated radiation therapy inverse planning problems, and results of a prostate case and a skull base case are presented, in three and four dimensions, investigating tradeoffs between tumor coverage and critical organ sparing
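For a convex Pareto surface like the one considered here, points on the front can be generated by minimising weighted sums of the objectives over a spread of weights. The toy sketch below does this over a discrete set of hypothetical two-objective plan scores; it illustrates only the scalarisation idea, not the authors' algorithm (which additionally controls how well distributed the computed points are).

```python
def weighted_sum_scan(candidates, n_weights=5):
    """Trace Pareto-optimal points of a 2-objective minimisation problem by
    minimising w*f1 + (1-w)*f2 over a discrete candidate set for several w.
    For convex fronts this recovers front points; duplicate picks collapse."""
    picks = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        best = min(candidates, key=lambda f: w * f[0] + (1 - w) * f[1])
        if best not in picks:
            picks.append(best)
    return picks

# Hypothetical (tumour underdose, OAR dose) scores; (5.0, 5.0) is dominated
# and is correctly never selected.
cands = [(0.0, 9.0), (1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (9.0, 0.5), (5.0, 5.0)]
print(weighted_sum_scan(cands))
# → [(9.0, 0.5), (4.0, 1.0), (2.0, 2.0), (1.0, 4.0), (0.0, 9.0)]
```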

  2. Optimal beam margins in linac-based VMAT stereotactic ablative body radiotherapy: a Pareto front analysis for liver metastases.

    Science.gov (United States)

    Cilla, Savino; Ianiro, Anna; Deodato, Francesco; Macchia, Gabriella; Digesù, Cinzia; Valentini, Vincenzo; Morganti, Alessio G

    2017-11-27

    We explored the Pareto fronts mathematical strategy to determine the optimal block margin and prescription isodose for stereotactic body radiotherapy (SBRT) treatments of liver metastases using the volumetric-modulated arc therapy (VMAT) technique. Three targets (planning target volumes [PTVs] = 20, 55, and 101 cc) were selected. A single fraction dose of 26 Gy was prescribed (prescription dose [PD]). VMAT plans were generated for 3 different beam energies. Pareto fronts based on (1) different multileaf collimator (MLC) block margin around PTV and (2) different prescription isodose lines (IDL) were produced. For each block margin, the greatest IDL fulfilling the criteria (95% of PTV reached 100%) was considered as providing the optimal clinical plan for PTV coverage. Liver D mean , V7Gy, and V12Gy were used against the PTV coverage to generate the fronts. Gradient indexes (GI and mGI), homogeneity index (HI), and healthy liver irradiation in terms of D mean , V7Gy, and V12Gy were calculated to compare different plans. In addition, each target was also optimized with a full-inverse planning engine to obtain a direct comparison with anatomy-based treatment planning system (TPS) results. About 900 plans were calculated to generate the fronts. GI and mGI show a U-shaped behavior as a function of beam margin with minimal values obtained with a +1 mm MLC margin. For these plans, the IDL ranges from 74% to 86%. GI and mGI show also a V-shaped behavior with respect to HI index, with minimum values at 1 mm for all metrics, independent of tumor dimensions and beam energy. Full-inversed optimized plans reported worse results with respect to Pareto plans. In conclusion, Pareto fronts provide a rigorous strategy to choose clinical optimal plans in SBRT treatments. We show that a 1-mm MLC block margin provides the best results with regard to healthy liver tissue irradiation and steepness of dose fallout. Copyright © 2017 American Association of Medical Dosimetrists

  3. On quasistability radius of a vector trajectorial problem with a principle of optimality generalizing Pareto and lexicographic principles

    Directory of Open Access Journals (Sweden)

    Sergey E. Bukhtoyarov

    2005-05-01

    Full Text Available A multicriterion linear combinatorial problem with a parametric principle of optimality is considered. This principle is defined by a partitioning of the partial criteria into groups, with the Pareto preference relation within each group and the lexicographic preference relation between groups. Quasistability of the problem is investigated. This type of stability is a discrete analog of Hausdorff lower semi-continuity of the multiple-valued mapping that defines the choice function. A formula for the quasistability radius is derived for the case of the metric l∞. Some known results are stated as corollaries. Mathematics Subject Classification 2000: 90C05, 90C10, 90C29, 90C31.

  4. Optimal transmitter power of an intersatellite optical communication system with reciprocal Pareto fading.

    Science.gov (United States)

    Liu, Xian

    2010-02-10

    This paper shows that optical signal transmission over intersatellite links with swaying transmitters can be described as an equivalent fading model. In this model, the instantaneous signal-to-noise ratio is stochastic and follows the reciprocal Pareto distribution. With this model, we show that the transmitter power can be minimized, subject to a specified outage probability, by appropriately adjusting some system parameters, such as the transmitter gain.

  5. Design of a Circularly Polarized Galileo E6-Band Textile Antenna by Dedicated Multiobjective Constrained Pareto Optimization

    Directory of Open Access Journals (Sweden)

    Arnaut Dierck

    2015-01-01

    Full Text Available Designing textile antennas for real-life applications requires a design strategy that is able to produce antennas that are optimized over a wide bandwidth for often conflicting characteristics, such as impedance matching, axial ratio, efficiency, and gain, and, moreover, that is able to account for the variations that apply for the characteristics of the unconventional materials used in smart textile systems. In this paper, such a strategy, incorporating a multiobjective constrained Pareto optimization, is presented and applied to the design of a Galileo E6-band antenna with optimal return loss and wide-band axial ratio characteristics. Subsequently, different prototypes of the optimized antenna are fabricated and measured to validate the proposed design strategy.

  6. Evaluation of treatment plan quality of IMRT and VMAT with and without flattening filter using Pareto optimal fronts.

    Science.gov (United States)

    Lechner, Wolfgang; Kragl, Gabriele; Georg, Dietmar

    2013-12-01

    To investigate the differences in treatment plan quality of IMRT and VMAT with and without flattening filter using Pareto optimal fronts, for two treatment sites of different anatomic complexity. Pareto optimal fronts (POFs) were generated for six prostate and head-and-neck cancer patients by stepwise reduction of the constraint (during the optimization process) of the primary organ-at-risk (OAR). 9-static field IMRT and 360°-single-arc VMAT plans with flattening filter (FF) and without flattening filter (FFF) were compared. The volume receiving 5 Gy or more (V5 Gy) was used to estimate the low dose exposure. Furthermore, the number of monitor units (MUs) and measurements of the delivery time (T) were used to assess the efficiency of the treatment plans. A significant increase in MUs was found when using FFF-beams while the treatment plan quality was at least equivalent to the FF-beams. T was decreased by 18% for prostate for IMRT with FFF-beams and by 4% for head-and-neck cases, but increased by 22% and 16% for VMAT. A reduction of up to 5% of V5 Gy was found for IMRT prostate cases with FFF-beams. The evaluation of the POFs showed an at least comparable treatment plan quality of FFF-beams compared to FF-beams for both treatment sites and modalities. For smaller targets the advantageous characteristics of FFF-beams could be better exploited. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. TopN-Pareto Front Search

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-21

    The JMP Add-In TopN-PFS provides an automated tool for finding layered Pareto fronts to identify the top N solutions from an enumerated list of candidates subject to optimizing multiple criteria. The approach constructs the N layers of Pareto fronts, and then provides a suite of graphical tools to explore the alternatives based on different prioritizations of the criteria. The tool is designed to provide a set of alternatives from which the decision-maker can select the best option for their study goals.
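The layering idea can be sketched as a plain non-dominated-sorting peel: extract the non-dominated set, remove it, and repeat. This is a generic sketch (minimisation in all criteria, invented points), not the JMP add-in's own code.

```python
def dominates(a, b):
    """a dominates b: at least as good everywhere (lower is better), not equal."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def layered_fronts(points):
    """Peel successive Pareto fronts: layer 1 is the non-dominated set,
    layer 2 is what becomes non-dominated once layer 1 is removed, etc."""
    remaining = list(points)
    layers = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        layers.append(front)
        remaining = [p for p in remaining if p not in front]
    return layers

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(layered_fronts(pts))
# → [[(1, 4), (2, 2), (4, 1)], [(3, 3)], [(4, 4)]]
```

Keeping the first N layers yields the top-N candidate pool from which a decision-maker can choose.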

  8. Pareto optimality between width of central lobe and peak sidelobe intensity in the far-field pattern of lossless phase-only filters for enhancement of transverse resolution.

    Science.gov (United States)

    Mukhopadhyay, Somparna; Hazra, Lakshminarayan

    2015-11-01

    Resolution capability of an optical imaging system can be enhanced by reducing the width of the central lobe of the point spread function. Attempts to achieve the same by pupil plane filtering give rise to a concomitant increase in sidelobe intensity. The mutual exclusivity between these two objectives may be considered as a multiobjective optimization problem that does not have a unique solution; rather, a class of trade-off solutions called Pareto optimal solutions may be generated. Pareto fronts in the synthesis of lossless phase-only pupil plane filters to achieve superresolution with prespecified lower limits for the Strehl ratio are explored by using the particle swarm optimization technique.

  9. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques

    International Nuclear Information System (INIS)

    Ottosson, Rickard O.; Sjoestroem, David; Behrens, Claus F.; Karlsson, Anna; Engstroem, Per E.; Knoeoes, Tommy; Ceberg, Crister

    2009-01-01

    Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the other. A set of Pareto optimal solutions constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head and neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered
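The dominance relation defined in this abstract translates directly into code. The sketch below assumes both objectives are to be minimised (e.g. an OAR dose metric and a target-coverage shortfall); the numeric plan scores are invented for illustration.

```python
def dominates(a, b):
    """True if plan a is at least as good as plan b in every objective
    (lower is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Keep only the plans not dominated by any other plan."""
    return [p for p in plans if not any(dominates(q, p) for q in plans if q != p)]

# Hypothetical (OAR dose, target underdose) pairs for four candidate plans.
plans = [(10.0, 5.0), (8.0, 6.0), (12.0, 4.0), (11.0, 6.0)]
print(pareto_front(plans))
# → [(10.0, 5.0), (8.0, 6.0), (12.0, 4.0)]  ((11.0, 6.0) is dominated by (10.0, 5.0))
```

Sampling many plans per technique and extracting each set's front in this way is, in essence, how the per-technique fronts of this study are compared.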

  10. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques.

    Science.gov (United States)

    Ottosson, Rickard O; Engstrom, Per E; Sjöström, David; Behrens, Claus F; Karlsson, Anna; Knöös, Tommy; Ceberg, Crister

    2009-01-01

    Pareto optimality is a concept that formalises the trade-off between a given set of mutually contradicting objectives. A solution is said to be Pareto optimal when it is not possible to improve one objective without deteriorating at least one of the other. A set of Pareto optimal solutions constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics of a treatment planning system (TPS), treatment strategy or delivery technique, Pareto fronts for a given case are likely to differ. The aim of this study was to investigate the feasibility of using Pareto fronts as a comparative tool for TPSs, treatment strategies and delivery techniques. In order to sample Pareto fronts, multiple treatment plans with varying target conformity and dose sparing of OAR were created for a number of prostate and head & neck IMRT cases. The DVHs of each plan were evaluated with respect to target coverage and dose to relevant OAR. Pareto fronts were successfully created for all studied cases. The results did indeed follow the definition of the Pareto concept, i.e. dose sparing of the OAR could not be improved without target coverage being impaired or vice versa. Furthermore, various treatment techniques resulted in distinguished and well separated Pareto fronts. Pareto fronts may be used to evaluate a number of parameters within radiotherapy. Examples are TPS optimization algorithms, the variation between accelerators or delivery techniques and the degradation of a plan during the treatment planning process. The issue of designing a model for unbiased comparison of parameters with such large inherent discrepancies, e.g. different TPSs, is problematic and should be carefully considered.

  11. Spatial redistribution of irregularly-spaced Pareto fronts for more intuitive navigation and solution selection

    NARCIS (Netherlands)

    A. Bouter (Anton); K. Pirpinia (Kleopatra); T. Alderliesten (Tanja); P.A.N. Bosman (Peter)

    2017-01-01

    A multi-objective optimization approach is often followed by an a posteriori decision-making process, during which the most appropriate solution of the Pareto set is selected by a professional in the field. Conventional visualization methods do not correct for Pareto fronts with

  12. The Primary Experiments of an Analysis of Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine

    Science.gov (United States)

    Kudo, Fumiya; Yoshikawa, Tomohiro; Furuhashi, Takeshi

    Recently, the Multi-objective Genetic Algorithm, which is the application of the Genetic Algorithm to multi-objective optimization problems, has become a focus in the engineering design field. In this field, the analysis of design variables in the acquired Pareto solutions, which gives the designers useful knowledge in the applied problem, is important as well as the acquisition of advanced solutions. This paper proposes a new visualization method using Isomap which visualizes the geometric distances of solutions in the design variable space considering their distances in the objective space. The proposed method enables a user to analyze the design variables of the acquired solutions considering their relationship in the objective space. This paper applies the proposed method to the conceptual design optimization problem of a hybrid rocket engine and studies the effectiveness of the proposed method.

  13. Finding a pareto-optimal solution for multi-region models subject to capital trade and spillover externalities

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian [Potsdam-Institut fuer Klimafolgenforschung e.V., Potsdam (Germany); Eisenack, Klaus [Oldenburg Univ. (Germany). Dept. of Economics and Statistics

    2008-11-15

    In this paper we present an algorithm that deals with trade interactions within a multi-region model. In contrast to traditional approaches this algorithm is able to handle spillover externalities. Technological spillovers are expected to foster the diffusion of new technologies, which helps to lower the cost of climate change mitigation. We focus on technological spillovers which are due to capital trade. The algorithm for finding a Pareto-optimal solution in an intertemporal framework is embedded in a decomposed optimization process. The paper analyzes convergence and equilibrium properties of this algorithm. In the final part of the paper, we apply the algorithm to investigate possible impacts of technological spillovers. While benefits of technological spillovers are significant for the capital-importing region, benefits for the capital-exporting region depend on the type of regional disparities and the resulting specialization and terms-of-trade effects. (orig.)

  14. Numerical investigation of a dual-loop EGR split strategy using a split index and multi-objective Pareto optimization

    International Nuclear Information System (INIS)

    Park, Jungsoo; Song, Soonho; Lee, Kyo Seung

    2015-01-01

    Highlights: • Model-based control of dual-loop EGR system is performed. • EGR split index is developed to provide non-dimensional index for optimization. • EGR rates are calibrated using EGR split index at specific operating conditions. • Multi-objective Pareto optimization is performed to minimize NOx and BSFC. • Optimum split strategies are suggested with LP-rich dual-loop EGR at high load. - Abstract: A proposed dual-loop exhaust-gas recirculation (EGR) system that combines the features of high-pressure (HP) and low-pressure (LP) systems is considered a key technology for improving the combustion behavior of diesel engines. The fraction of HP and LP flows, known as the EGR split, for a given dual-loop EGR rate plays an important role in determining the engine performance and emission characteristics. Therefore, identifying the proper EGR split is important for the engine optimization and calibration processes, which affect the EGR response and deNOx efficiencies. The objective of this research was to develop a dual-loop EGR split strategy using numerical analysis and one-dimensional (1D) cycle simulation. A control system was modeled by coupling the 1D cycle simulation and the control logic. An EGR split index was developed to investigate the HP/LP split effects on the engine performance and emissions. Using the model-based control system, a multi-objective Pareto (MOP) analysis was used to minimize the NOx formation and fuel consumption through optimized engine operating parameters. The MOP analysis was performed using a response surface model extracted from Latin hypercube sampling as a fractional factorial design of experiment. By using an LP-rich dual-loop EGR, a high EGR rate was attained at low, medium, and high engine speeds, increasing the applicable load ranges compared to base conditions.

  15. Application of Pareto optimization method for ontology matching in nuclear reactor domain

    International Nuclear Information System (INIS)

    Meenachi, N. Madurai; Baba, M. Sai

    2017-01-01

    This article describes the need for ontology matching and the methods to achieve it. Efforts are put into the implementation of a semantic web based knowledge management system for the nuclear domain, which necessitated the development of methods for ontology matching. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching the ontology are also discussed. A Pareto based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms like the Jaro-Winkler distance, the Needleman-Wunsch algorithm, Bigram, Kullback-Leibler and cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching in diversity in the nuclear reactor domain, and the same was illustrated.

  16. Application of Pareto optimization method for ontology matching in nuclear reactor domain

    Energy Technology Data Exchange (ETDEWEB)

    Meenachi, N. Madurai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Planning and Human Resource Management Div.; Baba, M. Sai [Indira Gandhi Centre for Atomic Research, HBNI, Tamil Nadu (India). Resources Management Group

    2017-12-15

    This article describes the need for ontology matching and the methods to achieve it. Efforts are put into the implementation of a semantic web based knowledge management system for the nuclear domain, which necessitated the development of methods for ontology matching. In order to exchange information in a distributed environment, ontology mapping has been used. The constraints in matching the ontology are also discussed. A Pareto based ontology matching algorithm is used to find the similarity between two ontologies in the nuclear reactor domain. Algorithms like the Jaro-Winkler distance, the Needleman-Wunsch algorithm, Bigram, Kullback-Leibler and cosine divergence are employed to demonstrate ontology matching. A case study was carried out to analyse ontology matching in diversity in the nuclear reactor domain, and the same was illustrated.

  17. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    International Nuclear Information System (INIS)

    Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi

    2016-01-01

    In this research, for the first time, a new optimization method, i.e., the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of the Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (Keff) for gaining possibly longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor

  18. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gharari, Rahman [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi [Nuclear Engineering Dept, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2016-10-15

    In this research, for the first time, a new optimization method, i.e., the strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of the Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (Keff) for gaining possibly longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare the obtained results. In general, the results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  19. Exploring the Environment/Energy Pareto Optimal Front of an Office Room Using Computational Fluid Dynamics-Based Interactive Optimization Method

    Directory of Open Access Journals (Sweden)

    Kangji Li

    2017-02-01

    Full Text Available This paper is concerned with the development of a high-resolution and control-friendly optimization framework in enclosed environments that helps improve thermal comfort, indoor air quality (IAQ), and energy costs of the heating, ventilation and air conditioning (HVAC) system simultaneously. A computational fluid dynamics (CFD)-based optimization method which couples algorithms implemented in Matlab with CFD simulation is proposed. The key part of this method is a data interactive mechanism which efficiently passes parameters between CFD simulations and optimization functions. A two-person office room is modeled for the numerical optimization. The multi-objective evolutionary algorithm—the non-dominated sorting genetic algorithm II (NSGA-II)—is realized to explore the environment/energy Pareto front of the enclosed space. Performance analysis will demonstrate the effectiveness of the presented optimization method.
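NSGA-II, used above, keeps the evolving front well spread by means of a crowding distance. A minimal sketch of that measure follows (generic NSGA-II machinery, not the paper's Matlab/CFD coupling; the front points are invented):

```python
def crowding_distance(front):
    """NSGA-II crowding distance: for each point, sum over objectives of the
    normalised gap between its two neighbours when the front is sorted by that
    objective; boundary points get infinity so they are always retained."""
    n = len(front)
    n_obj = len(front[0])
    dist = [0.0] * n
    for j in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: no spread to measure
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][j] - front[order[k - 1]][j]) / (hi - lo)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (3.0, 2.5), (5.0, 1.0)]
print(crowding_distance(front))
# → [inf, 1.125, 1.25, inf]
```

Within a non-dominated layer, points with larger crowding distance are preferred, which is what pushes the population toward an evenly covered Pareto front.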

  20. Pareto front estimation for decision making.

    Science.gov (United States)

    Giagkiozis, Ioannis; Fleming, Peter J

    2014-01-01

    The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited for a particular problem, it is difficult, mainly due to the computational cost, to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm, for two-objective and three-objective problem instances.
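A crude way to see why extra candidate solutions can be generated from an existing two-objective Pareto set is linear interpolation between neighbouring front points. The sketch below is only a naive stand-in for the paper's estimation methodology: its midpoints merely approximate the true front, and reasonably so only when the front is continuous and roughly convex.

```python
def densify_front(front):
    """Insert midpoints between adjacent points of a two-objective front.
    The interpolants approximate, rather than belong to, the true front."""
    pts = sorted(front)  # order along the first objective
    out = []
    for a, b in zip(pts, pts[1:]):
        out.append(a)
        out.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
    out.append(pts[-1])
    return out

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(densify_front(front))
# → [(1.0, 4.0), (1.5, 3.0), (2.0, 2.0), (3.0, 1.5), (4.0, 1.0)]
```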

  1. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  2. Optimal design and management of chlorination in drinking water networks: a multi-objective approach using Genetic Algorithms and the Pareto optimality concept

    Science.gov (United States)

    Nouiri, Issam

    2017-11-01

    This paper presents the development of multi-objective Genetic Algorithms to optimize chlorination design and management in drinking water networks (DWN). Three objectives have been considered: the improvement of chlorination uniformity (healthy objective), and the minimization of the number of chlorine booster stations and of the injected chlorine mass (economic objectives). The problem has been decomposed into a medium-term and a short-term problem. The proposed methodology was tested on hypothetical and real DWNs. Results proved the ability of the developed optimization tool to identify the relationships between the healthy and economic objectives as Pareto fronts. The proposed approach was efficient in computing solutions that ensure better chlorination uniformity while requiring the lowest injected chlorine mass compared with other approaches. For the real DWN studied, chlorination optimization resulted in a marked improvement in free-chlorine-dosing uniformity and a meaningful reduction in chlorine mass, in comparison with conventional chlorination.

  3. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from traditional single-plant production to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and of the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.

  4. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide the improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created with the conventional trial-and-error treatment planning process, so it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. We therefore created a gold-standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). The 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients and then applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold-standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted minus achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients led only to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  5. Toward computational screening in heterogeneous catalysis: Pareto-optimal methanation catalysts

    DEFF Research Database (Denmark)

    Andersson, Martin; Bligaard, Thomas; Kustov, Arkadii

    2006-01-01

    Finding the solids that are the best catalysts for a given reaction is a daunting task due to the large number of combinations and structures of multicomponent surfaces. In addition, it is not only the reaction rate that needs to be optimized: the selectivity, durability, and cost must also be ta...

  6. Informed multi-objective decision-making in environmental management using Pareto optimality

    Science.gov (United States)

    Maureen C. Kennedy; E. David Ford; Peter Singleton; Mark Finney; James K. Agee

    2008-01-01

    Effective decision-making in environmental management requires the consideration of multiple objectives that may conflict. Common optimization methods place weights on the multiple objectives to aggregate them into a single value, neglecting valuable insight into the relationships among the objectives in the management problem.

  7. Research and Setting the Modified Algorithm "Predator-Prey" in the Problem of the Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2016-01-01

    Full Text Available We consider a class of algorithms for multi-objective optimization, Pareto-approximation algorithms, which presuppose the preliminary construction of a finite-dimensional approximation of the Pareto set, and thereby also of the Pareto front, of the problem. The article gives an overview of population-based and non-population-based Pareto-approximation algorithms, identifies their strengths and weaknesses, and presents the canonical "predator-prey" algorithm, pointing out its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm aimed at overcoming these drawbacks, and present the results of a broad study of the efficiency of these modifications. A peculiarity of the study is the use of quality indicators of the Pareto-approximation that previous publications have not used. In addition, we present the results of the meta-optimization of the modified algorithm, i.e. the determination of the optimal values of some of its free parameters. The study of the efficiency of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: the cardinality of the set of archive solutions, the uniformity of the archive solutions, and the computation time. By and large, the results have shown that the modified and meta-optimized algorithm achieves the same approximation quality as the basic algorithm, but with one order of magnitude fewer prey, with computational costs reduced proportionally.

  8. A bridge network maintenance framework for Pareto optimization of stakeholders/users costs

    International Nuclear Information System (INIS)

    Orcesi, Andre D.; Cremona, Christian F.

    2010-01-01

    For managing highway bridges, stakeholders require efficient and practical decision-making techniques. In a context of limited bridge management budgets, it is crucial to determine the most effective breakdown of financial resources over the different structures of a bridge network. Bridge management systems (BMSs) have been developed for this purpose. However, they generally rely on an individual approach: the influence of the position of bridges in the transportation network, and the consequences of inadequate service for network users due to maintenance actions or bridge failure, are not taken into consideration. Therefore, maintenance strategies obtained with current BMSs do not necessarily lead to an optimal level of service (LOS) of the bridge network for the users of the transportation network. Besides, the assessment of the structural performance of a highway bridge usually requires access to the geometrical and mechanical properties of its components. Such information might not be available for all structures in a bridge network for which managers try to schedule and prioritize maintenance strategies. On the contrary, visual inspections are performed regularly and this information is generally available for all structures of the bridge network. The objective of this paper is threefold: (i) to propose an advanced network-level bridge management system considering the position of each bridge in the transportation network, (ii) to use information obtained from visual inspections to assess the performance of bridges, and (iii) to compare optimal maintenance strategies, obtained with a genetic algorithm, when considering the interests of users and of the bridge owner either separately, as conflicting criteria, or simultaneously, as a common interest of the whole community. In each case, safety and serviceability aspects are taken into account in the model when determining optimal strategies. The theoretical and numerical developments are applied to a French bridge network.

  9. Multiobjective optimization of the inspection intervals of a nuclear safety system: A clustering-based framework for reducing the Pareto Front

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a framework is developed for identifying a limited number of representative solutions of a multiobjective optimization problem concerning the inspection intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are first clustered into 'families', which are then synthetically represented by a 'head of the family' solution. Three clustering methods are analyzed. Level Diagrams are then used to represent, analyse and interpret the Pareto Fronts reduced to their head-of-the-family solutions. Two decision situations are considered: without or with decision maker preferences, the latter implying the introduction of a scoring system to rank the solutions with respect to the different objectives: a fuzzy preference assignment is then employed to this purpose. The results of the application of the framework of analysis to the problem of optimizing the inspection intervals of a nuclear power plant safety system show that the clustering-based reduction maintains the Pareto Front shape and relevant characteristics, while making it easier for the decision maker to select the final solution.

  10. Level Diagrams analysis of Pareto Front for multiobjective system redundancy allocation

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2011-01-01

    Reliability-based and risk-informed design, operation, maintenance and regulation lead to multiobjective (multicriteria) optimization problems. In this context, the Pareto Front and Set found in a multiobjective optimality search provide a family of solutions among which the decision maker has to look for the best choice according to his or her preferences. Efficient visualization techniques for Pareto Front and Set analyses are needed for helping decision makers in the selection task. In this paper, we consider the multiobjective optimization of system redundancy allocation and use the recently introduced Level Diagrams technique for graphically representing the resulting Pareto Front and Set. Each objective and decision variable is represented on separate diagrams where the points of the Pareto Front and Set are positioned according to their proximity to ideally optimal points, as measured by a metric of normalized objective values. All diagrams are synchronized across all objectives and decision variables. On the basis of the analysis of the Level Diagrams, we introduce a procedure for reducing the number of solutions in the Pareto Front; from the reduced set of solutions, the decision maker can more easily identify his or her preferred solution.
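
Under our reading of the Level Diagrams technique (a sketch of the general idea, not the authors' implementation), each objective is min-max normalized over the front and every solution is scored by a norm of its normalized objective vector, so each diagram plots one objective or decision variable against this common proximity-to-ideal score:

```python
def level_diagram_scores(front, p=2):
    """For each point on a Pareto front (minimization), return the p-norm
    of its objectives after min-max normalization to [0, 1] per objective.

    Lower scores mean closer to the ideal (utopia) point; all diagrams
    share this vertical axis, which synchronizes them across objectives.
    """
    m = len(front[0])
    lo = [min(pt[i] for pt in front) for i in range(m)]
    hi = [max(pt[i] for pt in front) for i in range(m)]
    scores = []
    for pt in front:
        z = [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(pt, lo, hi)]
        scores.append(sum(abs(v) ** p for v in z) ** (1.0 / p))
    return scores

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
scores = level_diagram_scores(front)
# the balanced middle solution scores closest to the ideal point
```

Plotting each objective (and each decision variable) against these scores gives one diagram per axis, which is the visual layout the record above describes.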

  11. Multi-objective component sizing of a power-split plug-in hybrid electric vehicle powertrain using Pareto-based natural optimization machines

    Science.gov (United States)

    Mozaffari, Ahmad; Vajedi, Mahyar; Chehresaz, Maryyeh; Azad, Nasser L.

    2016-03-01

    The urgent need to meet increasingly tight environmental regulations and new fuel economy requirements has motivated system science researchers and automotive engineers to take advantage of emerging computational techniques to further advance hybrid electric vehicle and plug-in hybrid electric vehicle (PHEV) designs. In particular, research has focused on vehicle powertrain system design optimization, to reduce the fuel consumption and total energy cost while improving the vehicle's driving performance. In this work, two different natural optimization machines, namely the synchronous self-learning Pareto strategy and the elitist non-dominated sorting genetic algorithm, are implemented for component sizing of a specific power-split PHEV platform with a Toyota plug-in Prius as the baseline vehicle. To do this, a high-fidelity model of the Toyota plug-in Prius is employed for the numerical experiments using the Autonomie simulation software. Based on the simulation results, it is demonstrated that Pareto-based algorithms can successfully optimize the design parameters of the vehicle powertrain.

  12. MO-G-304-04: Generating Well-Dispersed Representations of the Pareto Front for Multi-Criteria Optimization in Radiation Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Kirlik, G; Zhang, H [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To present a novel multi-criteria optimization (MCO) solution approach that generates a well-dispersed representation of the Pareto front for radiation treatment planning. Methods: Different algorithms have been proposed and implemented in commercial planning software to generate MCO plans for external-beam radiation therapy. These algorithms consider convex optimization problems. We propose a grid-based algorithm to generate well-dispersed treatment plans over the Pareto front. Our method is able to handle nonconvexity in the problem in order to deal with dose-volume objectives/constraints and biological objectives such as equivalent uniform dose (EUD), tumor control probability (TCP), normal tissue complication probability (NTCP), etc. In addition, our algorithm is able to provide a single MCO plan when clinicians are targeting narrow bounds of objectives for patients. In this situation, usually none of the generated plans is within the bounds and a solution is difficult to identify via manual navigation. We use the subproblem formulation of the grid-based algorithm to obtain a plan within the specified bounds. The subproblem aims to generate a solution that maps into the rectangle defined by the bounds; if such a solution does not exist, it generates the solution closest to the rectangle. We tested our method on 10 locally advanced head and neck cancer cases. Results: 8 objectives were used, including 3 objectives for the primary, high-risk and low-risk target volumes, and 5 objectives, one for each of the organs-at-risk (OARs): the two parotids, spinal cord, brain stem and oral cavity. Given tight bounds, uniform dose was achieved for all targets while as much as 26% improvement was achieved in OAR sparing compared with clinical plans without MCO and a previously proposed MCO method. Conclusion: Our method is able to obtain well-dispersed treatment plans that attain a better approximation of convex and nonconvex Pareto fronts. Single treatment plan can

  13. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning

    Energy Technology Data Exchange (ETDEWEB)

    Teichert, K; Suess, P; Serna, J I; Monz, M; Kuefer, K H [Department of Optimization, Fraunhofer Institute for Industrial Mathematics (ITWM), Fraunhofer Platz 1, 67663 Kaiserslautern (Germany); Thieke, C, E-mail: katrin.teichert@itwm.fhg.de [Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)

    2011-06-21

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head neck cases. In all cases no configuration is dominating its competitor over the whole Pareto set. For example, in one of the head neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.

  14. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning.

    Science.gov (United States)

    Teichert, K; Süss, P; Serna, J I; Monz, M; Küfer, K H; Thieke, C

    2011-06-21

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head neck cases. In all cases no configuration is dominating its competitor over the whole Pareto set. For example, in one of the head neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g., photons versus protons) than with the classical method of comparing single treatment plans.

  15. Comparative analysis of Pareto surfaces in multi-criteria IMRT planning

    International Nuclear Information System (INIS)

    Teichert, K; Suess, P; Serna, J I; Monz, M; Kuefer, K H; Thieke, C

    2011-01-01

    In the multi-criteria optimization approach to IMRT planning, a given dose distribution is evaluated by a number of convex objective functions that measure tumor coverage and sparing of the different organs at risk. Within this context optimizing the intensity profiles for any fixed set of beams yields a convex Pareto set in the objective space. However, if the number of beam directions and irradiation angles are included as free parameters in the formulation of the optimization problem, the resulting Pareto set becomes more intricate. In this work, a method is presented that allows for the comparison of two convex Pareto sets emerging from two distinct beam configuration choices. For the two competing beam settings, the non-dominated and the dominated points of the corresponding Pareto sets are identified and the distance between the two sets in the objective space is calculated and subsequently plotted. The obtained information enables the planner to decide if, for a given compromise, the current beam setup is optimal. He may then re-adjust his choice accordingly during navigation. The method is applied to an artificial case and two clinical head neck cases. In all cases no configuration is dominating its competitor over the whole Pareto set. For example, in one of the head neck cases a seven-beam configuration turns out to be superior to a nine-beam configuration if the highest priority is the sparing of the spinal cord. The presented method of comparing Pareto sets is not restricted to comparing different beam angle configurations, but will allow for more comprehensive comparisons of competing treatment techniques (e.g. photons versus protons) than with the classical method of comparing single treatment plans.

  16. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto- Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B [University Medical Center Utrecht, Utrecht (Netherlands); Breedveld, S; Sharfo, A; Heijmen, B [Erasmus University Medical Center Rotterdam, Rotterdam (Netherlands)

    2016-06-15

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan

  17. SU-F-J-105: Towards a Novel Treatment Planning Pipeline Delivering Pareto- Optimal Plans While Enabling Inter- and Intrafraction Plan Adaptation

    International Nuclear Information System (INIS)

    Kontaxis, C; Bol, G; Lagendijk, J; Raaymakers, B; Breedveld, S; Sharfo, A; Heijmen, B

    2016-01-01

    Purpose: To develop a new IMRT treatment planning methodology suitable for the new generation of MR-linear accelerator machines. The pipeline is able to deliver Pareto-optimal plans and can be utilized for conventional treatments as well as for inter- and intrafraction plan adaptation based on real-time MR-data. Methods: A Pareto-optimal plan is generated using the automated multicriterial optimization approach Erasmus-iCycle. The resulting dose distribution is used as input to the second part of the pipeline, an iterative process which generates deliverable segments that target the latest anatomical state and gradually converges to the prescribed dose. This process continues until a certain percentage of the dose has been delivered. Under a conventional treatment, a Segment Weight Optimization (SWO) is then performed to ensure convergence to the prescribed dose. In the case of inter- and intrafraction adaptation, post-processing steps like SWO cannot be employed due to the changing anatomy. This is instead addressed by transferring the missing/excess dose to the input of the subsequent fraction. In this work, the resulting plans were delivered on a Delta4 phantom as a final Quality Assurance test. Results: A conventional static SWO IMRT plan was generated for two prostate cases. The sequencer faithfully reproduced the input dose for all volumes of interest. For the two cases the mean relative dose difference of the PTV between the ideal input and sequenced dose was 0.1% and −0.02% respectively. Both plans were delivered on a Delta4 phantom and passed the clinical Quality Assurance procedures by achieving 100% pass rate at a 3%/3mm gamma analysis. Conclusion: We have developed a new sequencing methodology capable of online plan adaptation. In this work, we extended the pipeline to support Pareto-optimal input and clinically validated that it can accurately achieve these ideal distributions, while its flexible design enables inter- and intrafraction plan

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
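
The paper's central observation, that different weighted-sum goodness-of-fit schemes can select different "best-fitting" input sets while the Pareto frontier contains them all, can be sketched as follows (the target errors and weights below are hypothetical, not taken from the paper):

```python
def weighted_best(errors, weights):
    """Index of the input set minimizing a weighted-sum GOF score
    (lower per-target error is better)."""
    scores = [sum(w * e for w, e in zip(weights, errs)) for errs in errors]
    return scores.index(min(scores))

# hypothetical per-target fit errors for three candidate input sets
# on two calibration targets; all three are mutually non-dominated
errors = [[0.10, 0.30], [0.20, 0.10], [0.15, 0.15]]

best_a = weighted_best(errors, [0.9, 0.1])  # weighting target 1 heavily
best_b = weighted_best(errors, [0.1, 0.9])  # weighting target 2 heavily
# best_a != best_b: the choice of weights alone changes which input set
# is declared "best-fitting", whereas the Pareto frontier retains all three
```

This is the motivation for the frontier approach: it defers the weighting decision entirely and keeps every input set that is defensible under some trade-off among the targets.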

  19. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex and insufficiently interpretable, and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, on four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly extended types of the Re-RX algorithm and high-performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX constitute a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.

  20. Optimization of ultrasonic arrays design and setting using a differential evolution

    International Nuclear Information System (INIS)

    Puel, B.; Chatillon, S.; Calmon, P.; Lesselier, D.

    2011-01-01

    Optimization of both the design and the settings of phased arrays is not straightforward when performed manually via parametric studies. An optimization method based on an evolutionary algorithm and numerical simulation is proposed and evaluated. The Randomized Adaptive Differential Evolution algorithm has been adapted to meet the specificities of non-destructive testing applications. In particular, multi-objective problems are addressed by implementing the concept of Pareto-optimal sets of solutions. The algorithm has been implemented and connected to the ultrasonic simulation modules of the CIVA software, used as the forward model. The efficiency of the method is illustrated on two realistic cases of application: optimization of the position and delay laws of a flexible array inspecting a nozzle, treated as a mono-objective problem; and optimization of the design of a surrounded array and its delay laws, treated as a constrained bi-objective problem. (authors)

  1. Searching for the Pareto frontier in multi-objective protein design.

    Science.gov (United States)

    Nanda, Vikas; Belure, Sandeep V; Shir, Ofer M

    2017-08-01

The goal of protein engineering and design is to identify sequences that adopt three-dimensional structures of desired function. Often, this is treated as a single-objective optimization problem, identifying the sequence-structure solution with the lowest computed free energy of folding. However, many design problems are multi-state, multi-specificity, or otherwise require concurrent optimization of multiple objectives. There may be tradeoffs among objectives, where improving one feature requires compromising another. The challenge lies in determining solutions that are part of the Pareto optimal set: designs where no further improvement can be achieved in any of the objectives without degrading one of the others. Pareto optimality problems are found in all areas of study, from economics to engineering to biology, and computational methods have been developed specifically to identify the Pareto frontier. We review progress in multi-objective protein design, the development of Pareto optimization methods, and present a specific case study using multi-objective optimization methods to model the tradeoff among three parameters (stability, specificity, and complexity) of a set of interacting synthetic collagen peptides.
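The definition above, that a design is Pareto optimal when no objective can improve without degrading another, translates directly into a dominance filter over a finite set of candidate objective vectors. A minimal sketch, assuming all objectives are to be minimized:

```python
def dominates(u, v):
    """True if u Pareto-dominates v (minimization): u is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy three-objective scores (e.g. stability, specificity, complexity):
pts = [(1.0, 2.0, 3.0), (2.0, 1.0, 3.0), (2.0, 2.0, 4.0), (0.5, 3.0, 3.0)]
front = pareto_front(pts)  # (2.0, 2.0, 4.0) is dominated and drops out
```

The quadratic pairwise scan is fine for small candidate sets; dedicated methods are needed once the search space is a full sequence space.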

  2. Solving multi-objective job shop problem using nature-based algorithms: new Pareto approximation features

    Directory of Open Access Journals (Sweden)

    Jarosław Rudy

    2015-01-01

Full Text Available In this paper the job shop scheduling problem (JSP) with minimizing two criteria simultaneously is considered. JSP is a frequently used model in real-world applications of combinatorial optimization. Multi-objective job shop problems (MOJSP) have rarely been studied. We implement and compare two multi-agent nature-based methods, namely ant colony optimization (ACO) and a genetic algorithm (GA), for MOJSP. Both of these methods employ a technique taken from multi-criteria decision analysis in order to establish a ranking of solutions. ACO and GA differ in the method of keeping information about previously found solutions and their quality, which affects the course of the search. As a result, new features of the Pareto approximations provided by said algorithms are observed: aside from the slight superiority of the ACO method, the Pareto frontier approximations provided by both methods are disjoint sets. Thus, both methods can be used to search mutually exclusive areas of the Pareto frontier.

  3. Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.

    Science.gov (United States)

    Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O

    2016-06-01

We consider the problem of identifying patterns in a data set that exhibit anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures is not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance in experiments with synthetic and real data sets.
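The Pareto depth notion can be sketched by nondominated sorting: peel off the first Pareto front of the dissimilarity vectors, assign it depth 1, remove it, and repeat. This is a simplified illustration of the depth concept, not the authors' PDA implementation:

```python
def _dominates(u, v):
    """Weak Pareto dominance for minimization: u is no worse everywhere
    and differs from v somewhere."""
    return u != v and all(a <= b for a, b in zip(u, v))

def pareto_depths(points):
    """Assign each vector its Pareto depth: 1 for the first nondominated
    front, 2 for the front found after removing it, and so on."""
    remaining = set(range(len(points)))
    depth = {}
    d = 1
    while remaining:
        front = {i for i in remaining
                 if not any(_dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            depth[i] = d
        remaining -= front
        d += 1
    return [depth[i] for i in range(len(points))]

# Dissimilarity pairs for five samples under two criteria (toy values):
depths = pareto_depths([(1, 1), (2, 2), (3, 3), (1, 3), (3, 1)])
```

Samples whose dissimilarity vectors land on deep fronts are far from the bulk of the training data under every criterion simultaneously, which is the intuition behind flagging them as anomalous.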

  4. Computing gap free Pareto front approximations with stochastic search algorithms.

    Science.gov (United States)

    Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali

    2010-01-01

Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation (which are entirely determined by the archiving strategy and the value of epsilon) have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we aim in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.
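The epsilon-dominance archiving idea can be illustrated with a toy archive update. The sketch below uses the common additive definition (u epsilon-dominates v when u_i - eps <= v_i for all i, minimization) and is illustrative only, not one of the archiving strategies proposed in the paper:

```python
def eps_dominates(u, v, eps=0.1):
    """Additive epsilon-dominance (minimization): u, shifted down by eps
    in every objective, is still no worse than v everywhere."""
    return all(a - eps <= b for a, b in zip(u, v))

def update_archive(archive, p, eps=0.1):
    """Reject p if some archived point eps-dominates it; otherwise add p
    and drop archived points that p eps-dominates."""
    if any(eps_dominates(a, p, eps) for a in archive):
        return archive
    return [a for a in archive if not eps_dominates(p, a, eps)] + [p]

archive = []
for p in [(1.0, 2.0), (1.05, 1.95), (0.5, 3.0), (2.0, 0.5)]:
    archive = update_archive(archive, p)
# (1.05, 1.95) is eps-dominated by (1.0, 2.0) and never enters the archive.
```

Because every accepted point "blocks" an eps-neighborhood, the archive stays finite while still covering the front up to resolution eps; the gap-free strategies of the paper refine how that coverage is maintained.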

  5. Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

    OpenAIRE

    Savsani, Vimal; Patel, Vivek; Gadhvi, Bhargav; Tawhid, Mohamed

    2017-01-01

    Most of the modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding dis...

  6. Pareto fronts in clinical practice for pinnacle.

    Science.gov (United States)

    Janssen, Tomas; van Kesteren, Zdenko; Franssen, Gijs; Damen, Eugène; van Vliet, Corine

    2013-03-01

Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Generating 3000 plans for 10 patients in parallel requires on average 96 hours for IMRT and 483 hours for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Pareto Fronts in Clinical Practice for Pinnacle

    International Nuclear Information System (INIS)

    Janssen, Tomas; Kesteren, Zdenko van; Franssen, Gijs; Damen, Eugène; Vliet, Corine van

    2013-01-01

Purpose: Our aim was to develop a framework to objectively perform treatment planning studies using Pareto fronts. The Pareto front represents all optimal possible tradeoffs among several conflicting criteria and is an ideal tool with which to study the possibilities of a given treatment technique. The framework should require minimal user interaction and should resemble and be applicable to daily clinical practice. Methods and Materials: To generate the Pareto fronts, we used the native scripting language of Pinnacle(3) (Philips Healthcare, Andover, MA). The framework generates thousands of plans automatically from which the Pareto front is generated. As an example, the framework is applied to compare intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy (VMAT) for prostate cancer patients. For each patient and each technique, 3000 plans are generated, resulting in a total of 60,000 plans. The comparison is based on 5-dimensional Pareto fronts. Results: Generating 3000 plans for 10 patients in parallel requires on average 96 hours for IMRT and 483 hours for VMAT. Using VMAT, compared to IMRT, the maximum dose of the boost PTV was reduced by 0.4 Gy (P=.074), the mean dose in the anal sphincter by 1.6 Gy (P=.055), the conformity index of the 95% isodose (CI(95%)) by 0.02 (P=.005), and the rectal wall V(65 Gy) by 1.1% (P=.008). Conclusions: We showed the feasibility of automatically generating Pareto fronts with Pinnacle(3). Pareto fronts provide a valuable tool for performing objective comparative treatment planning studies. We compared VMAT with IMRT in prostate patients and found VMAT had a dosimetric advantage over IMRT.

  8. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    Science.gov (United States)

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
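A standard ingredient of such fully polynomial-time approximation schemes is ε-grid pruning: objective vectors are snapped onto a multiplicative (1+ε) grid and only one representative per cell is kept, which bounds the number of stored trade-offs polynomially in 1/ε. A generic sketch of that pruning step (our own illustration; the paper's DP on tree-structured networks is more involved):

```python
import math

def prune(points, eps):
    """Snap each (positive) objective vector onto a multiplicative
    (1+eps) grid and keep one representative per grid cell. Any discarded
    vector is within a (1+eps) factor of some kept vector in every
    objective, so the surviving set is a (1+eps)-approximation."""
    kept = {}
    for p in points:
        cell = tuple(math.floor(math.log(x, 1 + eps)) for x in p)
        kept.setdefault(cell, p)
    return list(kept.values())

# Two nearly identical trade-offs collapse into one grid cell:
reps = prune([(1.0, 100.0), (1.001, 99.9), (50.0, 2.0)], eps=0.1)
```

Applying this pruning after every DP merge keeps the per-node tables polynomially small while propagating only a bounded approximation error.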

  9. The Genetic-Algorithm-Based Normal Boundary Intersection (GANBI) Method; An Efficient Approach to Pareto Multiobjective Optimization for Engineering Design

    Science.gov (United States)

    2006-05-15

[Abstract not available: the extracted text consists of fragmented references, citing surveys of evolutionary approaches to multiobjective optimal design by Van Veldhuizen ("Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations," Ph.D. Dissertation, Air Force Institute of Technology, 1999), Van Veldhuizen and Lamont, and Zitzler and Thiele, as well as Goldberg's Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, Boston, 1989).]

  10. Pareto navigation: algorithmic foundation of interactive multi-criteria IMRT planning.

    Science.gov (United States)

    Monz, M; Küfer, K H; Bortfeld, T R; Thieke, C

    2008-02-21

    Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far.

  11. Pareto navigation-algorithmic foundation of interactive multi-criteria IMRT planning

    International Nuclear Information System (INIS)

    Monz, M; Kuefer, K H; Bortfeld, T R; Thieke, C

    2008-01-01

Inherently, IMRT treatment planning involves compromising between different planning goals. Multi-criteria IMRT planning directly addresses this compromising and thus makes it more systematic. Usually, several plans are computed from which the planner selects the most promising following a certain procedure. Applying Pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan. Pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection' and 'restriction'. The former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans. They are realized as optimization problems on the so-called plan bundle, a set constructed from pre-computed plans. They can be approximately reformulated so that their solution time is a small fraction of a second. Thus, the user can be provided with immediate feedback regarding his or her decisions. Pareto navigation was implemented in the MIRA navigator software and allows real-time manipulation of the current plan and the set of considered plans. The changes are triggered by simple mouse operations on the so-called navigation star and lead to real-time updates of the navigation star and the dose visualizations. Since any Pareto-optimal plan in the plan bundle can be found with just a few navigation operations the MIRA navigator allows a fast and directed plan determination. Besides, the concept allows for a refinement of the plan bundle, thus offering a middle course between single plan computation and multi-criteria optimization. Pareto navigation offers so far unmatched real-time interactions, ease of use and plan variety, setting it apart from the multi-criteria IMRT planning methods proposed so far.

  12. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
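The opening observation, that minimizing a fixed nonnegative combination of objectives lands on a single Pareto point and that varying the combination traces a path along the front, can be made concrete with a toy convex problem (our own illustrative example, not the SVM-based MT-MKL framework):

```python
def scalarized_min(w):
    """Closed-form minimizer of w*x**2 + (1 - w)*(x - 1)**2 over x:
    setting the derivative 2*w*x + 2*(1 - w)*(x - 1) to zero
    gives x = 1 - w."""
    return 1.0 - w

# Sweeping the weight from 0 to 1 traces a path along the Pareto front
# between the minimizers of the two individual objectives:
path = [scalarized_min(w / 10.0) for w in range(11)]
```

The plain average (w = 0.5) is just one point on this path; the framework in the record optimizes along such a path rather than committing to a single weighting.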

  13. Pareto joint inversion of 2D magnetotelluric and gravity data

    Science.gov (United States)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description using a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was properly adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires a lot of proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of joint inversion problems. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can highly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining of the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. Presented results were obtained for synthetic models, imitating real geological conditions, where

  14. An extension of the directed search domain algorithm to bilevel optimization

    Science.gov (United States)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  15. Trade-off between learning and exploitation: the Pareto-optimal versus evolutionarily stable learning schedule in cumulative cultural evolution.

    Science.gov (United States)

    Wakano, Joe Yuichiro; Miura, Chiaki

    2014-02-01

    Inheritance of culture is achieved by social learning and improvement is achieved by individual learning. To realize cumulative cultural evolution, social and individual learning should be performed in this order in one's life. However, it is not clear whether such a learning schedule can evolve by the maximization of individual fitness. Here we study optimal allocation of lifetime to learning and exploitation in a two-stage life history model under a constant environment. We show that the learning schedule by which high cultural level is achieved through cumulative cultural evolution is unlikely to evolve as a result of the maximization of individual fitness, if there exists a trade-off between the time spent in learning and the time spent in exploiting the knowledge that has been learned in earlier stages of one's life. Collapse of a fully developed culture is predicted by a game-theoretical analysis where individuals behave selfishly, e.g., less learning and more exploiting. The present study suggests that such factors as group selection, the ability of learning-while-working ("on the job training"), or environmental fluctuation might be important in the realization of rapid and cumulative cultural evolution that is observed in humans. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Simultaneous navigation of multiple Pareto surfaces, with an application to multicriteria IMRT planning with multiple beam angle configurations.

    Science.gov (United States)

    Craft, David; Monz, Michael

    2010-02-01

    To introduce a method to simultaneously explore a collection of Pareto surfaces. The method will allow radiotherapy treatment planners to interactively explore treatment plans for different beam angle configurations as well as different treatment modalities. The authors assume a convex optimization setting and represent the Pareto surface for each modality or given beam set by a set of discrete points on the surface. Weighted averages of these discrete points produce a continuous representation of each Pareto surface. The authors calculate a set of Pareto surfaces and use linear programming to navigate across the individual surfaces, allowing switches between surfaces. The switches are organized such that the plan profits in the requested way, while trying to keep the change in dose as small as possible. The system is demonstrated on a phantom pancreas IMRT case using 100 different five beam configurations and a multicriteria formulation with six objectives. The system has intuitive behavior and is easy to control. Also, because the underlying linear programs are small, the system is fast enough to offer real-time exploration for the Pareto surfaces of the given beam configurations. The system presented offers a sound starting point for building clinical systems for multicriteria exploration of different modalities and offers a controllable way to explore hundreds of beam angle configurations in IMRT planning, allowing the users to focus their attention on the dose distribution and treatment planning objectives instead of spending excessive time on the technicalities of delivery.

  17. Multiobjective Optimization Involving Quadratic Functions

    Directory of Open Access Journals (Sweden)

    Oscar Brito Augusto

    2014-01-01

Full Text Available Multiobjective optimization is nowadays a watchword in engineering projects. Although the idea involved is simple, the implementation of any procedure to solve a general problem is not an easy task. Evolutionary algorithms are widespread as a satisfactory technique to find a candidate set for the solution. Usually they supply a discrete picture of the Pareto front even if this front is continuous. In this paper we propose three methods for solving unconstrained multiobjective optimization problems involving quadratic functions. In the first, for biobjective optimization defined in two-dimensional space, a continuous Pareto set is found analytically. In the second, applicable to multiobjective optimization, a condition test is proposed to check whether a point in the decision space is a Pareto optimum or not and, in the third, with functions defined in n-dimensional space, a direct noniterative algorithm is proposed to find the Pareto set. Simple problems highlight the suitability of the proposed methods.
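The flavor of the first method, an analytic continuous Pareto set for biobjective quadratics, can be shown with the simplest such pair (our own example with spherical quadratics, not the paper's general formulation): for f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2, the Pareto set is exactly the line segment joining the two minimizers a and b.

```python
a = (0.0, 0.0)  # minimizer of f1(x) = ||x - a||^2
b = (2.0, 1.0)  # minimizer of f2(x) = ||x - b||^2

def pareto_point(w):
    """Minimizer of w*f1 + (1 - w)*f2 for 0 <= w <= 1: the gradient
    condition 2*w*(x - a) + 2*(1 - w)*(x - b) = 0 gives the convex
    combination x = w*a + (1 - w)*b."""
    return tuple(w * ai + (1 - w) * bi for ai, bi in zip(a, b))

# The continuous Pareto set, sampled at five points along the segment:
segment = [pareto_point(w / 4.0) for w in range(5)]
```

This is the continuous analogue of the discrete picture an evolutionary algorithm would return for the same problem.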

  18. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower, and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.

  19. Many-objective thermodynamic optimization of Stirling heat engine

    International Nuclear Information System (INIS)

    Patel, Vivek; Savsani, Vimal; Mudgal, Anurag

    2017-01-01

This paper presents a rigorous investigation of many-objective (four-objective) thermodynamic optimization of a Stirling heat engine. The many-objective optimization problem is formed by considering maximization of thermal efficiency, power output, ecological function and exergy efficiency. A multi-objective heat transfer search (MOHTS) algorithm is proposed and applied to obtain a set of Pareto-optimal points. The many-objective optimization results form a solution in a four-dimensional hyper objective space, and for visualization they are represented in two-dimensional objective spaces. Thus, the results of the four-objective optimization are represented by six Pareto fronts in two-dimensional objective space. These six Pareto fronts are compared with their corresponding two-objective Pareto fronts. Quantitative assessment of the obtained Pareto solutions is reported in terms of the spread and spacing measures. Different decision making approaches such as LINMAP, TOPSIS and fuzzy are used to select a final optimal solution from the Pareto optimal set of the many-objective optimization. Finally, to reveal the level of conflict between these objectives, the distribution of each decision variable in its allowable range is also shown in two-dimensional objective spaces. - Highlights: • Many-objective (i.e. four-objective) optimization of a Stirling engine is investigated. • The MOHTS algorithm is introduced and applied to obtain a set of Pareto points. • Comparative results of many-objective and multi-objective optimization are presented. • Relationships of design variables in many-objective optimization are obtained. • The optimum solution is selected by using decision making approaches.
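Of the decision-making approaches named (LINMAP, TOPSIS, fuzzy), TOPSIS is simple enough to sketch: vector-normalize the Pareto points, locate the ideal and anti-ideal vectors, and select the point with the highest relative closeness to the ideal. A minimal sketch with made-up objective values (both objectives to be maximized, equal weights; not the paper's engine data):

```python
def topsis(points):
    """Rank Pareto points (all objectives to be maximized, equal
    weights) by relative closeness to the ideal point; return the
    index of the best point."""
    n_obj = len(points[0])
    # Vector-normalize each objective column.
    norms = [sum(p[j] ** 2 for p in points) ** 0.5 for j in range(n_obj)]
    normed = [[p[j] / norms[j] for j in range(n_obj)] for p in points]
    ideal = [max(col) for col in zip(*normed)]
    anti = [min(col) for col in zip(*normed)]
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    closeness = [dist(p, anti) / (dist(p, anti) + dist(p, ideal))
                 for p in normed]
    return max(range(len(points)), key=closeness.__getitem__)

# Hypothetical (efficiency, power output) Pareto points:
best = topsis([(0.30, 900.0), (0.35, 850.0), (0.40, 700.0)])
```

With these toy values the balanced middle trade-off wins, which is typical of closeness-to-ideal selection on a convex front.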

  20. Vector optimization theory, applications, and extensions

    CERN Document Server

    Jahn, Johannes

    2011-01-01

    This new edition of a key monograph has fresh sections on the work of Edgeworth and Pareto in its presentation in a general setting of the fundamentals and important results of vector optimization. It examines background material, applications and theories.

  1. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison.

    Science.gov (United States)

    van de Schoot, A J A J; Visser, J; van Kesteren, Z; Janssen, T M; Rasch, C R N; Bel, A

    2016-02-21

    The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D(99%)) and OAR doses (rectum V30Gy; bladder V40Gy). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D(99%), rectum V(30Gy) and bladder V(40Gy) to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D(99%) on average by 0.2 Gy and decreased the median rectum V(30Gy) and median bladder V(40Gy) on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. 
For all patients, the defined four-beam configuration was found optimal

  2. Beam configuration selection for robust intensity-modulated proton therapy in cervical cancer using Pareto front comparison

    International Nuclear Information System (INIS)

    Van de Schoot, A J A J; Visser, J; Van Kesteren, Z; Rasch, C R N; Bel, A; Janssen, T M

    2016-01-01

The Pareto front reflects the optimal trade-offs between conflicting objectives and can be used to quantify the effect of different beam configurations on plan robustness and dose-volume histogram parameters. Therefore, our aim was to develop and implement a method to automatically approach the Pareto front in robust intensity-modulated proton therapy (IMPT) planning. Additionally, clinically relevant Pareto fronts based on different beam configurations will be derived and compared to enable beam configuration selection in cervical cancer proton therapy. A method to iteratively approach the Pareto front by automatically generating robustly optimized IMPT plans was developed. To verify plan quality, IMPT plans were evaluated on robustness by simulating range and position errors and recalculating the dose. For five retrospectively selected cervical cancer patients, this method was applied for IMPT plans with three different beam configurations using two, three and four beams. 3D Pareto fronts were optimized on target coverage (CTV D(99%)) and OAR doses (rectum V(30Gy); bladder V(40Gy)). Per patient, proportions of non-approved IMPT plans were determined and differences between patient-specific Pareto fronts were quantified in terms of CTV D(99%), rectum V(30Gy) and bladder V(40Gy) to perform beam configuration selection. Per patient and beam configuration, Pareto fronts were successfully sampled based on 200 IMPT plans of which on average 29% were non-approved plans. In all patients, IMPT plans based on the 2-beam set-up were completely dominated by plans with the 3-beam and 4-beam configuration. Compared to the 3-beam set-up, the 4-beam set-up increased the median CTV D(99%) on average by 0.2 Gy and decreased the median rectum V(30Gy) and median bladder V(40Gy) on average by 3.6% and 1.3%, respectively. This study demonstrates a method to automatically derive Pareto fronts in robust IMPT planning. 
For all patients, the defined four-beam configuration was found optimal in

  3. Rayleigh Pareto Distribution

    Directory of Open Access Journals (Sweden)

Kareema Abed Al-Kadim

    2017-12-01

Full Text Available In this paper the Rayleigh Pareto distribution is introduced, denoted by R_PD, and some useful functions are stated. We give some of its properties, including the entropy function, mean, mode, median, variance, the r-th moments about the mean and about the origin, the reliability and hazard functions, and the coefficients of variation, skewness and kurtosis. Finally, we estimate the parameters; the aim of this work is to introduce a new distribution.

  4. Trade-off bounds for the Pareto surface approximation in multi-criteria IMRT planning

    International Nuclear Information System (INIS)

    Serna, J I; Monz, M; Kuefer, K H; Thieke, C

    2009-01-01

    One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included into the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, the so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects. The proposed scheme is applied to two artificial cases and one clinical case of a paraspinal tumor. For all cases, the quality of the Pareto surface approximation is measured with respect to the number of computed plans, and the range of values occurring in the approximation for different criteria is compared. Through enforcing trade-off bounds, the scheme disregards clinically irrelevant plans during the approximation. Thereby, the number of plans necessary to achieve a good approximation quality can be significantly reduced. Thus, trade-off bounds are an effective tool to focus the planning and to reduce computation time.

  5. Trade-off bounds for the Pareto surface approximation in multi-criteria IMRT planning.

    Science.gov (United States)

    Serna, J I; Monz, M; Küfer, K H; Thieke, C

    2009-10-21

    One approach to multi-criteria IMRT planning is to automatically calculate a data set of Pareto-optimal plans for a given planning problem in a first phase, and then interactively explore the solution space and decide on the clinically best treatment plan in a second phase. The challenge of computing the plan data set is to ensure that all clinically meaningful plans are covered and that as many clinically irrelevant plans as possible are excluded to keep computation times within reasonable limits. In this work, we focus on the approximation of the clinically relevant part of the Pareto surface, the process that constitutes the first phase. It is possible that two plans on the Pareto surface have a small, clinically insignificant difference in one criterion and a significant difference in another criterion. For such cases, only the plan that is clinically clearly superior should be included into the data set. To achieve this during the Pareto surface approximation, we propose to introduce bounds that restrict the relative quality between plans, the so-called trade-off bounds. We show how to integrate these trade-off bounds into the approximation scheme and study their effects. The proposed scheme is applied to two artificial cases and one clinical case of a paraspinal tumor. For all cases, the quality of the Pareto surface approximation is measured with respect to the number of computed plans, and the range of values occurring in the approximation for different criteria is compared. Through enforcing trade-off bounds, the scheme disregards clinically irrelevant plans during the approximation. Thereby, the number of plans necessary to achieve a good approximation quality can be significantly reduced. Thus, trade-off bounds are an effective tool to focus the planning and to reduce computation time.

  6. Monopoly, Pareto and Ramsey mark-ups

    OpenAIRE

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear demand or, even within the constant elasticity framework, dependence is introduced. The analysis provides a single Generalized Inverse Elasticity Rule for the problems of monopoly, Pareto and Ramsey.
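
For orientation, the classical mark-up rules that the paper's single Generalized Inverse Elasticity Rule subsumes can be sketched in their standard textbook forms (these are the familiar Lerner and Ramsey rules, not the paper's own generalization):

```latex
% Lerner (monopoly) rule and Ramsey rule for good i, with price p_i,
% marginal cost c_i, own-price elasticity eps_i, Ramsey multiplier lambda:
\frac{p_i - c_i}{p_i} = \frac{1}{\varepsilon_i} \quad\text{(monopoly)},
\qquad
\frac{p_i - c_i}{p_i} = \frac{\lambda}{\varepsilon_i} \quad\text{(Ramsey)}.
```

The Ramsey rule rescales all monopoly mark-ups by a common factor, which is why, for independent constant-elasticity demands, monopoly pricing is a price-level rather than a relative-price problem.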

  7. Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar

    Directory of Open Access Journals (Sweden)

    Graham V. Weinberg

    2012-01-01

    Full Text Available The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.

  8. GENERALIZED DOUBLE PARETO SHRINKAGE.

    Science.gov (United States)

    Armagan, Artin; Dunson, David B; Lee, Jaeyong

    2013-01-01

We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys' priors. While it has a spike at zero like the Laplace density, it also has a Student's t-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.

  9. Kullback-Leibler divergence and the Pareto-Exponential approximation.

    Science.gov (United States)

    Weinberg, G V

    2016-01-01

    Recent radar research interests in the Pareto distribution as a model for X-band maritime surveillance radar clutter returns have resulted in analysis of the asymptotic behaviour of this clutter model. In particular, it is of interest to understand when the Pareto distribution is well approximated by an Exponential distribution. The justification for this is that under the latter clutter model assumption, simpler radar detection schemes can be applied. An information theory approach is introduced to investigate the Pareto-Exponential approximation. By analysing the Kullback-Leibler divergence between the two distributions it is possible to not only assess when the approximation is valid, but to determine, for a given Pareto model, the optimal Exponential approximation.
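
As a rough illustration of the idea (not the paper's own derivation), the Kullback-Leibler divergence between a Pareto clutter model and a candidate Exponential approximation can be evaluated numerically, and the best-approximating Exponential rate found by a simple grid search. The parameter values below are arbitrary assumptions:

```python
import numpy as np
from scipy.integrate import quad

def kl_pareto_exponential(alpha, beta, lam):
    """Numerical KL divergence D(Pareto(alpha, beta) || Exp(lam)).

    Pareto pdf: alpha * beta**alpha / x**(alpha+1) on [beta, inf);
    Exponential pdf: lam * exp(-lam * x) on [0, inf).
    """
    def integrand(x):
        # work in log space to avoid overflow for large x
        log_fp = np.log(alpha) + alpha * np.log(beta) - (alpha + 1) * np.log(x)
        log_fe = np.log(lam) - lam * x
        return np.exp(log_fp) * (log_fp - log_fe)
    val, _ = quad(integrand, beta, np.inf)
    return val

# For a fixed Pareto model, minimising the KL divergence over lam
# yields the "optimal" Exponential approximation.
alpha, beta = 5.0, 1.0
lams = np.linspace(0.01, 5.0, 500)
kls = [kl_pareto_exponential(alpha, beta, lam) for lam in lams]
best_lam = lams[int(np.argmin(kls))]
# minimiser matches the Pareto mean: lam* = 1 / E[X] = (alpha-1)/(alpha*beta)
```

For KL divergence in this direction, the minimising rate equals the reciprocal of the Pareto mean, which the grid search recovers.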

  10. Optimal timing for intravenous administration set replacement.

    Science.gov (United States)

    Gillies, D; O'Riordan, L; Wallen, M; Morrison, A; Rankin, K; Nagy, S

    2005-10-19

Administration of intravenous therapy is a common occurrence within the hospital setting. Routine replacement of administration sets has been advocated to reduce intravenous infusion contamination. If decreasing the frequency of changing intravenous administration sets does not increase infection rates, a change in practice could result in considerable cost savings. The objective of this review was to identify the optimal interval for the routine replacement of intravenous administration sets when infusate or parenteral nutrition (lipid and non-lipid) solutions are administered to people in hospital via central or peripheral venous catheters. We searched The Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, EMBASE: all from inception to February 2004; reference lists of identified trials, and bibliographies of published reviews. We also contacted researchers in the field. We did not have a language restriction. We included all randomized or quasi-randomized controlled trials addressing the frequency of replacing intravenous administration sets when parenteral nutrition (lipid and non-lipid containing solutions) or infusions (excluding blood) were administered to people in hospital via a central or peripheral catheter. Two authors assessed all potentially relevant studies. We resolved disagreements between the two authors by discussion with a third author. We collected data for the outcomes: infusate contamination; infusate-related bloodstream infection; catheter contamination; catheter-related bloodstream infection; all-cause bloodstream infection and all-cause mortality. We identified 23 references for review. We excluded eight of these studies; five because they did not fit the inclusion criteria and three because of inadequate data. We extracted data from the remaining 15 references (13 studies) with 4783 participants. We conclude that there is no evidence that changing intravenous administration sets more often than every 96 hours

  11. Multiclass gene selection using Pareto-fronts.

    Science.gov (United States)

    Rajapakse, Jagath C; Mundra, Piyushkumar A

    2013-01-01

    Filter methods are often used for selection of genes in multiclass sample classification by using microarray data. Such techniques usually tend to bias toward a few classes that are easily distinguishable from other classes due to imbalances of strong features and sample sizes of different classes. It could therefore lead to selection of redundant genes while missing the relevant genes, leading to poor classification of tissue samples. In this manuscript, we propose to decompose multiclass ranking statistics into class-specific statistics and then use Pareto-front analysis for selection of genes. This alleviates the bias induced by class intrinsic characteristics of dominating classes. The use of Pareto-front analysis is demonstrated on two filter criteria commonly used for gene selection: F-score and KW-score. A significant improvement in classification performance and reduction in redundancy among top-ranked genes were achieved in experiments with both synthetic and real-benchmark data sets.
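
The core Pareto-front filter described above can be sketched in a few lines: a gene is kept only if no other gene scores at least as well on every class-specific statistic and strictly better on at least one. The scores below are made-up per-class statistics (e.g. class-specific F-scores), not data from the paper:

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows (higher is better in every column)."""
    keep = []
    for i in range(scores.shape[0]):
        # row i is dominated if some row is >= everywhere and > somewhere
        dominated = np.any(
            np.all(scores >= scores[i], axis=1) &
            np.any(scores > scores[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# toy class-specific scores for 6 "genes" over 2 classes
scores = np.array([[0.9, 0.1],
                   [0.8, 0.8],
                   [0.1, 0.9],
                   [0.5, 0.5],
                   [0.7, 0.2],
                   [0.2, 0.3]])
front = pareto_front(scores)  # genes on the first Pareto front
```

Note how the front retains genes that are strong for only one class (rows 0 and 2) alongside the all-round gene (row 1), which is exactly the bias-correction effect the abstract describes.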

  12. A Pareto Algorithm for Efficient De Novo Design of Multi-functional Molecules.

    Science.gov (United States)

    Daeyaert, Frits; Deem, Micheal W

    2017-01-01

    We have introduced a Pareto sorting algorithm into Synopsis, a de novo design program that generates synthesizable molecules with desirable properties. We give a detailed description of the algorithm and illustrate its working in 2 different de novo design settings: the design of putative dual and selective FGFR and VEGFR inhibitors, and the successful design of organic structure determining agents (OSDAs) for the synthesis of zeolites. We show that the introduction of Pareto sorting not only enables the simultaneous optimization of multiple properties but also greatly improves the performance of the algorithm to generate molecules with hard-to-meet constraints. This in turn allows us to suggest approaches to address the problem of false positive hits in de novo structure based drug design by introducing structural and physicochemical constraints in the designed molecules, and by forcing essential interactions between these molecules and their target receptor. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Vibration behavior optimization of planetary gear sets

    Directory of Open Access Journals (Sweden)

    Farshad Shakeri Aski

    2014-12-01

Full Text Available This paper presents a global optimization method focused on planetary gear vibration reduction by means of tip relief profile modifications. A nonlinear dynamic model is used to study the vibration behavior. To investigate the optimal radius and amplitude, brute-force optimization is used: all possible solutions are evaluated and the best one is selected afterwards, an approach that is straightforward but requires considerable computing power. Results show the influence of the optimal profile on planetary gear vibrations.

  14. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  15. Multi-objective Optimization Strategies Using Adjoint Method and Game Theory in Aerodynamics

    Science.gov (United States)

    Tang, Zhili

    2006-08-01

    There are currently three different game strategies originated in economics: (1) Cooperative games (Pareto front), (2) Competitive games (Nash game) and (3) Hierarchical games (Stackelberg game). Each game achieves different equilibria with different performance, and their players play different roles in the games. Here, we introduced game concept into aerodynamic design, and combined it with adjoint method to solve multi-criteria aerodynamic optimization problems. The performance distinction of the equilibria of these three game strategies was investigated by numerical experiments. We computed Pareto front, Nash and Stackelberg equilibria of the same optimization problem with two conflicting and hierarchical targets under different parameterizations by using the deterministic optimization method. The numerical results show clearly that all the equilibria solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained, however the CPU cost to capture a set of solutions makes the Pareto front an expensive tool to the designer.

  16. Multi-objective optimization strategies using adjoint method and game theory in aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhili Tang

    2006-01-01

There are currently three different game strategies originated in economics: (1) Cooperative games (Pareto front), (2) Competitive games (Nash game) and (3) Hierarchical games (Stackelberg game). Each game achieves different equilibria with different performance, and their players play different roles in the games. Here, we introduced game concept into aerodynamic design, and combined it with adjoint method to solve multicriteria aerodynamic optimization problems. The performance distinction of the equilibria of these three game strategies was investigated by numerical experiments. We computed Pareto front, Nash and Stackelberg equilibria of the same optimization problem with two conflicting and hierarchical targets under different parameterizations by using the deterministic optimization method. The numerical results show clearly that all the equilibria solutions are inferior to the Pareto front. Non-dominated Pareto front solutions are obtained, however the CPU cost to capture a set of solutions makes the Pareto front an expensive tool to the designer.

  17. On the Truncated Pareto Distribution with applications

    OpenAIRE

    Zaninetti, Lorenzo; Ferraro, Mario

    2008-01-01

    The Pareto probability distribution is widely applied in different fields such us finance, physics, hydrology, geology and astronomy. This note deals with an application of the Pareto distribution to astrophysics and more precisely to the statistical analysis of mass of stars and of diameters of asteroids. In particular a comparison between the usual Pareto distribution and its truncated version is presented. Finally a possible physical mechanism that produces Pareto tails for the distributio...
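
The truncated Pareto has a closed-form CDF, so it can be sampled directly by inverse-CDF transformation. A minimal sketch, with arbitrary parameter values chosen here for illustration:

```python
import numpy as np

def truncated_pareto_sample(alpha, a, b, size, rng):
    """Inverse-CDF sampling from a Pareto truncated to [a, b].

    CDF: F(x) = (1 - (a/x)**alpha) / (1 - (a/b)**alpha), a <= x <= b.
    Solving F(x) = u for x gives the formula below.
    """
    u = rng.random(size)
    c = 1.0 - (a / b)**alpha
    return a * (1.0 - u * c)**(-1.0 / alpha)

rng = np.random.default_rng(0)
x = truncated_pareto_sample(1.5, 1.0, 100.0, 100_000, rng)
# every sample respects the truncation bounds, unlike the untruncated
# Pareto whose tail extends to infinity
```

The upper cut-off b is what removes the heavy tail, which is why the truncated version can fit bounded physical quantities (star masses, asteroid diameters) better than the classical Pareto.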

  18. Record Values of a Pareto Distribution.

    Science.gov (United States)

    Ahsanullah, M.

The record values of the Pareto distribution, labelled Pareto (II) (alpha, beta, nu), are reviewed. The best linear unbiased estimates of the parameters in terms of the record values are provided. The prediction of the sth record value based on the first m (s>m) record values is obtained. A classical Pareto distribution provides reasonably…

  19. A Visualization Technique for Accessing Solution Pool in Interactive Methods of Multiobjective Optimization

    OpenAIRE

    Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga

    2015-01-01

    Interactive methods of multiobjective optimization repetitively derive Pareto optimal solutions based on decision maker’s preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker considering any of solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...

  20. Sulcal set optimization for cortical surface registration.

    Science.gov (United States)

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.

  1. Optimization of well field management

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine

Groundwater is a limited but important resource for fresh water supply. Different conflicting objectives are important when operating a well field. This study investigates how the management of a well field can be improved with respect to different objectives simultaneously. A framework for optimizing well field management using multi-objective optimization is developed. The optimization uses the Strength Pareto Evolutionary Algorithm 2 (SPEA2) to find the Pareto front between the conflicting objectives. The Pareto front is a set of non-inferior optimal points and provides an important tool for the decision-makers. The optimization framework is tested on two case studies. Both abstract around 20,000 cubic meter of water per day, but are otherwise rather different. The first case study concerns the management of Hardhof waterworks, Switzerland, where artificial infiltration of river water

  2. Set-valued optimization an introduction with applications

    CERN Document Server

    Khan, Akhtar A; Zalinescu, Constantin

    2014-01-01

Set-valued optimization is a vibrant and expanding branch of mathematics that deals with optimization problems where the objective map and/or the constraint maps are set-valued maps acting between certain spaces. Since set-valued maps subsume single-valued maps, set-valued optimization provides an important extension and unification of the scalar as well as the vector optimization problems. Therefore this relatively new discipline has justifiably attracted a great deal of attention in recent years. This book presents, in a unified framework, basic properties on ordering relations, solution c

  3. Pareto law and Pareto index in the income distribution of Japanese companies

    OpenAIRE

    Ishikawa, Atushi

    2004-01-01

    In order to study the phenomenon in detail that income distribution follows Pareto law, we analyze the database of high income companies in Japan. We find a quantitative relation between the average capital of the companies and the Pareto index. The larger the average capital becomes, the smaller the Pareto index becomes. From this relation, we can possibly explain that the Pareto index of company income distribution hardly changes, while the Pareto index of personal income distribution chang...

  4. Optimal timing for intravascular administration set replacement.

    Science.gov (United States)

    Ullman, Amanda J; Cooke, Marie L; Gillies, Donna; Marsh, Nicole M; Daud, Azlina; McGrail, Matthew R; O'Riordan, Elizabeth; Rickard, Claire M

    2013-09-15

    The tubing (administration set) attached to both venous and arterial catheters may contribute to bacteraemia and other infections. The rate of infection may be increased or decreased by routine replacement of administration sets. This review was originally published in 2005 and was updated in 2012. The objective of this review was to identify any relationship between the frequency with which administration sets are replaced and rates of microbial colonization, infection and death. We searched The Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2012, Issue 6), MEDLINE (1950 to June 2012), CINAHL (1982 to June 2012), EMBASE (1980 to June 2012), reference lists of identified trials and bibliographies of published reviews. The original search was performed in February 2004. We also contacted researchers in the field. We applied no language restriction. We included all randomized or controlled clinical trials on the frequency of venous or arterial catheter administration set replacement in hospitalized participants. Two review authors assessed all potentially relevant studies. We resolved disagreements between the two review authors by discussion with a third review author. We collected data for seven outcomes: catheter-related infection; infusate-related infection; infusate microbial colonization; catheter microbial colonization; all-cause bloodstream infection; mortality; and cost. We pooled results from studies that compared different frequencies of administration set replacement, for instance, we pooled studies that compared replacement ≥ every 96 hours versus every 72 hours with studies that compared replacement ≥ every 48 hours versus every 24 hours. We identified 26 studies for this updated review, 10 of which we excluded; six did not fulfil the inclusion criteria and four did not report usable data. We extracted data from the remaining 18 references (16 studies) with 5001 participants: study designs included neonate and adult

  5. A buffer material optimal design in the radioactive wastes geological disposal using the satisficing trade-off method and the self-organizing map

    International Nuclear Information System (INIS)

    Okamoto, Takashi; Hanaoka, Yuya; Aiyoshi, Eitaro; Kobayashi, Yoko

    2012-01-01

In this paper, we consider a multi-objective optimization method in order to obtain a preferred solution for the buffer material optimal design problem in the high-level radioactive wastes geological disposal. The buffer material optimal design problem is formulated as a constrained multi-objective optimization problem. Its Pareto optimal solutions are distributed evenly over the whole boundary of the feasible region. Hence, we develop a search method to find a preferred solution easily for a decision maker from the Pareto optimal solutions, which are distributed evenly and widely. In the preferred solution search method, the visualization technique of a Pareto optimal solution set using the self-organizing map is introduced into the satisficing trade-off method, which is the interactive method to obtain a Pareto optimal solution that satisfies a decision maker. We confirm the effectiveness of the preferred solution search method in the buffer material optimal design problem. (author)

  6. Collimator setting optimization in intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Williams, M.; Hoban, P.

    2001-01-01

Full text: The aim of this study was to investigate the role of collimator angle and bixel size settings in IMRT when using the step and shoot method of delivery. Of particular interest is minimisation of the total monitor units delivered. Beam intensity maps with bixel size 10 x 10 mm were segmented into MLC leaf sequences and the collimator angle optimised to minimise the total number of MUs. The monitor units were estimated from the maximum sum of positive-gradient intensity changes along the direction of leaf motion. To investigate the use of low resolution maps at optimum collimator angles, several high resolution maps with bixel size 5 x 5 mm were generated. These were resampled into bixel sizes of 5 x 10 mm and 10 x 10 mm and the collimator angle optimised to minimise the RMS error between the original and resampled map. Finally, a clinical IMRT case was investigated with the collimator angle optimised. Both the dose distribution and dose-volume histograms were compared between the standard IMRT plan and the optimised plan. For the 10 x 10 mm bixel maps there was a variation of 5% - 40% in monitor units at the different collimator angles. The maps with a high degree of radial symmetry showed little variation. For the resampled 5 x 5 mm maps, a small RMS error was achievable with a 5 x 10 mm bixel size at particular collimator positions. This was most noticeable for maps with an elongated intensity distribution. A comparison between the 5 x 5 mm bixel plan and the 5 x 10 mm plan showed no significant difference in dose distribution. The monitor units required to deliver an intensity modulated field can be reduced by rotating the collimator and aligning the direction of leaf motion with the axis of the fluence map that has the least intensity. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine

  7. Elitism set based particle swarm optimization and its application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2017-01-01

Full Text Available Topology plays an important role for Particle Swarm Optimization (PSO) to achieve good optimization performance. It is difficult to find one topology structure for the particles that achieves better optimization performance than the others, since performance depends not only on the searching abilities of the particles but also on the type of optimization problem. Three elitist-set-based PSO algorithms without an explicit topology structure are proposed in this paper. An elitist set, which is based on the individual best experience, is used to communicate among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods are used in the three proposed methods. The performance of the proposed PSOs is compared with the results of the standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference can achieve good optimization performance.
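
A minimal sketch of the general idea: a PSO in which the fixed neighbourhood topology is replaced by attraction to a random member of an elitist set of the k best personal bests. The parameter values and the sphere test function are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def pso_sphere(n_particles=30, dim=5, iters=200, elite_k=5, seed=0):
    """Minimal PSO minimising the sphere function f(x) = sum(x**2).

    Instead of a topology-defined neighbour, each particle is attracted
    to a randomly chosen member of the elitist set: the elite_k best
    personal-best positions found so far.
    """
    rng = np.random.default_rng(seed)
    f = lambda x: np.sum(x**2, axis=-1)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = f(pos)
    w, c1, c2 = 0.72, 1.49, 1.49  # standard convergent parameter setting
    for _ in range(iters):
        # elitist set: the elite_k best personal bests
        elite = pbest[np.argsort(pbest_val)[:elite_k]]
        guide = elite[rng.integers(0, elite_k, n_particles)]
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (guide - pos)
        pos = pos + vel
        val = f(pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
    return pbest_val.min()

best = pso_sphere()  # approaches 0 for this convergent parameter setting
```

Sampling the guide from the elitist set rather than a single global best preserves some diversity, which is the premature-convergence concern the abstract raises.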

  8. Pareto Efficient Solutions of Attack-Defence Trees

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Nielson, Flemming

    2015-01-01

Attack-defence trees are a promising approach for representing threat scenarios and possible countermeasures in a concise and intuitive manner. An attack-defence tree describes the interaction between an attacker and a defender, and is evaluated by assigning parameters to the nodes, such as probability or cost of attacks and defences. In case of multiple parameters most analytical methods optimise one parameter at a time, e.g., minimise cost or maximise probability of an attack. Such methods may lead to sub-optimal solutions when optimising conflicting parameters, e.g., minimising cost while maximising probability. In order to tackle this challenge, we devise automated techniques that optimise all parameters at once. Moreover, in the case of conflicting parameters our techniques compute the set of all optimal solutions, defined in terms of Pareto efficiency. The developments are carried out...
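
The Pareto-efficiency notion used here can be illustrated on made-up (cost, success probability) pairs for attack options: an option is kept only if no other option is at least as cheap and at least as likely to succeed, with at least one comparison strict. The numbers below are invented for the sketch:

```python
def pareto_efficient(options):
    """Keep (cost, probability) pairs that are Pareto efficient:
    minimise cost while maximising probability."""
    front = []
    for i, (c, p) in enumerate(options):
        dominated = any(
            (c2 <= c and p2 >= p) and (c2 < c or p2 > p)
            for j, (c2, p2) in enumerate(options) if j != i
        )
        if not dominated:
            front.append((c, p))
    return front

# hypothetical attack options: (cost, success probability)
attacks = [(100, 0.9), (50, 0.6), (80, 0.6), (120, 0.95), (60, 0.8)]
front = pareto_efficient(attacks)
# (80, 0.6) is dropped: (50, 0.6) is as likely but strictly cheaper
```

Optimising either parameter alone would return a single point of this front; the Pareto set keeps every trade-off a decision maker might prefer.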

  9. Existence of pareto equilibria for multiobjective games without compactness

    OpenAIRE

    Shiraishi, Yuya; Kuroiwa, Daishi

    2013-01-01

In this paper, we investigate the existence of Pareto and weak Pareto equilibria for multiobjective games without compactness. By employing an existence theorem of Pareto equilibria due to Yu and Yuan ([10]), several existence theorems of Pareto and weak Pareto equilibria for the multiobjective games are established in a similar way to Flores-Bázan.

  10. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments
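
A much-simplified, continuous relaxation of such a blending model can be written with scipy's `linprog`: choose blend fractions of stock enrichments that hit a target product enrichment at minimum cost. The enrichments, costs and target below are hypothetical, and the paper itself uses a mixed-integer formulation with additional constraints:

```python
from scipy.optimize import linprog

# hypothetical stock enrichments (w/o U-235) and unit costs
enrich = [0.71, 3.0, 4.4, 5.0]
cost = [50.0, 1200.0, 1900.0, 2300.0]
target = 3.6  # desired product enrichment

res = linprog(
    c=cost,                                   # minimise blending cost
    A_eq=[[1.0] * len(enrich), enrich],       # fractions sum to 1;
    b_eq=[1.0, target],                       # blend hits the target
    bounds=[(0, 1)] * len(enrich),
)
fractions = res.x  # blend fraction of each stock enrichment
```

Adding integrality (which ingredient enrichments to produce at all) and multiple product enrichments turns this into the mixed-integer program the abstract describes.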

  11. Optimal Set-Point Synthesis in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2007-01-01

    This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power for different components, encompassing fans, primary/secondary pump, tertiary pump, and air-to-air heat exchanger wheel, and a fraction of the thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem, a steady state model of the HVAC system is derived while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.

  12. Global Optimization for Transport Network Expansion and Signal Setting

    OpenAIRE

    Liu, Haoxiang; Wang, David Z. W.; Yue, Hao

    2015-01-01

    This paper proposes a model to address an urban transport planning problem involving combined network design and signal setting in a saturated network. Conventional transport planning models usually deal with the network design problem and signal setting problem separately. However, the fact that network capacity design and capacity allocation determined by network signal setting combine to govern the transport network performance requires the optimal transport planning to consider the two pr...

  13. The exponentiated generalized Pareto distribution | Adeyemi | Ife ...

    African Journals Online (AJOL)

    Recently Gupta et al. (1998) introduced the exponentiated exponential distribution as a generalization of the standard exponential distribution. In this paper, we introduce a three-parameter generalized Pareto distribution, the exponentiated generalized Pareto distribution (EGP). We present a comprehensive treatment of the ...

  14. Multi-objective optimal strategy for generating and bidding in the power market

    International Nuclear Information System (INIS)

    Peng Chunhua; Sun Huijuan; Guo Jianfeng; Liu Gang

    2012-01-01

    Highlights: ► A new benefit/risk/emission comprehensive generation optimization model is established. ► A hybrid multi-objective differential evolution optimization algorithm is designed. ► Fuzzy set theory and the entropy weighting method are employed to extract the overall best solution. ► The proposed approach to generating and bidding is efficient for maximizing profit and minimizing both risk and emissions. - Abstract: Based on the coordinated interaction between unit output and electricity market prices, a comprehensive benefit/risk/emission generation optimization model with the objectives of maximal profit and minimal bidding risk and emissions is established. A hybrid multi-objective differential evolution optimization algorithm, which successfully integrates Pareto non-dominated sorting with the differential evolution algorithm and improves the individual crowding distance mechanism and mutation strategy to avoid premature convergence and uneven search, is designed to obtain the Pareto optimal set of this model. Moreover, fuzzy set theory and the entropy weighting method are employed to extract one of the Pareto optimal solutions as the overall best solution. Several optimization runs have been carried out on different cases of generation bidding and scheduling. The results confirm the potential and effectiveness of the proposed approach in solving the multi-objective optimization problem of generation bidding and scheduling. In addition, the comparison with classical optimization algorithms demonstrates the superiority of the proposed algorithm in terms of the integrality of the Pareto front, well-distributed Pareto-optimal solutions, and high search speed.

  15. Global Optimization for Bus Line Timetable Setting Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2014-01-01

    This paper defines the bus timetable setting problem for each time period, the periods being divided according to passenger flow intensity; it is assumed that passengers arrive evenly and that bus runs are set evenly; the problem is to determine the assignment of bus runs to each time period so as to minimize the total waiting time of passengers on platforms, given the total number of runs. For such a multistage decision problem, this paper designs a dynamic programming algorithm to solve it. Global optimization procedures using dynamic programming are developed. A numerical example about bus run assignment optimization for a single line is given to demonstrate the efficiency of the proposed methodology, showing that optimizing buses' departure times using dynamic programming can save computational time and find the global optimal solution.
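
A minimal sketch of the multistage idea (the waiting-time model lam*T²/(2n) for even arrivals and evenly spaced runs, and all numbers, are illustrative assumptions, not the paper's data): allocate a fixed total number of runs across periods by dynamic programming over "runs used so far".

```python
# Hedged sketch: distribute a fixed total number of bus runs among time
# periods to minimize total passenger waiting time. With n evenly spaced
# runs in a period of length T and arrival rate lam, each passenger waits
# on average T/(2n), so total wait ~= lam * T^2 / (2n). Illustrative model.
def allocate_runs(periods, total_runs):
    """periods: list of (lam, T); returns (min total wait, runs per period)."""
    cost = [lam * T * T / 2.0 for lam, T in periods]  # total wait = cost[i] / n_i
    m = len(periods)
    best = {0: (0.0, [])}          # runs used so far -> (best wait, allocation)
    for i in range(m):
        nxt = {}
        remaining = m - i - 1      # periods still to be served after this one
        for used, (w, alloc) in best.items():
            for n in range(1, total_runs - used - remaining + 1):
                key, cand = used + n, (w + cost[i] / n, alloc + [n])
                if key not in nxt or cand[0] < nxt[key][0]:
                    nxt[key] = cand
        best = nxt
    return best[total_runs]

# quiet / busy / quiet periods of 60 min each, 10 runs in total
w, alloc = allocate_runs([(10, 60), (30, 60), (10, 60)], 10)
print(w, alloc)  # 25500.0 [3, 4, 3] -- runs concentrated in the busy period
```

The stage structure (one period per stage, state = runs already committed) is what makes the global optimum reachable without enumerating all allocations explicitly.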

  16. Vector optimization set-valued and variational analysis

    CERN Document Server

    Chen, Guang-ya; Yang, Xiaogi

    2005-01-01

    This book is devoted to vector or multiple criteria approaches in optimization. Topics covered include: vector optimization, vector variational inequalities, vector variational principles, vector minmax inequalities and vector equilibrium problems. In particular, problems with variable ordering relations and set-valued mappings are treated. The nonlinear scalarization method is extensively used throughout the book to deal with various vector-related problems. The results presented are original and should be interesting to researchers and graduates in applied mathematics and operations research

  17. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, the most phenotypic variation captured by a sampling method in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  18. Set optimization and applications the state of the art : from set relations to set-valued risk measures

    CERN Document Server

    Heyde, Frank; Löhne, Andreas; Rudloff, Birgit; Schrage, Carola

    2015-01-01

    This volume presents five surveys with extensive bibliographies and six original contributions on set optimization and its applications in mathematical finance and game theory. The topics range from more conventional approaches that look for minimal/maximal elements with respect to vector orders or set relations, to the new complete-lattice approach that comprises a coherent solution concept for set optimization problems, along with existence results, duality theorems, optimality conditions, variational inequalities and theoretical foundations for algorithms. Modern approaches to scalarization methods can be found as well as a fundamental contribution to conditional analysis. The theory is tailor-made for financial applications, in particular risk evaluation and [super-]hedging for market models with transaction costs, but it also provides a refreshing new perspective on vector optimization. There is no comparable volume on the market, making the book an invaluable resource for researchers working in vector o...

  19. Optimal regional biases in ECB interest rate setting

    NARCIS (Netherlands)

    Arnold, I.J.M.

    2005-01-01

    This paper uses a simple model of optimal monetary policy to consider whether the influence of national output and inflation rates on ECB interest rate setting should equal a country’s weight in the eurozone economy. The findings depend on assumptions regarding interest rate elasticities, exchange

  20. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  1. Constructing DNA Barcode Sets Based on Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Bin; Zheng, Xuedong; Zhou, Shihua; Zhou, Changjun; Wei, Xiaopeng; Zhang, Qiang; Wei, Ziqi

    2018-01-01

    Following the completion of the human genome project, a large amount of high-throughput bio-data was generated. To analyze these data, massively parallel sequencing, namely next-generation sequencing, was rapidly developed. DNA barcodes are used to identify the ownership between sequences and samples when they are attached at the beginning or end of sequencing reads. Constructing DNA barcode sets provides the candidate DNA barcodes for this application. To increase the accuracy of DNA barcode sets, a particle swarm optimization (PSO) algorithm has been modified and used to construct the DNA barcode sets in this paper. Compared with the extant results, some lower bounds of DNA barcode sets are improved. The results show that the proposed algorithm is effective in constructing DNA barcode sets.

  2. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account the tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by load-serving entities (LSE) to estimate a certain margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  3. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

    Directory of Open Access Journals (Sweden)

    Muhammad Murtadha Othman

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account the tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by load-serving entities (LSE) to estimate a certain margin of transfer capability so that reliable access to generation through the interconnected system can be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account the system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies is conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is the flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  4. Multi-objective Reactive Power Optimization Based on Improved Particle Swarm Algorithm

    Science.gov (United States)

    Cui, Xue; Gao, Jian; Feng, Yunbin; Zou, Chenlu; Liu, Huanlei

    2018-01-01

    In this paper, an optimization model with minimum active power loss, minimum node voltage deviation, and maximum static voltage stability margin as the optimization objectives is proposed for reactive power optimization problems. By defining an index value for reactive power compensation, the optimal reactive power compensation nodes are selected. The particle swarm optimization algorithm is improved by introducing a selection pool for the global optimum and a probabilistic global optimum (p-gbest). A set of Pareto optimal solutions is obtained by this algorithm, and by calculating the fuzzy membership value of the Pareto optimal solutions, the individual with the smallest fuzzy membership value is selected as the final optimization result. The improved algorithm is used to optimize the reactive power of the IEEE 14-bus standard node system. Comparison and analysis of the results show that the optimization effect of this algorithm is very good.
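
The fuzzy-membership step can be sketched in a common textbook form (the exact membership formula, and whether the smallest or largest value is preferred, vary between papers; this hypothetical sketch picks the solution with the largest normalized membership over minimization objectives):

```python
# Hedged sketch of a common fuzzy-membership rule for picking one
# compromise solution out of a Pareto set. For each minimization objective
# j, membership mu_j = (f_max_j - f_j) / (f_max_j - f_min_j); the solution
# with the largest normalized membership sum wins. Generic textbook form,
# not necessarily the exact formula used in this paper.
def fuzzy_compromise(front):
    """front: list of objective vectors (all minimized). Returns an index."""
    m = len(front[0])
    f_min = [min(p[j] for p in front) for j in range(m)]
    f_max = [max(p[j] for p in front) for j in range(m)]
    def mu(p):  # membership sum for one solution
        return sum((f_max[j] - p[j]) / ((f_max[j] - f_min[j]) or 1.0)
                   for j in range(m))
    scores = [mu(p) for p in front]
    total = sum(scores)
    norm = [s / total for s in scores]   # normalize across the front
    return max(range(len(front)), key=norm.__getitem__)

front = [(1.0, 9.0), (3.0, 4.0), (9.0, 1.0)]
print(fuzzy_compromise(front))  # 1 -- the balanced middle solution
```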

  5. Practical solutions for multi-objective optimization: An application to system reliability design problems

    International Nuclear Information System (INIS)

    Taboada, Heidi A.; Baheranwala, Fatema; Coit, David W.; Wattanapongsakorn, Naruemon

    2007-01-01

    For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. In the second approach, we used data mining clustering techniques to group the data by using the k-means algorithm to find clusters of similar solutions. This provides the decision maker with just k general solutions to choose from. With this second method, from the clustered Pareto optimal set, we attempted to find solutions which are likely to be more relevant to the decision maker. These are solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem was solved as a multiple objective problem by using the NSGA genetic algorithm to initially find the Pareto optimal solutions, and then, the two proposed methods are applied to prune the Pareto set
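
The second pruning approach can be sketched with a tiny self-contained k-means (an illustration of the idea only, not the authors' implementation, which applies k-means clustering to NSGA output):

```python
# Hedged sketch: cluster a large Pareto set with k-means and hand the
# decision maker one representative per cluster (the member closest to the
# cluster centroid). Minimal implementation for illustration.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def representatives(points, k):
    centers, clusters = kmeans(points, k)
    return [min(cl, key=lambda p: dist2(p, c))
            for c, cl in zip(centers, clusters) if cl]

pareto = [(0.1, 9.0), (0.2, 8.5), (4.0, 4.2), (4.2, 4.0), (8.8, 0.3), (9.0, 0.1)]
print(sorted(representatives(pareto, 3)))
```

The decision maker then compares only the k representatives instead of the full front.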

  6. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  7. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Nonsupervised pattern recognition is addressed and the concept of fuzzy sets is explored in order to provide the investigator (data analyst) additional information supplied by the pattern class membership values apart from the classical pattern class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature of the fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the method suggested are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)

  8. Generalized Pareto optimum and semi-classical spinors

    Science.gov (United States)

    Rouleux, M.

    2018-02-01

    In 1971, S. Smale presented a generalization of Pareto optimum he called the critical Pareto set. The underlying motivation was to extend Morse theory to several functions, i.e. to find a Morse theory for m differentiable functions defined on a manifold M of dimension ℓ. We use this framework to take a 2 × 2 Hamiltonian ℋ = ℋ(p), with entries in C∞(T*R²), to its normal form near a singular point of the Fresnel surface. Namely, we say that ℋ has the Pareto property if it decomposes, locally, up to a conjugation with regular matrices, as ℋ(p) = u′(p)C(p)(u′(p))*, where u : R² → R² has singularities of codimension 1 or 2, and C(p) is a regular Hermitian matrix (“integrating factor”). In particular this applies in certain cases to the matrix Hamiltonian of elasticity theory and its (relative) perturbations of order 3 in momentum at the origin.

  9. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.

  10. Optimal Set Anode Potentials Vary in Bioelectrochemical Systems

    KAUST Repository

    Wagner, Rachel C.

    2010-08-15

    In bioelectrochemical systems (BESs), the anode potential can be set to a fixed voltage using a potentiostat, but there is no accepted method for defining an optimal potential. Microbes can theoretically gain more energy by reducing a terminal electron acceptor with a more positive potential, for example oxygen compared to nitrate. Therefore, more positive anode potentials should allow microbes to gain more energy per electron transferred than a lower potential, but this can only occur if the microbe has metabolic pathways capable of capturing the available energy. Our review of the literature shows that there is a general trend of improved performance using more positive potentials, but there are several notable cases where biofilm growth and current generation improved or only occurred at more negative potentials. This suggests that even with diverse microbial communities, it is primarily the potential of the terminal respiratory proteins used by certain exoelectrogenic bacteria, and to a lesser extent the anode potential, that determines the optimal growth conditions in the reactor. Our analysis suggests that additional bioelectrochemical investigations of both pure and mixed cultures, over a wide range of potentials, are needed to better understand how to set and evaluate optimal anode potentials for improving BES performance. © 2010 American Chemical Society.

  11. Optimization Settings in the Fuzzy Combined Mamdani PID Controller

    Science.gov (United States)

    Kudinov, Y. I.; Pashchenko, F. F.; Pashchenko, A. F.; Kelina, A. Y.; Kolesnikov, V. A.

    2017-11-01

    In the present work the topical problem of determining the optimal settings of a fuzzy parallel proportional-integral-derivative (PID) controller is considered, for controlling nonlinear plants where classical linear PID controllers are not always adequate. In contrast to linear PID controllers, there are no analytical methods for calculating the settings of fuzzy PID controllers. In this paper, we develop a numerical optimization approach to determining the coefficients of a fuzzy PID controller. A decomposition method of optimization is proposed, the essence of which is as follows. All homogeneous coefficients are distributed into corresponding groups, for example, the three error coefficients, the three error-change coefficients, and the three output coefficients of the P, I and D components. For each such group in turn, a search algorithm determines the coefficients under which the transition process satisfies all applicable constraints. Thus, with the help of Matlab and Simulink, the coefficients of a fuzzy PID controller that meet the accepted limitations on the transition process were found in a reasonable time.
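
The decomposition idea — optimizing one group of homogeneous coefficients at a time — can be sketched as coordinate-group descent on a stand-in cost function (the real criterion would be a simulated transient response; the toy quadratic cost and all values here are illustrative assumptions):

```python
# Hedged sketch of group-wise coordinate descent: sweep over groups of
# coefficients, trying a finite set of candidate values for each coordinate
# and keeping the best, in place of a simulated transient-response score.
def group_descent(cost, groups, x0, candidates, sweeps=3):
    """groups: list of index lists; candidates: values tried per coordinate."""
    x = list(x0)
    for _ in range(sweeps):
        for g in groups:
            for i in g:
                x[i] = min(candidates, key=lambda v: cost(x[:i] + [v] + x[i + 1:]))
    return x

# toy separable cost with optimum at (1, 2, 3), standing in for the
# transient-process criterion; two groups mimic the grouped coefficients
cost = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2 + (x[2] - 3) ** 2
best = group_descent(cost, [[0], [1, 2]], [0, 0, 0], [v / 2 for v in range(9)])
print(best)  # [1.0, 2.0, 3.0]
```

On non-separable costs several sweeps are needed, which is why the outer loop repeats the group passes.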

  12. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    We consider a relatively new and rapidly developing class of methods for solving multi-objective optimization problems, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front, of the problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper by J.-H. Ryu, S. Kim and H. Wan, is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of experimental design theory, which involve calculating the values of these functions at the nodes of a grid covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube-based and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization tasks have been used to study the efficiency of the following grids: the "Latin hypercube"; a grid that is uniformly random in each dimension; and a grid based on LPτ sequences.
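
One of the sensing grids above, the Latin hypercube, can be sketched as a basic stratified design (illustrative only; the LPτ and uniformly random grids studied in the paper are not shown):

```python
# Hedged sketch of Latin hypercube sampling over a hypercube trust region:
# each dimension is split into n equal strata, and exactly one point lands
# in each stratum of each dimension.
import random

def latin_hypercube(n, dim, lo=0.0, hi=1.0, seed=0):
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        perm = list(range(n))          # one stratum index per point
        rng.shuffle(perm)              # shuffled independently per dimension
        cols.append([lo + (hi - lo) * (p + rng.random()) / n for p in perm])
    return list(zip(*cols))

pts = latin_hypercube(5, 2)
# every 1/5-wide slab in each dimension contains exactly one point
for d in range(2):
    print(sorted(int(p[d] * 5) for p in pts))  # [0, 1, 2, 3, 4] both times
```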

  13. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  14. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  15. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  16. Robust bayesian inference of generalized Pareto distribution ...

    African Journals Online (AJOL)

    Using an exhaustive Monte Carlo study, we show that, given a suitable generalized loss function, a robust Bayesian estimator of the model can be constructed. Key words: Bayesian estimation; Extreme value; Generalized Fisher information; Generalized Pareto distribution; Monte Carlo; ...

  17. Axiomatizations of Pareto Equilibria in Multicriteria Games

    NARCIS (Netherlands)

    Voorneveld, M.; Vermeulen, D.; Borm, P.E.M.

    1997-01-01

    We focus on axiomatizations of the Pareto equilibrium concept in multicriteria games based on consistency. Axiomatizations of the Nash equilibrium concept by Peleg and Tijs (1996) and Peleg, Potters, and Tijs (1996) have immediate generalizations. The axiomatization of Norde et al. (1996) cannot be

  18. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Science.gov (United States)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the distribution problem of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results of each algorithm proves the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power optimization configuration solution.
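For readers unfamiliar with the baseline method, a textbook global-best particle swarm optimizer minimizing the sphere function might look like the sketch below. This is not the paper's improved variant; the inertia and acceleration coefficients are common defaults (w = 0.7, c1 = c2 = 1.5).

```python
import random

# Minimal global-best PSO minimizing the sphere function f(x) = sum(x_i^2).
# A textbook sketch of the classical algorithm, not the improved variant
# evaluated in the paper.
random.seed(1)

def pso(f, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
```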

  19. Estimation of the shape parameter of a generalized Pareto distribution based on a transformation to Pareto distributed variables

    OpenAIRE

    van Zyl, J. Martin

    2012-01-01

    Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of generalized Pareto distributed variables using transformed observations, based on the probability weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs well wit...
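The transformation the abstract refers to can be sketched as follows: if X follows a generalized Pareto distribution with shape xi > 0 and scale s, then Y = 1 + xi·X/s is Pareto with minimum 1 and tail index alpha = 1/xi, for which the maximum likelihood estimator has a closed form. Parameter values below are illustrative.

```python
import math
import random

# Sketch of the GPD -> Pareto transformation: for X ~ GPD(shape xi > 0,
# scale s), Y = 1 + xi*X/s is Pareto(minimum 1, tail index alpha = 1/xi),
# and the Pareto MLE is the closed form n / sum(log y_i).
random.seed(2)
xi, s, n = 0.25, 2.0, 50000

# inverse-CDF draws from the GPD: X = s/xi * (U**(-xi) - 1), U ~ Uniform(0,1]
x = [s / xi * ((1 - random.random()) ** (-xi) - 1) for _ in range(n)]
y = [1 + xi * xv / s for xv in x]            # transformed Pareto(1, alpha) sample

alpha_hat = n / sum(math.log(v) for v in y)  # closed-form Pareto MLE
xi_hat = 1 / alpha_hat                       # back to the GPD shape parameter
```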

  20. Distributed approximation of Pareto surfaces in multicriteria radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Bokrantz, Rasmus

    2013-01-01

    We consider multicriteria radiation therapy treatment planning by navigation over the Pareto surface, implemented by interpolation between discrete treatment plans. Current state of the art for calculation of a discrete representation of the Pareto surface is to sandwich this set between inner and outer approximations that are updated one point at a time. In this paper, we generalize this sequential method to an algorithm that permits parallelization. The principle of the generalization is to apply the sequential method to an approximation of an inexpensive model of the Pareto surface. The information gathered from the model is subsequently used for the calculation of points from the exact Pareto surface, which are processed in parallel. The model is constructed according to the current inner and outer approximations, and given a shape that is difficult to approximate, in order to avoid that parts of the Pareto surface are incorrectly disregarded. Approximations of comparable quality to those generated by the sequential method are demonstrated when the degree of parallelization is up to twice the number of dimensions of the objective space. For practical applications, the number of dimensions is typically at least five, so that a speed-up of one order of magnitude is obtained. (paper)

  2. Multi-objective optimization approach for air traffic flow management

    Directory of Open Access Journals (Sweden)

    Fadil Rabie

    2017-01-01

    The decision-making stage was then performed with the aid of data clustering techniques to reduce the size of the Pareto-optimal set and obtain a smaller representation of the multi-objective design space, thereby making it easier for the decision-maker to find satisfactory and meaningful trade-offs and to select a preferred final design solution.

  3. Spectral-Efficiency - Illumination Pareto Front for Energy Harvesting Enabled VLC System

    KAUST Repository

    Abdelhady, Amr Mohamed Abdelaziz

    2017-12-13

    The continuous improvement in optical energy harvesting devices motivates visible light communication (VLC) system developers to utilize such available free energy sources. An outdoor VLC system is considered where an optical base station sends data to multiple users that are capable of harvesting the optical energy. The proposed VLC system serves multiple users using time division multiple access (TDMA) with unequal time and power allocation, which are allocated to improve the system performance. The adopted optical system provides users with illumination and data communication services. The outdoor optical design objective is to maximize the illumination, while the communication design objective is to maximize the spectral efficiency (SE). The design objectives are shown to be conflicting, therefore, a multiobjective optimization problem is formulated to obtain the Pareto front performance curve for the proposed system. To this end, the marginal optimization problems are solved first using low complexity algorithms. Then, based on the proposed algorithms, a low complexity algorithm is developed to obtain an inner bound of the Pareto front for the illumination-SE tradeoff. The inner bound for the Pareto-front is shown to be close to the optimal Pareto-frontier via several simulation scenarios for different system parameters.

  4. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time current protection as an example and selecting reliability, selectivity, quick action, and flexibility of the relay protection as the four requirements for establishing the optimization targets, with the protection setting values optimized over the whole system. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and it is suitable for optimizing setting values in the relay protection of the whole power system.

  5. The Pareto Analysis for Establishing Content Criteria in Surgical Training.

    Science.gov (United States)

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-01-01

    Current surgical training is still highly dependent on expensive operating room (OR) experience. Although there have been many attempts to transfer more training to the skills laboratory, little research is focused on which technical behaviors can lead to the highest profit when they are trained outside the OR. The Pareto principle states that in any population that contributes to a common effect, a few account for the bulk of the effect. This principle has been widely used in business management to increase company profits. This study uses the Pareto principle for establishing content criteria for more efficient surgical training. A retrospective study was conducted to assess verbal guidance provided by 9 supervising surgeons to 12 trainees performing 64 laparoscopic cholecystectomies in the OR. The verbal corrections were documented, tallied, and clustered according to the aimed change in novice behavior. The corrections were rank ordered, and a cumulative distribution curve was used to calculate which corrections accounted for 80% of the total number of verbal corrections. In total, 253 different verbal corrections were uttered 1587 times and were categorized into 40 different clusters of aimed changes in novice behaviors. The 35 highest-ranking verbal corrections (14%) and the 11 highest-ranking clusters (28%) accounted for 80% of the total number of given verbal corrections. Following the Pareto principle, we were able to identify the aspects of trainee behavior that account for most corrections given by supervisors during a laparoscopic cholecystectomy on humans. This strategy can be used for the development of new training programs to prepare the trainee in advance for the challenges encountered in the clinical setting in an OR. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
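The cumulative 80% computation used in the study can be sketched as follows; the correction tallies below are invented for illustration, not the study's data.

```python
# Sketch of the Pareto (80/20) computation: rank-order the tallied verbal
# corrections and find how many account for 80% of the total. The counts
# are hypothetical, not the study's data.

def pareto_cutoff(counts, share=0.80):
    """Return (k, top) where the k highest tallies reach `share` of the total."""
    ranked = sorted(counts, reverse=True)
    target = share * sum(ranked)
    running = 0
    for k, c in enumerate(ranked, start=1):
        running += c
        if running >= target:
            return k, ranked[:k]
    return len(ranked), ranked

correction_counts = [40, 25, 12, 8, 6, 4, 2, 2, 1]   # hypothetical tallies
k, top = pareto_cutoff(correction_counts)
```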

  6. Optimality Conditions in Differentiable Vector Optimization via Second-Order Tangent Sets

    International Nuclear Information System (INIS)

    Jimenez, Bienvenido; Novo, Vicente

    2004-01-01

    We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Frechet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional so that there is no gap with necessary conditions. Lagrange multiplier rules are also given

  7. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic; Le Maî tre, Olivier P.; Hoteit, Ibrahim; Knio, Omar

    2018-01-01

    , and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using

  8. A New Generalization of the Pareto Distribution and Its Application to Insurance Data

    Directory of Open Access Journals (Sweden)

    Mohamed E. Ghitany

    2018-02-01

    Full Text Available The classical Pareto distribution is one of the most attractive in statistics, particularly in the setting of actuarial statistics and finance. For example, it is widely used when calculating reinsurance premiums. In recent years, many alternative distributions have been proposed to obtain better fits, especially when the tail of the empirical distribution of the data is very long. In this work, an alternative generalization of the Pareto distribution is proposed and its properties are studied. Finally, an application of the proposed model to an earthquake insurance data set is presented.
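As a concrete instance of the reinsurance use mentioned above, the stop-loss premium E[(X - d)+] under a classical Pareto(alpha, x_m) severity has a closed form that can be checked by Monte Carlo. The parameter values below are illustrative.

```python
import random

# Stop-loss (excess-of-loss) premium under a classical Pareto severity:
# for d >= x_m and alpha > 1,  E[(X - d)+] = x_m**alpha * d**(1 - alpha) / (alpha - 1).
# Closed form checked against Monte Carlo; parameters are illustrative.
random.seed(3)
alpha, x_m, d = 2.5, 1.0, 4.0

def stop_loss_exact(alpha, x_m, d):
    return x_m ** alpha * d ** (1 - alpha) / (alpha - 1)

n = 500000
draws = (x_m * (1 - random.random()) ** (-1 / alpha) for _ in range(n))
mc = sum(max(x - d, 0.0) for x in draws) / n   # Monte Carlo estimate

exact = stop_loss_exact(alpha, x_m, d)
```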

  9. Evaluation of Preanalytical Quality Indicators by Six Sigma and Pareto's Principle.

    Science.gov (United States)

    Kulkarni, Sweta; Ramesh, R; Srinivasan, A R; Silvia, C R Wilma Delphine

    2018-01-01

    Preanalytical steps are the major source of error in the clinical laboratory. Analytical errors can be corrected by quality control procedures, but there is a need for stringent quality checks in the preanalytical area, as these processes are performed outside the laboratory. The sigma value depicts the performance of the laboratory and its quality measures. Hence, in the present study, six sigma and the Pareto principle were applied to preanalytical quality indicators to evaluate clinical biochemistry laboratory performance. This observational study was carried out over a period of 1 year, from November 2015 to November 2016. A total of 1,44,208 samples and 54,265 test requisition forms were screened for preanalytical errors such as missing patient information or sample collection details in forms and hemolysed, lipemic, inappropriate, or insufficient samples, and the total numbers of errors were calculated and converted into defects per million and the sigma scale. A Pareto chart was drawn using the total number of errors and the cumulative percentage. In 75% of test requisition forms the diagnosis was not mentioned, giving a sigma value of 0.9; for other errors such as sample receiving time, stat requests, and type of sample, the sigma values were 2.9, 2.6, and 2.8, respectively. For insufficient sample and improper ratio of blood to anticoagulant, the sigma value was 4.3. The Pareto chart depicts that of the 80% of errors in requisition forms, 20% are contributed by missing information such as the diagnosis. The development of quality indicators and the application of six sigma and the Pareto principle are quality measures by which not only the preanalytical phase but the total testing process can be improved.
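The conversion from a defect rate to a long-term sigma level, with the conventional 1.5-sigma shift, can be sketched as follows. Applied to the 75% missing-diagnosis rate it gives a value close to the 0.9 sigma reported above (the exact figure depends on the convention used).

```python
from statistics import NormalDist

# Converting a defect rate (defects per million opportunities, DPMO) to a
# long-term sigma level with the conventional 1.5-sigma shift.
def sigma_level(defects, opportunities):
    dpmo = 1e6 * defects / opportunities
    return NormalDist().inv_cdf(1 - dpmo / 1e6) + 1.5

# e.g. 75 of 100 requisition forms missing the diagnosis -> below 1 sigma
missing_diag = sigma_level(75, 100)
```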

  10. Pareto vs Simmel: residui ed emozioni

    Directory of Open Access Journals (Sweden)

    Silvia Fornari

    2017-08-01

    Full Text Available One hundred years after the publication of the Trattato di sociologia generale (Pareto 1988), we keep the study of Pareto alive and current with a contemporary rereading of his thought. Remembered by economists for his great intellectual versatility, he remains a rigorous and analytical scientist whose contributions are still discussed internationally. We analyze the aspects that led him to approach sociology, with the introduction of his well-known distinction of social action: logical and non-logical. This dichotomy is used to account for social changes concerning the ways men and women act. As is well known, logical actions are behaviors driven by logic and reasoning, in which there is a direct cause-effect relation; such actions are the object of study of economists, and sociologists do not deal with them. Non-logical actions comprise all the types of human action that fall within the scope of the social sciences, and they represent the largest part of social action. They are actions guided by sentiments, emotionality, superstition, etc., illustrated by Pareto in the Trattato di sociologia generale and in later essays, where he also takes up the concept of the heterogenesis of ends, first formulated by Giambattista Vico. According to this concept, human history, while retaining in potential the realization of certain ends, is not linear, and along its evolutionary path it may happen that man, in trying to reach one end, arrives at the opposite conclusion. Pareto links the Neapolitan philosopher's definition to the typologies of social action and their distinction (logical, non-logical). For Pareto, the heterogenesis of ends is therefore the outcome of a particular type of non-logical action of the human being and of the collectivity.

  11. Sensitivity versus accuracy in multiclass problems using memetic Pareto evolutionary neural networks.

    Science.gov (United States)

    Fernández Caballero, Juan Carlos; Martínez, Francisco José; Hervás, César; Gutiérrez, Pedro Antonio

    2010-05-01

    This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate level and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (extremes in the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.

  12. Optimizing Distributed Machine Learning for Large Scale EEG Data Set

    Directory of Open Access Journals (Sweden)

    M Bilal Shaikh

    2017-06-01

    Full Text Available Distributed Machine Learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study using an EEG data set collected through the P300 Speller component of an ERP (Event Related Potential), which is widely used in BCI problems; it helps in translating the intention of the subject while performing any cognitive task. EEG data contains noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an Elastic CV Classifier. To present a case of optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that could improve the accuracy of distributed machine learners on average. Our results show better average AUC compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning; the improvement in the average accuracy of the distributed learner is due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.

  13. A multicriteria framework with voxel-dependent parameters for radiotherapy treatment plan optimization

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Uribe-Sanchez, Andres F.; Li, Nan; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To establish a new mathematical framework for radiotherapy treatment optimization with voxel-dependent optimization parameters. Methods: In the treatment plan optimization problem for radiotherapy, a clinically acceptable plan is usually generated by an optimization process with weighting factors or reference doses adjusted for a set of the objective functions associated to the organs. Recent discoveries indicate that adjusting parameters associated with each voxel may lead to better plan quality. However, it is still unclear regarding the mathematical reasons behind it. Furthermore, questions about the objective function selection and parameter adjustment to assure Pareto optimality as well as the relationship between the optimal solutions obtained from the organ-based and voxel-based models remain unanswered. To answer these questions, the authors establish in this work a new mathematical framework equipped with two theorems. Results: The new framework clarifies the different consequences of adjusting organ-dependent and voxel-dependent parameters for the treatment plan optimization of radiation therapy, as well as the impact of using different objective functions on plan qualities and Pareto surfaces. 
The main discoveries are threefold: (1) While in the organ-based model the selection of the objective function has an impact on the quality of the optimized plans, this is no longer an issue for the voxel-based model since the Pareto surface is independent of the objective function selection and the entire Pareto surface could be generated as long as the objective function satisfies certain mathematical conditions; (2) All Pareto solutions generated by the organ-based model with different objective functions are parts of a unique Pareto surface generated by the voxel-based model with any appropriate objective function; (3) A much larger Pareto surface is explored by adjusting voxel-dependent parameters than by adjusting organ-dependent parameters, possibly

  14. Towards a seascape typology. I. Zipf versus Pareto laws

    Science.gov (United States)

    Seuront, Laurent; Mitchell, James G.

    Two data analysis methods, referred to as the Zipf and Pareto methods, initially introduced in economics and linguistics two centuries ago and subsequently used in a wide range of fields (word frequency in languages and literature, human demographics, finance, city formation, genomics and physics), are described and proposed here as a potential tool to classify space-time patterns in marine ecology. The aim of this paper is, first, to present the theoretical bases of Zipf and Pareto laws, and to demonstrate that they are strictly equivalent. In that way, we provide a one-to-one correspondence between their characteristic exponents and argue that the choice of technique is a matter of convenience. Second, we argue that the appeal of this technique is that it is assumption-free for the distribution of the data and regularity of sampling interval, as well as being extremely easy to implement. Finally, in order to allow marine ecologists to identify and classify any structure in their data sets, we provide a step by step overview of the characteristic shapes expected for Zipf's law for the cases of randomness, power law behavior, power law behavior contaminated by internal and external noise, and competing power laws illustrated on the basis of typical ecological situations such as mixing processes involving non-interacting and interacting species, phytoplankton growth processes and differential grazing by zooplankton.
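The one-to-one correspondence between the two exponents can be checked numerically: for a Pareto sample with tail index alpha, the log-log rank-size (Zipf) plot has slope approximately -1/alpha. A seeded sketch with alpha = 2:

```python
import math
import random

# Numerical check of the Zipf-Pareto correspondence: the r-th largest of a
# Pareto(alpha) sample scales like (n/r)**(1/alpha), so the log-log rank-size
# regression slope is approximately -1/alpha.
random.seed(4)
alpha = 2.0
sizes = sorted(((1 - random.random()) ** (-1 / alpha) for _ in range(20000)),
               reverse=True)

# least-squares slope of log(size) vs log(rank) over the top 1000 ranks
xs = [math.log(r) for r in range(1, 1001)]
ys = [math.log(sizes[r - 1]) for r in range(1, 1001)]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```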

  15. Topology optimization problems with design-dependent sets of constraints

    DEFF Research Database (Denmark)

    Schou, Marie-Louise Højlund

    Topology optimization is a design tool which is used in numerous fields. It can be used whenever the design is driven by weight and strength considerations. The basic concept of topology optimization is the interpretation of partial differential equation coefficients as effective material...... properties and designing through changing these coefficients. For example, consider a continuous structure. Then the basic concept is to represent this structure by small pieces of material that are coinciding with the elements of a finite element model of the structure. This thesis treats stress constrained...... structural topology optimization problems. For such problems a stress constraint for an element should only be present in the optimization problem when the structural design variable corresponding to this element has a value greater than zero. We model the stress constrained topology optimization problem...

  16. Multi-objective optimization of a series–parallel system using GPSIA

    International Nuclear Information System (INIS)

    Okafor, Ekene Gabriel; Sun Youchao

    2012-01-01

    The optimal solution of a multi-objective optimization problem (MOP) corresponds to a Pareto set that is characterized by a tradeoff between objectives. The Genetic Pareto Set Identification Algorithm (GPSIA), proposed for reliability-redundancy MOPs, is a hybrid technique which combines genetic and heuristic principles to generate non-dominated solutions. A series–parallel system with active redundancy is studied in this paper. Reliability and cost are the objective functions, subject to cost and weight constraints. The results reveal an evenly distributed non-dominated front. The distances between successive Pareto points were used to evaluate the general performance of the method. Plots were also used to show the computational results for the type of system studied, and the robustness of the technique is discussed in comparison with NSGA-II and SPEA-2.
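A brute-force miniature of the reliability-cost tradeoff in a series-parallel system with active redundancy. The component data are invented, and exhaustive enumeration stands in for the paper's GPSIA search, which matters only when the design space is too large to enumerate.

```python
from itertools import product

# Each of three series subsystems uses k redundant components in active
# parallel; subsystem reliability is 1 - (1 - r)**k. Component reliabilities
# and costs are illustrative, not the paper's data.
r = [0.80, 0.90, 0.85]     # component reliability per subsystem
c = [2.0, 3.0, 1.5]        # component cost per subsystem

def evaluate(config):
    reliability = 1.0
    for k, ri in zip(config, r):
        reliability *= 1 - (1 - ri) ** k
    cost = sum(k * ci for k, ci in zip(config, c))
    return reliability, cost

designs = [(cfg, *evaluate(cfg)) for cfg in product(range(1, 5), repeat=3)]

# non-dominated set: maximize reliability (index 1), minimize cost (index 2)
pareto = [d for d in designs
          if not any(o[1] >= d[1] and o[2] <= d[2]
                     and (o[1], o[2]) != (d[1], d[2]) for o in designs)]
```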

  17. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  18. Simulation-based robust optimization for signal timing and setting.

    Science.gov (United States)

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  19. Setting of the Optimal Parameters of Melted Glass

    Czech Academy of Sciences Publication Activity Database

    Luptáková, Natália; Matejíčka, L.; Krečmer, N.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 73-79 ISSN 1802-2308 Institutional support: RVO:68081723 Keywords : Striae * Glass * Glass melting * Regression * Optimal parameters Subject RIV: JH - Ceramics, Fire-Resistant Materials and Glass

  20. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic

    2018-03-18

    Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback–Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback–Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches, and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.
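A scalar special case of why projecting overabundant observations can be lossless: for y_i = theta + eps_i with known Gaussian noise and a Gaussian prior, projecting onto the constant vector (i.e. keeping only the sample mean) yields exactly the same posterior, and hence the same mutual information, as the full data set. The numbers are illustrative, and this sketch covers only the simplest instance of the paper's linear-Gaussian setting.

```python
import math

# Scalar linear-Gaussian sketch: y_i = theta + eps_i, eps_i ~ N(0, sigma2),
# prior theta ~ N(0, tau2). The sample mean is a sufficient statistic, so the
# 1-dimensional projection gives the same posterior as the full data set.
tau2, sigma2 = 4.0, 0.5          # prior variance, noise variance
y = [1.2, 0.8, 1.1, 0.9, 1.0, 1.3]
n = len(y)

# posterior from the full data set (prior mean 0)
post_var_full = 1 / (1 / tau2 + n / sigma2)
post_mean_full = post_var_full * (sum(y) / sigma2)

# posterior from the projected observation ybar ~ N(theta, sigma2 / n)
ybar = sum(y) / n
post_var_proj = 1 / (1 / tau2 + n / sigma2)
post_mean_proj = post_var_proj * (ybar * n / sigma2)

# mutual information (nats) between theta and the data, preserved by the projection
mi = 0.5 * math.log(tau2 / post_var_full)
```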

  1. Pareto analysis of critical factors affecting technical institution evaluation

    Directory of Open Access Journals (Sweden)

    Victor Gambhir

    2012-08-01

    Full Text Available With the change of education policy in 1991, more and more technical institutions are being set up in India. Some of these institutions provide quality education, but others merely concentrate on quantity. Stakeholders are in a state of confusion about the decision to select the best institute for their higher educational studies. Although various agencies, including the print media, provide rankings of these institutions every year, their results are controversial and biased. In this paper, the authors have made an endeavor to find the critical factors for technical institution evaluation from a literature survey. A Pareto analysis has also been performed to find the intensity of these critical factors in evaluation. This will not only help the stakeholders in taking the right decisions but will also help the management of institutions in benchmarking for identifying the most important critical areas to improve the existing system. This will in turn help the Indian economy.

  2. Tapped density optimisation for four agricultural wastes - Part II: Performance analysis and Taguchi-Pareto

    Directory of Open Access Journals (Sweden)

    Ajibade Oluwaseyi Ayodele

    2016-01-01

    Full Text Available In this second part of the discussion on tapped density optimisation for four agricultural wastes (particles of coconut, periwinkle, palm kernel and egg shells), a performance analysis is made on a comparative basis. This paper pioneers a study direction in which optimisation of process variables is pursued using the Taguchi method integrated with the Pareto 80-20 rule. Negative percentage improvements resulted when the optimal tapped density was compared with the average tapped density. However, the performance analysis between the optimal tapped density and the peak tapped density values yielded positive percentage improvements for the four filler particles. The performance analysis results validate the effectiveness of using the Taguchi method in improving the tapped density properties of the filler particles. The application of the Pareto 80-20 rule to the table of parameters and levels produced revised tables of parameters and levels, which helped to identify the factor-level settings of each parameter that are economical to optimality. The Pareto 80-20 rule also produced revised S/N response tables, which were used to identify the S/N ratios relevant to optimality.
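The larger-the-better signal-to-noise ratio used in Taguchi analyses of a response to be maximized, such as tapped density, is S/N = -10·log10(mean(1/y_i²)). A sketch with invented replicate values:

```python
import math

# Taguchi larger-the-better signal-to-noise ratio for a response to be
# maximized: S/N = -10 * log10( (1/n) * sum(1 / y_i**2) ).
# Replicate values below are illustrative, not the paper's measurements.
def sn_larger_is_better(ys):
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

sn = sn_larger_is_better([10.0, 10.0, 10.0])   # identical replicates of 10
```

Larger responses give a larger S/N, so the factor level with the highest mean S/N across replicates is preferred.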

  3. An Evolutionary Multi-objective Approach for Speed Tuning Optimization with Energy Saving in Railway Management

    OpenAIRE

    Chevrier , Rémy

    2010-01-01

    International audience; An approach to speed tuning in railway management is presented that optimizes both travel duration and energy saving. It is based on a state-of-the-art evolutionary algorithm with a Pareto approach, which provides the decision-maker with a set of diversified non-dominated solutions. A case study on the Gonesse connection (France) is also reported and analyzed.

  4. A clinical distance measure for evaluating treatment plan quality difference with Pareto fronts in radiotherapy

    Directory of Open Access Journals (Sweden)

    Kristoffer Petersson

    2017-07-01

    Full Text Available We present a clinical distance measure for Pareto front evaluation studies in radiotherapy, which we show strongly correlates (r = 0.74 and 0.90) with clinical plan quality evaluation. For five prostate cases, sub-optimal treatment plans located at a clinical distance of >0.32 (0.28–0.35) from fronts of Pareto optimal plans were assessed to be of lower plan quality by our 12 observers (p < .05). In conclusion, the clinical distance measure can be used to determine whether the difference between a front and a given plan (or between different fronts) corresponds to a clinically significant difference in plan quality.

  5. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    Science.gov (United States)

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
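
The splice described above can be sketched numerically. The snippet below stitches a generalized Pareto (GPD) upper tail onto a Gaussian body at a fixed threshold `u`, choosing the GPD scale so that the density is continuous and still integrates to one; in Carreau and Bengio's hybrid Pareto the junction point instead follows from smoothness conditions, so the fixed `u` and the parameter values here are illustrative assumptions, not the paper's construction:

```python
import math

def hybrid_pareto_pdf(y, mu=0.0, sigma=1.0, xi=0.3, u=1.5):
    """Gaussian body with a generalized Pareto (GPD) upper tail spliced
    on at threshold u.  The GPD scale beta is chosen so the density is
    continuous at u and still integrates to one.  NOTE: in the hybrid
    Pareto of Carreau and Bengio the junction point follows from
    smoothness conditions; the fixed threshold here is a simplification."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    if y <= u:
        return norm * math.exp(-0.5 * ((y - mu) / sigma) ** 2)
    # Gaussian mass above u, reassigned to the GPD tail
    tail_mass = 0.5 * (1.0 - math.erf((u - mu) / (sigma * math.sqrt(2.0))))
    g_u = norm * math.exp(-0.5 * ((u - mu) / sigma) ** 2)
    beta = tail_mass / g_u            # continuity: tail_mass / beta == g_u
    z = (y - u) / beta
    return (tail_mass / beta) * (1.0 + xi * z) ** (-1.0 / xi - 1.0)
```

The tail index `xi` controls the heaviness of the upper tail, which is the third parameter the abstract refers to beyond the Gaussian location and spread.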

  6. An Investigation of the Pareto Distribution as a Model for High Grazing Angle Clutter

    Science.gov (United States)

    2011-03-01

    radar detection schemes under controlled conditions. Complicated clutter models result in mathematical difficulties in the determination of optimal and...a population [7]. It has been used in the modelling of actuarial data; an example is in excess of loss quotations in insurance [8]. Its usefulness as...UNCLASSIFIED modified Bessel functions, making it difficult to employ in radar detection schemes. The Pareto Distribution is amenable to mathematical

  7. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    Science.gov (United States)

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem for a first-order-plus-dead-time process model subject to a robustness constraint based on the maximum sensitivity. A set of Pareto optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
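
The Nash-among-Pareto selection step can be sketched as follows. In the usual bargaining formulation the Nash solution maximizes the product of gains relative to a disagreement point; here the disagreement point is taken as the worst value of each (minimized) objective over the front, which is a common convention and only an assumption about the paper's exact setup:

```python
def nash_point(front):
    """Pick the Nash solution from a list of Pareto-optimal points
    (two objectives, both to be minimized).  The disagreement point is
    the per-objective worst value over the front (an assumed convention);
    the Nash solution maximizes the product of gains relative to it."""
    d1 = max(p[0] for p in front)
    d2 = max(p[1] for p in front)
    return max(front, key=lambda p: (d1 - p[0]) * (d2 - p[1]))
```

On a symmetric front such as `[(0, 10), (1, 6), (3, 3), (6, 1), (10, 0)]` this picks the balanced point `(3, 3)`, which is the kind of compromise between set-point and load-disturbance performance the abstract describes.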

  8. Depression screening optimization in an academic rural setting.

    Science.gov (United States)

    Aleem, Sohaib; Torrey, William C; Duncan, Mathew S; Hort, Shoshana J; Mecchella, John N

    2015-01-01

    Primary care plays a critical role in screening and management of depression. This paper focuses on leveraging the electronic health record (EHR) as well as work flow redesign to improve the efficiency and reliability of depression screening in two adult primary care clinics of a rural academic institution in the USA. The authors utilized various process improvement tools from lean six sigma methodology, including a project charter, swim lane process maps, a critical-to-quality tree, process control charts, fishbone diagrams, a frequency impact matrix, mistake proofing and a monitoring plan, in Define-Measure-Analyze-Improve-Control format. Interventions included a change in the depression screening tool, optimization of data entry in the EHR, follow-up of positive screens, staff training and EHR redesign. The depression screening rate for office-based primary care visits improved from 17.0 percent at baseline to 75.9 percent in the post-intervention control phase (p<0.001). Follow-up of positive depression screens with Patient Health Questionnaire-9 data collection remained above 90 percent. Duplication of depression screening increased from 0.6 percent initially to 11.7 percent and then decreased to 4.7 percent after optimization of data entry by patients and flow staff. The impact of the interventions on clinical outcomes could not be evaluated. Successful implementation, sustainability and revision of a process improvement initiative to facilitate screening, follow-up and management of depression in primary care requires accounting for the voice of the process (performance metrics), system limitations and the voice of the customer (staff and patients) to overcome various system, customer and human resource constraints.

  9. Pareto Improving Price Regulation when the Asset Market is Incomplete

    NARCIS (Netherlands)

    Herings, P.J.J.; Polemarchakis, H.M.

    1999-01-01

    When the asset market is incomplete, competitive equilibria are constrained suboptimal, which provides scope for Pareto improving interventions. Price regulation can be such a Pareto improving policy, even when the welfare effects of rationing are taken into account. An appealing aspect of price

  10. Pareto 80/20 Law: Derivation via Random Partitioning

    Science.gov (United States)

    Lipovetsky, Stan

    2009-01-01

    The Pareto 80/20 Rule, also known as the Pareto principle or law, states that a small number of causes (20%) is responsible for a large percentage (80%) of the effect. Although widely recognized as a heuristic rule, this proportion has not been theoretically based. The article considers derivation of this 80/20 rule and some other standard…
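
As a quick numerical aside (a hedged illustration, not Lipovetsky's derivation): an 80/20 split corresponds to a Pareto tail exponent alpha = ln 5 / ln 4 ≈ 1.16, since under a Pareto(alpha) distribution the top fraction q holds a share q^(1 - 1/alpha), which equals exactly 0.8 for q = 0.2. A small Monte Carlo check:

```python
import math
import random

def top_share(alpha, n=200_000, q=0.2, seed=1):
    """Share of total wealth held by the richest fraction q when wealth
    is drawn from a Pareto(alpha) distribution with x_min = 1
    (inverse-CDF sampling; 1 - U avoids a zero draw)."""
    rng = random.Random(seed)
    w = sorted(((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)),
               reverse=True)
    k = int(q * n)
    return sum(w[:k]) / sum(w)

alpha = math.log(5) / math.log(4)   # ~1.161, the exponent implied by 80/20
share = top_share(alpha)            # theoretical value: 0.2**(1 - 1/alpha) == 0.8
```

Because the distribution is heavy-tailed, the sampled share fluctuates around 0.8 rather than matching it exactly.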

  11. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm’s age, to obtain the empirical Pareto distribution.

  12. Optimization models using fuzzy sets and possibility theory

    CERN Document Server

    Orlovski, S

    1987-01-01

    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But also in other areas such as engineering design, regional policy, logistics and many others, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. the solutions were considered to be either feasible or unfeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeller to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be the solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, uncertainty due to inconsistent or incomplete evidence, if natural language has to be...

  13. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have become increasingly popular in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  14. Perturbing engine performance measurements to determine optimal engine control settings

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-12-30

    Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.

  15. Ranking of microRNA target prediction scores by Pareto front analysis.

    Science.gov (United States)

    Sahoo, Sudhakar; Albrecht, Andreas A

    2010-12-01

    Over the past ten years, a variety of microRNA target prediction methods has been developed, and many of the methods are constantly improved and adapted to recent insights into miRNA-mRNA interactions. In a typical scenario, different methods return different rankings of putative targets, even if the ranking is reduced to selected mRNAs that are related to a specific disease or cell type. For the experimental validation it is then difficult to decide in which order to process the predicted miRNA-mRNA bindings, since each validation is a laborious task and therefore only a limited number of mRNAs can be analysed. We propose a new ranking scheme that combines ranked predictions from several methods and - unlike standard thresholding methods - utilises the concept of Pareto fronts as defined in multi-objective optimisation. In the present study, we attempt a proof of concept by applying the new ranking scheme to hsa-miR-21, hsa-miR-125b, and hsa-miR-373 and prediction scores supplied by PITA and RNAhybrid. The scores are interpreted as a two-objective optimisation problem, and the elements of the Pareto front are ranked by the STarMir score with a subsequent re-calculation of the Pareto front after removal of the top-ranked mRNA from the basic set of prediction scores. The method is evaluated on validated targets of the three miRNAs, and the ranking is compared to scores from DIANA-microT and TargetScan. We observed that the new ranking method performs well and consistently, and the first validated targets are elements of Pareto fronts at a relatively early stage of the recurrent procedure, which encourages further research towards a higher-dimensional analysis of Pareto fronts. Copyright © 2010 Elsevier Ltd. All rights reserved.
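
The recurrent ranking scheme (compute the Pareto front of the remaining items from two score lists, pick the front's best element by a third score, remove it, and recompute) can be sketched as below. The identifiers and scores are toy values, and `tiebreak` stands in for the STarMir score used in the paper:

```python
def pareto_front(items, scores):
    """Non-dominated subset: item i stays on the front if no other item
    is at least as good in both scores and strictly better in one
    (higher score = better here)."""
    front = []
    for i in items:
        dominated = any(
            scores[j][0] >= scores[i][0] and scores[j][1] >= scores[i][1]
            and scores[j] != scores[i]
            for j in items if j != i)
        if not dominated:
            front.append(i)
    return front

def recurrent_ranking(scores, tiebreak):
    """Rank all items by repeatedly taking the Pareto front of the
    remaining items, choosing its best element by the tiebreak score,
    removing that element, and recomputing the front."""
    remaining = list(scores)
    ranking = []
    while remaining:
        front = pareto_front(remaining, scores)
        best = max(front, key=lambda i: tiebreak[i])
        ranking.append(best)
        remaining.remove(best)
    return ranking
```

For example, with `scores = {'a': (3, 1), 'b': (1, 3), 'c': (2, 2), 'd': (0, 0)}` the first front is `a`, `b`, `c`, and the tiebreak score decides their order before the dominated `d` is reached.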

  16. A set of rules for constructing an admissible set of D optimal exact ...

    African Journals Online (AJOL)

    In the search for a D-optimal exact design using the combinatorial iterative technique introduced by Onukogu and Iwundu, 2008, all the support points that make up the experimental region are grouped into H concentric balls according to their distances from the centre. Any selection of N support points from the balls defines ...

  17. Multi-objective optimization of a continuous bio-dissimilation process of glycerol to 1, 3-propanediol.

    Science.gov (United States)

    Xu, Gongxian; Liu, Ying; Gao, Qunwang

    2016-02-10

    This paper deals with multi-objective optimization of continuous bio-dissimilation process of glycerol to 1, 3-propanediol. In order to maximize the production rate of 1, 3-propanediol, maximize the conversion rate of glycerol to 1, 3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  19. Cameras and settings for optimal image capture from UAVs

    Science.gov (United States)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload platforms. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can lead to experiments being difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  20. Energy optimized Gaussian basis sets for the atoms Tl - Rn

    International Nuclear Information System (INIS)

    Faegri, K. Jr.

    1987-01-01

    Energy optimized Gaussian basis sets have been derived for the atoms Tl-Rn. Two sets are presented - a (20,16,10,6) set and a (22,17,13,8) set. The smaller sets yield atomic energies 107 to 123 mH above the numerical Hartree-Fock values, while the larger sets give energies 11 mH above the numerical results. Energy trends from the smaller sets indicate that reduced shielding by p-electrons may place a greater demand on the flexibility of the d- and f-orbital description for the lighter elements of the series.

  1. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  2. Multi-objective optimization of a joule cycle for re-liquefaction of the Liquefied Natural Gas

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaelahi, M.

    2011-01-01

    Highlights: → A typical LNG boil off gas re-liquefaction plant system is optimized. → Objective functions based on thermodynamic and thermoeconomic analysis are obtained. → The cost of the system product and the exergetic efficiency are optimized, simultaneously. → A decision-making process for selection of the final optimal design is introduced. → Results obtained using various optimization scenarios are compared and discussed. - Abstract: An LNG re-liquefaction plant is optimized with a multi-objective approach which simultaneously considers exergetic and exergoeconomic objectives. In this regard, optimization is performed in order to simultaneously maximize the exergetic efficiency of the plant and minimize the unit cost of the system product (refrigeration effect). Thermodynamic modeling is performed based on energy and exergy analyses, while an exergoeconomic model based on the total revenue requirement (TRR) is developed. Optimization programming in MATLAB is performed using one of the most powerful and robust multi-objective optimization algorithms, namely NSGA-II. This approach, which is based on the genetic algorithm, is applied to find a set of Pareto optimal solutions. The Pareto optimal frontier is obtained and a final optimal solution is selected in a decision-making process. An example of the decision-making process for selecting the final solution from the available optimal points of the Pareto frontier is presented here. The features of the selected final optimal system are compared with the corresponding features of the base case and of exergoeconomic single-objective optimized systems, and discussed.

  3. Coordinated Pitch & Torque Control of Large-Scale Wind Turbine Based on Pareto Efficiency Analysis

    DEFF Research Database (Denmark)

    Lin, Zhongwei; Chen, Zhenyu; Wu, Qiuwei

    2018-01-01

    For the existing pitch and torque control of the wind turbine generator system (WTGS), further development on coordinated control is necessary to improve effectiveness for practical applications. In this paper, the WTGS is modeled as a coupling combination of two subsystems: the generator torque control subsystem and the blade pitch control subsystem. Then, the pole positions in each control subsystem are adjusted coordinately to evaluate the controller participation and used as the objective of optimization. A two-level parameters-controllers coordinated optimization scheme is proposed and applied to optimize the controller coordination based on Pareto optimization theory. Three solutions are obtained through optimization: the optimal torque solution, the optimal power solution, and a satisfactory solution. Detailed comparisons evaluate the performance of the three selected solutions...

  4. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.

  5. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  6. A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization

    OpenAIRE

    Suguna, N.; Thanushkodi, K.

    2010-01-01

    Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, feature selection task is involved in datasets containing huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on Rough set theory hybrid with Bee Colony Optimization (BCO) in an attempt...

  7. Applying Pareto multi-criteria decision making in concurrent engineering: A case study of polyethylene industry

    Directory of Open Access Journals (Sweden)

    Akbar A. Tabriz

    2011-07-01

    Full Text Available Concurrent engineering (CE) is one of the most widely known techniques for simultaneous planning of product and process design. In concurrent engineering, design processes are often complicated by multiple conflicting criteria and discrete sets of feasible alternatives. Thus multi-criteria decision making (MCDM) techniques are integrated into CE to perform concurrent design. This paper proposes a design framework governed by an MCDM technique, in which criteria conflict in the sense of competing for common resources to achieve different performance objectives (financial, functional, environmental, etc.). The Pareto MCDM model is applied to the concurrent design of polyethylene pipe, governed by four criteria, to determine the best alternative as a Pareto-compromise design.

  8. Setting the optimal type of equipment to be adopted and the optimal time to replace it

    OpenAIRE

    Albici, Mihaela

    2009-01-01

    Mathematical models of equipment wear and replacement theory aim at deciding which type of equipment to purchase, the optimal exploitation time of the equipment, the time and ways to replace or repair it or to secure its spare parts, the equipment's performance in the context of technical progress, the opportunities to modernize it, etc.

  9. The geometry of the Pareto front in biological phenotype space

    Science.gov (United States)

    Sheftel, Hila; Shoval, Oren; Mayo, Avi; Alon, Uri

    2013-01-01

    When organisms perform a single task, selection leads to phenotypes that maximize performance at that task. When organisms need to perform multiple tasks, a trade-off arises because no phenotype can optimize all tasks. Recent work addressed this question, and assumed that the performance at each task decays with distance in trait space from the best phenotype at that task. Under this assumption, the best-fitness solutions (termed the Pareto front) lie on simple low-dimensional shapes in trait space: line segments, triangles and other polygons. The vertices of these polygons are specialists at a single task. Here, we generalize this finding, by considering performance functions of general form, not necessarily functions that decay monotonically with distance from their peak. We find that, except for performance functions with highly eccentric contours, simple shapes in phenotype space are still found, but with mildly curving edges instead of straight ones. In a wide range of systems, complex data on multiple quantitative traits, which might be expected to fill a high-dimensional phenotype space, is predicted instead to collapse onto low-dimensional shapes; phenotypes near the vertices of these shapes are predicted to be specialists, and can thus suggest which tasks may be at play. PMID:23789060

  10. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649

  11. Classification as clustering: a Pareto cooperative-competitive GP approach.

    Science.gov (United States)

    McIntyre, Andrew R; Heywood, Malcolm I

    2011-01-01

    Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.

  12. Kinetics of wealth and the Pareto law.

    Science.gov (United States)

    Boghosian, Bruce M

    2014-04-01

    An important class of economic models involve agents whose wealth changes due to transactions with other agents. Several authors have pointed out an analogy with kinetic theory, which describes molecules whose momentum and energy change due to interactions with other molecules. We pursue this analogy and derive a Boltzmann equation for the time evolution of the wealth distribution of a population of agents for the so-called Yard-Sale Model of wealth exchange. We examine the solutions to this equation by a combination of analytical and numerical methods and investigate its long-time limit. We study an important limit of this equation for small transaction sizes and derive a partial integrodifferential equation governing the evolution of the wealth distribution in a closed economy. We then describe how this model can be extended to include features such as inflation, production, and taxation. In particular, we show that the model with taxation exhibits the basic features of the Pareto law, namely, a lower cutoff to the wealth density at small values of wealth, and approximate power-law behavior at large values of wealth.
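
A minimal agent-based sketch of the Yard-Sale Model with redistribution follows (an illustrative discrete simulation, not the paper's Boltzmann-equation analysis; the flat redistributive tax applied once per sweep is an assumed stand-in for the paper's taxation mechanism):

```python
import random

def yard_sale(n=1000, sweeps=200, beta=0.1, tax=0.01, seed=0):
    """Yard-Sale Model sketch: in each transaction a fair coin moves a
    fraction beta of the *poorer* party's wealth to the winner, so no
    agent can be driven below zero.  After each sweep of n transactions
    a flat tax is collected and redistributed equally (an assumed
    stand-in for the taxation term in the kinetic treatment)."""
    rng = random.Random(seed)
    w = [1.0] * n                      # everyone starts equal
    for _ in range(sweeps):
        for _ in range(n):
            i, j = rng.randrange(n), rng.randrange(n)
            if i == j:
                continue
            stake = beta * min(w[i], w[j])
            if rng.random() < 0.5:
                w[i] += stake; w[j] -= stake
            else:
                w[i] -= stake; w[j] += stake
        avg = sum(w) / n
        w = [(1 - tax) * x + tax * avg for x in w]   # redistribution
    return w
```

Starting from perfect equality, the multiplicative trades concentrate wealth while the tax term keeps the distribution from fully condensing onto a single agent, qualitatively matching the behavior the abstract describes.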

  13. Level Set-Based Topology Optimization for the Design of an Electromagnetic Cloak With Ferrite Material

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Andkjær, Jacob Anders

    2013-01-01

    This paper presents a structural optimization method for the design of an electromagnetic cloak made of ferrite material. Ferrite materials exhibit a frequency-dependent degree of permeability, due to a magnetic resonance phenomenon that can be altered by changing the magnitude of an externally… A level set-based topology optimization method incorporating a fictitious interface energy is used to find optimized configurations of the ferrite material. The numerical results demonstrate that the optimization successfully found an appropriate ferrite configuration that functions as an electromagnetic…

  14. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem

  15. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  16. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    …analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its…

  17. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  18. Optimal Interest-Rate Setting in a Dynamic IS/AS Model

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2011-01-01

    This note deals with interest-rate setting in a simple dynamic macroeconomic setting. The purpose is to present some basic and central properties of an optimal interest-rate rule. The model framework predates the New-Keynesian paradigm of the late 1990s and onwards (it is accordingly dubbed “Old...

  19. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple…

  20. Utilization of reduced fuelling ripple set in ROP detector layout optimization

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an ROP detector layout optimization algorithm for CANDU reactors. ► The effect of using a reduced set of fuelling ripples in ADORE is assessed. ► Significant speedup can be realized by adopting this approach. ► The quality of the results is comparable to results from the full set of ripples. - Abstract: The ADORE (Alternative Detector layout Optimization for REgional overpower protection system) algorithm for performing the optimization of regional overpower protection (ROP) for CANDU® reactors has recently been developed. This algorithm utilizes the simulated annealing (SA) stochastic optimization technique to arrive at an optimized detector layout for ROP systems. For each history in the SA iteration where a particular detector layout is evaluated, the goodness of this detector layout is measured in terms of its trip set point value, which is obtained by performing a probabilistic trip set point calculation using the ROVER-F code. Since thousands of candidate detector layouts are evaluated during each optimization execution, the overall optimization process is time consuming. Since the number of fuelling ripples controls the execution time of each ROVER-F evaluation, reducing the number of fuelling ripples reduces the overall execution time. This approach has been investigated and the results are presented in this paper. The challenge is to construct a set of representative fuelling ripples which significantly speeds up the optimization process while guaranteeing that the resulting detector layout has similar quality to the ones produced when the complete set of fuelling ripples is employed.

  1. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of…
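Model selection by AIC, as used above, can be sketched for the simpler candidates that have closed-form maximum-likelihood fits (fitting the full DPLN mixture requires numerical optimization and is omitted here; the formulas below are standard MLEs, not the authors' code):

```python
import math

def aic_compare(data):
    """Compare closed-form MLE fits of three candidate distributions
    for positive data by AIC = 2k - 2*logL (k = number of fitted
    parameters); the smallest AIC indicates the preferred model."""
    n = len(data)
    logs = [math.log(x) for x in data]
    slog = sum(logs)

    # Lognormal: mu, sigma estimated from the log-data (k = 2).
    mu = slog / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    ll_lognorm = (-0.5 * n * math.log(2 * math.pi) - n * math.log(sigma)
                  - slog - 0.5 * n)

    # Exponential: rate = 1 / sample mean (k = 1).
    lam = n / sum(data)
    ll_exp = n * math.log(lam) - lam * sum(data)

    # Pareto with xmin fixed at min(data): alpha = n / sum(ln(x/xmin)) (k = 1).
    xmin = min(data)
    alpha = n / sum(math.log(x / xmin) for x in data)
    ll_pareto = (n * math.log(alpha) + n * alpha * math.log(xmin)
                 - (alpha + 1) * slog)

    return {"lognormal": 2 * 2 - 2 * ll_lognorm,
            "exponential": 2 * 1 - 2 * ll_exp,
            "pareto": 2 * 1 - 2 * ll_pareto}
```

On lognormally distributed data, the lognormal candidate should win by a wide AIC margin.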

  2. Application of the Pareto principle to identify and address drug-therapy safety issues.

    Science.gov (United States)

    Müller, Fabian; Dormann, Harald; Pfistermeister, Barbara; Sonst, Anja; Patapovas, Andrius; Vogler, Renate; Hartmann, Nina; Plank-Kiegele, Bettina; Kirchner, Melanie; Bürkle, Thomas; Maas, Renke

    2014-06-01

    Adverse drug events (ADE) and medication errors (ME) are common causes of morbidity in patients presenting at emergency departments (ED). Recognition of ADE as being drug related and prevention of ME are key to enhancing pharmacotherapy safety in the ED. We assessed the applicability of the Pareto principle (~80 % of effects result from 20 % of causes) to address locally relevant problems of drug therapy. In 752 cases consecutively admitted to the nontraumatic ED of a major regional hospital, ADE, ME, contributing drugs, preventability, and detection rates of ADE by ED staff were investigated. Symptoms, errors, and drugs were sorted by frequency in order to apply the Pareto principle. In total, 242 ADE were observed, and 148 (61.2 %) were assessed as preventable. ADE contributed to 110 inpatient hospitalizations. The ten most frequent symptoms were causally involved in 88 (80.0 %) inpatient hospitalizations. Only 45 (18.6 %) ADE were recognized as drug-related problems until discharge from the ED. A limited set of 33 drugs accounted for 184 (76.0 %) ADE; ME contributed to 57 ADE. Frequency-based listing of ADE, ME, and drugs involved allowed identification of the most relevant problems and development of easy-to-implement safety measures, such as wall and pocket charts. The Pareto principle provides a method for identifying the locally most relevant ADE, ME, and involved drugs. This permits subsequent development of interventions to increase patient safety in the ED admission process that best suit local needs.
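The frequency-based listing described above reduces to a simple computation: sort causes by frequency and keep the smallest prefix that covers a target share of events. A minimal sketch (the 0.8 threshold mirrors the ~80/20 heuristic; names are illustrative):

```python
from collections import Counter

def pareto_subset(events, share=0.8):
    """Return the smallest list of causes that together account for at
    least `share` of all observed events, most frequent cause first."""
    counts = Counter(events)
    total = sum(counts.values())
    picked, cum = [], 0
    for cause, n in counts.most_common():
        picked.append(cause)
        cum += n
        if cum / total >= share:
            break
    return picked
```

For example, if drug "a" accounts for 80 of 100 ADE, `pareto_subset` returns `["a"]` alone at the default 80 % threshold.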

  3. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
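The paper estimates GPD and exceedance-rate parameters in a Bayesian framework; as a much simpler sketch of the same peaks-over-threshold setup, the classical method-of-moments estimators for the GPD shape and scale can be applied to the excesses over a chosen threshold (this is an illustrative alternative, not the authors' method):

```python
def gpd_moment_fit(data, threshold):
    """Fit a Generalized Pareto Distribution to the excesses over
    `threshold` with the method-of-moments estimators
        xi    = (1 - m^2/s2) / 2
        sigma = m * (m^2/s2 + 1) / 2
    where m and s2 are the sample mean and variance of the excesses.
    Also returns the empirical exceedance rate."""
    y = [x - threshold for x in data if x > threshold]
    n = len(y)
    m = sum(y) / n
    s2 = sum((v - m) ** 2 for v in y) / (n - 1)
    xi = 0.5 * (1 - m * m / s2)
    sigma = 0.5 * m * (m * m / s2 + 1)
    return xi, sigma, n / len(data)  # shape, scale, exceedance rate
```

For exponential data (a GPD with shape xi = 0), the estimated shape should be near zero and the scale near the exponential mean of the excesses.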

  4. Multiswarm comprehensive learning particle swarm optimization for solving multiobjective optimization problems.

    Science.gov (United States)

    Yu, Xiang; Zhang, Xueqing

    2017-01-01

    Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.
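The nondominated solutions that MSCLPSO (and the other multiobjective methods in these records) maintains are defined by Pareto dominance. A naive O(n²) filter illustrates the concept (this is the definition only, not part of MSCLPSO itself):

```python
def nondominated(points):
    """Return the Pareto-nondominated subset of `points` for
    minimization: a point is kept iff no other point is <= in every
    objective and strictly < in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the bi-objective points (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the last two are dominated by (2, 2) and are filtered out.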

  5. Optimal testing input sets for reduced diagnosis time of nuclear power plant digital electronic circuits

    International Nuclear Information System (INIS)

    Kim, D.S.; Seong, P.H.

    1994-01-01

    This paper describes the optimal testing input sets required for the fault diagnosis of nuclear power plant digital electronic circuits. For complicated systems such as very large scale integration (VLSI) circuits, nuclear power plants (NPP), and aircraft, testing is the major factor in system maintenance. In particular, diagnosis time grows quickly with the complexity of the component. In this research, to reduce diagnosis time, the authors derived the optimal testing sets, i.e. the minimal testing sets required for detecting a failure and for locating the failed component. Among many conventional methods, the technique presented by Hayes best fits this approach to testing-set generation. However, that method has the following disadvantages: (a) it considers only simple networks, and (b) it determines only whether the system is in a failed state and does not provide a way to locate the failed component. The authors therefore derived optimal testing input sets that resolve these problems with Hayes' method while preserving its advantages. When they applied the optimal testing sets to the automatic fault diagnosis system (AFDS), which incorporates an advanced artificial intelligence fault diagnosis technique, they found that fault diagnosis using the optimal testing sets makes testing the digital electronic circuits much faster than using exhaustive testing input sets; when they applied them to test the Universal (UV) Card, a nuclear power plant digital input/output solid state protection system card, the testing time was reduced by a factor of up to about 100.

  6. OPTIMIZATION OF AGGREGATION AND SEQUENTIAL-PARALLEL EXECUTION MODES OF INTERSECTING OPERATION SETS

    Directory of Open Access Journals (Sweden)

    G. М. Levin

    2016-01-01

    Full Text Available A mathematical model and a method are proposed for the problem of optimizing the aggregation and the sequential-parallel execution modes of intersecting operation sets. The proposed method is based on a two-level decomposition scheme. At the top level the variant of aggregation for groups of operations is selected, and at the lower level the execution modes of the operations are optimized for a fixed variant of aggregation.

  7. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Full Text Available Composite models have received much attention in the recent actuarial literature as descriptions of heavy-tailed insurance loss data. One model that performs well for this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, the distribution is revisited to carry out parameter estimation via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.

  8. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating an image as a fuzzy set, several different fuzzy thresholding techniques have been proposed to remove these obstacles during threshold selection. In this paper, we propose an algorithm for thresholding images using ultrafuzziness optimization, which decreases the uncertainty of ordinary fuzzy sets by means of type II fuzzy sets. Optimization is conducted by measuring ultrafuzziness for the background and object fuzzy sets separately. Experimental results demonstrate that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.

  9. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.

  10. An Improved Particle Swarm Optimization for Solving Bilevel Multiobjective Programming Problem

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2012-01-01

    Full Text Available An improved particle swarm optimization (PSO) algorithm is proposed for solving the bilevel multiobjective programming problem (BLMPP). For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, which is different from most traditional algorithms designed for specific versions or based on specific assumptions. The BLMPP is transformed so as to solve multiobjective optimization problems in the upper and lower levels interactively by the improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using the elite strategy. This interactive procedure is repeated until accurate Pareto optimal solutions of the original problem are found. Finally, numerical examples are given to illustrate the feasibility of the proposed algorithm.

  11. AN APPLICATION OF MULTICRITERIA OPTIMIZATION TO THE TWO-CARRIER TWO-SPEED PLANETARY GEAR TRAINS

    Directory of Open Access Journals (Sweden)

    Jelena Stefanović-Marinović

    2017-04-01

    Full Text Available The objective of this study is the application of multi-criteria optimization to the two-carrier two-speed planetary gear trains. In order to determine mathematical model of multi-criteria optimization, variables, objective functions and conditions should be determined. The subject of the paper is two-carrier two-speed planetary gears with brakes on single shafts. Apart from the determination of the set of the Pareto optimal solutions, the weighted coefficient method for choosing an optimal solution from this set is also included in the mathematical model.
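The weighted coefficient method mentioned above picks a single solution from the Pareto set by scalarization. A minimal sketch (the min–max normalization is a common choice for making objectives comparable, not necessarily the authors' exact scheme; all objectives are assumed minimized):

```python
def weighted_choice(pareto_set, weights):
    """Weighted coefficient method: normalize each objective to [0, 1]
    over the Pareto set, then return the solution with the smallest
    weighted sum of normalized objective values."""
    m = len(weights)
    lo = [min(p[k] for p in pareto_set) for k in range(m)]
    hi = [max(p[k] for p in pareto_set) for k in range(m)]
    span = [(hi[k] - lo[k]) or 1.0 for k in range(m)]  # guard degenerate axes

    def score(p):
        return sum(weights[k] * (p[k] - lo[k]) / span[k] for k in range(m))

    return min(pareto_set, key=score)
```

With equal weights the balanced point of the front is chosen; putting all weight on one objective selects that objective's best solution.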

  12. Sensitivity of the optimal parameter settings for a LTE packet scheduler

    NARCIS (Netherlands)

    Fernandez-Diaz, I.; Litjens, R.; van den Berg, C.A.; Dimitrova, D.C.; Spaey, K.

    Advanced packet scheduling schemes in 3G/3G+ mobile networks provide one or more parameters to optimise the trade-off between QoS and resource efficiency. In this paper we study the sensitivity of the optimal parameter setting for packet scheduling in LTE radio networks with respect to various

  13. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO). First, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters into scalar values, which indicate the fitness of all parameters. Third, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. The Particle Swarm Optimization (PSO) algorithm is then adopted to obtain the optimal parameters by minimizing the fitness of the new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.

  14. Ventilation area measured with eit in order to optimize peep settings in mechanically ventilated patients

    NARCIS (Netherlands)

    Blankman, P; Groot Jebbink, E; Preis, C; Bikker, I.; Gommers, D.

    2012-01-01

    INTRODUCTION. Electrical Impedance Tomography (EIT) is a non-invasive imaging technique, which can be used to visualize ventilation. Ventilation will be measured by impedance changes due to ventilation. OBJECTIVES. The aim of this study was to optimize PEEP settings based on the ventilation area of

  15. Internal combustion engine report: Spark ignited ICE GenSet optimization and novel concept development

    Energy Technology Data Exchange (ETDEWEB)

    Keller, J.; Blarigan, P. Van [Sandia National Labs., Livermore, CA (United States)

    1998-08-01

    In this manuscript the authors report on two projects, each of which has the goal of producing cost-effective hydrogen utilization technologies. These projects are: (1) the development of an electrical generation system using a conventional four-stroke spark-ignited internal combustion engine and generator combination (SI-GenSet) optimized for maximum efficiency and minimum emissions, and (2) the development of a novel internal combustion engine concept. The SI-GenSet will be optimized to run on either hydrogen or hydrogen blends. The novel concept seeks to develop an engine that optimizes the Otto cycle in a free-piston configuration while minimizing all emissions. To this end the authors are developing a rapid-combustion homogeneous charge compression ignition (HCCI) engine using a linear alternator for both power take-off and engine control. Targeted applications include stationary electrical power generation, stationary shaft power generation, hybrid vehicles, and nearly any other application now being accomplished with internal combustion engines.

  16. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  17. A Pareto upper tail for capital income distribution

    Science.gov (United States)

    Oancea, Bogdan; Pirjol, Dan; Andrei, Tudorel

    2018-02-01

    We present a study of the capital income distribution and of its contribution to the total income (the capital income share) using individual tax income data in Romania for 2013 and 2014. Using a parametric representation we show that the capital income is Pareto distributed in the upper tail, with a Pareto coefficient α ∼ 1.44, which is much smaller than the corresponding coefficient for wage and non-wage income (excluding capital income) of α ∼ 2.53. Including the capital income contribution has the effect of increasing the overall inequality measures.
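Tail exponents like the α above are typically estimated with the Hill (maximum-likelihood) estimator for a Pareto tail; a sketch assuming the lower cutoff xmin is known (in practice choosing xmin is itself a modelling decision):

```python
import math

def pareto_tail_alpha(values, xmin):
    """Hill / maximum-likelihood estimate of the Pareto tail exponent:
    alpha = n / sum(ln(x / xmin)) over the n observations with x >= xmin."""
    tail = [x for x in values if x >= xmin]
    return len(tail) / sum(math.log(x / xmin) for x in tail)
```

Applied to synthetic Pareto-distributed samples, the estimate recovers the true exponent to within roughly alpha/sqrt(n).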

  18. Designing Pareto-superior demand-response rate options

    International Nuclear Information System (INIS)

    Horowitz, I.; Woo, C.K.

    2006-01-01

    We explore three voluntary service options-real-time pricing, time-of-use pricing, and curtailable/interruptible service-that a local distribution company might offer its customers in order to encourage them to alter their electricity usage in response to changes in the electricity-spot-market price. These options are simple and practical, and make minimal information demands. We show that each of the options is Pareto-superior ex ante, in that it benefits both the participants and the company offering it, while not affecting the non-participants. The options are shown to be Pareto-superior ex post as well, except under certain exceptional circumstances. (author)

  19. Pareto-Zipf law in growing systems with multiplicative interactions

    Science.gov (United States)

    Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi

    2018-06-01

    Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.

  20. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swarm optimization.

    Science.gov (United States)

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safety, stability and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e. the determination of PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm, named adaptive grid particle swarm optimization (AGPSO), is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, which differs from traditional single-objective optimization methods, is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of the best compromise solution for parameter tuning of the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, i.e. Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under no-load or load conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
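Fuzzy-based membership selection of a best compromise solution, as mentioned above, is commonly implemented with linear membership functions; a hedged sketch (all objectives assumed minimized; this is a generic variant, not necessarily the paper's exact formulation):

```python
def best_compromise(pareto_set):
    """Fuzzy membership selection: map each objective value to a linear
    membership degree in [0, 1] (1 at the best value over the set, 0 at
    the worst), and return the solution with the largest total
    membership.  Normalizing the totals over the set would not change
    which solution attains the maximum."""
    m = len(pareto_set[0])
    lo = [min(p[k] for p in pareto_set) for k in range(m)]
    hi = [max(p[k] for p in pareto_set) for k in range(m)]
    span = [(hi[k] - lo[k]) or 1.0 for k in range(m)]

    def membership(p):
        return sum((hi[k] - p[k]) / span[k] for k in range(m))

    return max(pareto_set, key=membership)
```

On a front trading settling time against overshoot, this picks the solution whose objectives are jointly closest to their individual best values.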

  1. Synthetic optimization of air turbine for dental handpieces.

    Science.gov (United States)

    Shi, Z Y; Dong, T

    2014-01-01

    A synthetic optimization of the Pelton air turbine in dental handpieces, simultaneously considering power output, compressed air consumption and rotation speed, is implemented by employing a standard design procedure and variable limits drawn from practical dentistry. The Pareto optimal solution sets acquired using the Normalized Normal Constraint method are mainly composed of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space, and the rotation speed is a constant value within the range recommended in the literature; the blade tip clearance is insensitive to, while the nozzle radius increases with, the power output and the mass flow rate of compressed air, and the remaining geometric dimensions show the opposite trend to the nozzle radius within their respective "pieces".

  2. The equivalence of multi-criteria methods for radiotherapy plan optimization

    International Nuclear Information System (INIS)

    Breedveld, Sebastiaan; Storchi, Pascal R M; Heijmen, Ben J M

    2009-01-01

    Several methods can be used to achieve multi-criteria optimization of radiation therapy treatment planning, all striving for Pareto optimality. Pareto optimality of the solution is desired because it guarantees that no criterion can be improved without deteriorating another. The most widely used methods are the weighted-sum method, in which the different treatment objectives are weighted, and constrained optimization methods, in which treatment goals are set and the algorithm has to find the best plan fulfilling these goals. The constrained method used in this paper, the 2pεc (2-phase ε-constraint) method, is based on the ε-constraint method, which generates Pareto-optimal solutions. The two approaches are uniquely related to each other. In this paper, we show that it is possible to switch from the constrained method to the weighted-sum method by using the Lagrange multipliers from the constrained optimization problem, and vice versa by setting the appropriate constraints. In general, the theory presented here is useful in cases where a new situation differs slightly from the original one, e.g. in online treatment planning, with deformations of the volumes of interest, or in automated treatment planning, where changes to the automated plan have to be made. An example of the latter is given in which the planner is not satisfied with the result from the constrained method and wishes to decrease the dose in a structure. By using the Lagrange multipliers, a weighted-sum optimization problem is constructed which generates a Pareto-optimal solution in the neighbourhood of the original plan but fulfils the new treatment objectives.
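The Lagrange-multiplier switch described here can be illustrated on a one-dimensional toy problem; the objectives below are invented and have nothing to do with dose calculation:

```python
# Toy 1-D illustration of the switch described above: solving the
# epsilon-constraint problem  min f1(x)  s.t.  f2(x) <= eps  yields a
# Lagrange multiplier lam; the weighted-sum problem  min f1 + lam*f2
# then recovers the same Pareto-optimal point.

def f1(x):  return x * x             # first (minimized) objective
def f2(x):  return (x - 2.0) ** 2    # second (constrained) objective
def df1(x): return 2.0 * x
def df2(x): return 2.0 * (x - 2.0)

# Epsilon-constraint solution with eps = 1: the constraint is active,
# so f2(x) = 1 and the minimizer of f1 on that set is x = 1.
x_eps = 1.0
lam = -df1(x_eps) / df2(x_eps)       # KKT stationarity: df1 + lam*df2 = 0

# Weighted-sum problem min f1 + lam*f2, solved on a fine grid.
xs = [i / 10000.0 for i in range(-20000, 40000)]
x_ws = min(xs, key=lambda x: f1(x) + lam * f2(x))
print(lam, x_ws)                     # the weighted-sum optimum matches x_eps
```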

  3. Trace element analysis in an optimized set-up for total reflection PIXE (TPIXE)

    International Nuclear Information System (INIS)

    Van Kan, J.A.; Vis, R.D.

    1996-01-01

    A newly constructed chamber for measurements with MeV proton beams at small incidence angles (0 to 35 mrad) is used to analyse trace elements on flat surfaces such as Si wafers, quartz substrates and perspex. The set-up is constructed in such a way that the X-ray detector can reach very large solid angles, larger than 1 sr. Using these large solid angles in combination with the reduced bremsstrahlung background, lower limits of detection (LODs) can be obtained with TPIXE than with PIXE in the conventional geometry. Standard solutions are used to determine the LODs obtainable with TPIXE in the optimized set-up. These solutions contain traces of As and Sr with concentrations down to 20 ppb in an insulin solution. The limits of detection found are compared with earlier ones obtained with TPIXE in a non-optimized set-up and with TXRF results. (author)

  4. Pareto Distribution of Firm Size and Knowledge Spillover Process as a Network

    OpenAIRE

    Tomohiko Konno

    2013-01-01

    The firm size distribution is considered to follow a Pareto distribution. In the present paper, we show that the Pareto distribution of firm size results from the spillover network model introduced in Konno (2010).

  5. A practical approach for solving multi-objective reliability redundancy allocation problems using extended bare-bones particle swarm optimization

    International Nuclear Information System (INIS)

    Zhang, Enze; Wu, Yifei; Chen, Qingwei

    2014-01-01

    This paper proposes a practical approach combining bare-bones particle swarm optimization and sensitivity-based clustering for solving multi-objective reliability redundancy allocation problems (RAPs). A two-stage process is performed to identify promising solutions. Specifically, a new bare-bones multi-objective particle swarm optimization algorithm (BBMOPSO) is developed and applied in the first stage to identify the Pareto-optimal set. This algorithm differs from other multi-objective particle swarm optimization algorithms mainly in its parameter-free particle updating strategy, which is especially suitable for handling the complexity and nonlinearity of RAPs. Moreover, by utilizing an adaptive-grid approach to update the global particle leaders, a mutation operator to improve the exploration ability and an effective constraint handling strategy, the integrated BBMOPSO algorithm can generate an excellent approximation of the true Pareto-optimal front for RAPs. This is followed by a data clustering technique based on difference sensitivity in the second stage to prune the obtained Pareto-optimal set and obtain a small, workably sized set of promising solutions for system implementation. Two illustrative examples are presented to show the feasibility and effectiveness of the proposed approach.
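The first-stage output of algorithms like the one in this record is a non-dominated set. A minimal sketch of extracting the non-dominated subset from a list of objective vectors (minimization assumed; the candidate designs are invented):

```python
def pareto_filter(points):
    """Return the non-dominated subset of a list of objective vectors,
    with every component minimized."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Invented (cost, failure probability) pairs for candidate redundancy designs.
designs = [(4.0, 0.10), (5.0, 0.05), (6.0, 0.05), (3.0, 0.20)]
print(pareto_filter(designs))
```

Here (6.0, 0.05) is dropped because (5.0, 0.05) is at least as good in both objectives and strictly better in cost.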

  6. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2 MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability, in the form of ramp rates, was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while decreasing annual energy generation by only 3% relative to the maximum-generation allocation. To quantify the allotment mix selected, a metric was developed, called the ramp ratio, which compares the ramping magnitude when all capacity is allotted to a single location to the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single-site allotment and how much a single site is being under-utilized in its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
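One plausible formalization of the ramp ratio metric described above, assuming it compares the worst single-site ramp with the aggregate ramp of the distributed allotment (the preprint's exact definition may differ, and the time series are invented):

```python
# A plausible formalization of the "ramp ratio": the maximum ramp seen
# when all capacity sits at one site, divided by the maximum aggregate
# ramp of the distributed allotment.

def max_ramp(series):
    return max(abs(b - a) for a, b in zip(series, series[1:]))

def ramp_ratio(site_power, weights):
    """site_power: per-site normalized power time series (kW per kW installed).
    weights: fraction of total capacity allotted to each site."""
    n = len(site_power[0])
    aggregate = [sum(w * s[t] for w, s in zip(weights, site_power))
                 for t in range(n)]
    single_site = max(max_ramp(s) for s in site_power)  # worst single-site case
    return single_site / max_ramp(aggregate)

a = [0.9, 0.2, 0.8, 0.3]   # invented cloudy site: large ramps
b = [0.5, 0.6, 0.5, 0.6]   # invented steadier site
print(ramp_ratio([a, b], [0.5, 0.5]))
```

A ratio above 1 indicates the distributed allotment smooths ramps relative to putting all capacity at one site.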

  7. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary-condition optimization. In the numerical part, we examine the importance to the optimal designs of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.

  8. A Pareto scale-inflated outlier model and its Bayesian analysis

    OpenAIRE

    Scollnik, David P. M.

    2016-01-01

    This paper develops a Pareto scale-inflated outlier model. This model is intended for use when data from some standard Pareto distribution of interest is suspected to have been contaminated with a relatively small number of outliers from a Pareto distribution with the same shape parameter but with an inflated scale parameter. The Bayesian analysis of this Pareto scale-inflated outlier model is considered and its implementation using the Gibbs sampler is discussed. The paper contains three wor...

  9. Optimal harvesting for a predator-prey agent-based model using difference equations.

    Science.gov (United States)

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied to the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory: we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via Pareto optimization, a heuristic evolutionary technique. Results show that the equation model is a good fit for the ABM data, and Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
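Cohen's weighted κ, proposed here as the similarity measure, can be computed directly from binned model outputs. A self-contained sketch with quadratic weights (the category data are invented):

```python
def weighted_kappa(a, b, n_cat):
    """Cohen's kappa with quadratic weights for two equal-length lists of
    category labels 0..n_cat-1 (e.g. binned ABM and equation-model outputs)."""
    n = len(a)
    # Observed joint distribution and marginal distributions.
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for x, y in zip(a, b):
        obs[x][y] += 1.0 / n
    pa = [sum(1.0 for x in a if x == i) / n for i in range(n_cat)]
    pb = [sum(1.0 for y in b if y == j) / n for j in range(n_cat)]
    # Quadratic disagreement weights, 0 on the diagonal.
    w = [[((i - j) ** 2) / ((n_cat - 1) ** 2) for j in range(n_cat)]
         for i in range(n_cat)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - d_obs / d_exp

print(weighted_kappa([0, 1, 2, 1], [0, 1, 2, 2], 3))
```

Perfect agreement gives κ = 1; agreement at chance level gives κ = 0.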

  10. Security Optimization for Distributed Applications Oriented on Very Large Data Sets

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2010-01-01

    Full Text Available The paper presents the main characteristics of applications that work with very large data sets and the related security issues. The first section addresses the optimization process and how it is approached when dealing with security. The second section describes the concept of very large data set management, while in the third section the related risks are identified and classified. Finally, a security optimization schema is presented, together with a cost-efficiency analysis of its feasibility. Conclusions are drawn and future approaches are identified.

  11. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto
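The canonical PSO update the record describes (inertia plus attraction to the personal and global bests) can be sketched as follows; the coefficients are common textbook defaults, not values from the study, and the objective is a toy sphere function rather than a tomographic misfit:

```python
import random

# Minimal single-objective PSO in the canonical form sketched above.
def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + attraction to personal best + attraction to leader.
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(x[i]), fx
    return gbest, gbest_f

best, val = pso(lambda p: sum(t * t for t in p), dim=3, bounds=(-5.0, 5.0))
print(best, val)
```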

  12. Optimized positioning of autonomous surgical lamps

    Science.gov (United States)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions for surgical lamps throughout a whole surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of such robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems but a set of solutions that realizes a Pareto front. When our algorithm selects a solution from this set it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization to select the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.

  13. Multi-agent Pareto appointment exchanging in hospital patient scheduling

    NARCIS (Netherlands)

    I.B. Vermeulen (Ivan); S.M. Bohte (Sander); D.J.A. Somefun (Koye); J.A. La Poutré (Han)

    2007-01-01

    We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment

  14. Multi-agent Pareto appointment exchanging in hospital patient scheduling

    NARCIS (Netherlands)

    Vermeulen, I.B.; Bohté, S.M.; Somefun, D.J.A.; Poutré, La J.A.

    2007-01-01

    We present a dynamic and distributed approach to the hospital patient scheduling problem, in which patients can have multiple appointments that have to be scheduled to different resources. To efficiently solve this problem we develop a multi-agent Pareto-improvement appointment exchanging algorithm:

  15. Word frequencies: A comparison of Pareto type distributions

    Science.gov (United States)

    Wiegand, Martin; Nadarajah, Saralees; Si, Yuancheng

    2018-03-01

    Mehri and Jamaati (2017) [18] used Zipf's law to model word frequencies in Holy Bible translations for one hundred living languages. We compare the fit of Zipf's law to that of a number of Pareto-type distributions. The latter distributions are shown to provide the best fit, as judged by a number of comparative plots and error measures. The fit of Zipf's law appears generally poor.
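A standard way to fit a Pareto-type tail in comparisons of this kind is the closed-form maximum-likelihood estimate of the shape parameter. A sketch on synthetic Zipf-like counts (the threshold and data are illustrative, not values from the paper):

```python
import math

# Closed-form MLE of the (Type I) Pareto shape parameter for word counts
# above a threshold x_min.
def pareto_mle_alpha(counts, x_min=1.0):
    data = [c for c in counts if c >= x_min]
    return len(data) / sum(math.log(c / x_min) for c in data)

# Synthetic Zipf-like rank-frequency data: frequency proportional to 1/rank,
# whose frequency distribution has a Pareto tail with shape near 1.
counts = [int(1000 / r) for r in range(1, 200)]
alpha = pareto_mle_alpha(counts, x_min=5.0)
print(alpha)
```

For real word-frequency data one would additionally choose `x_min` by a goodness-of-fit criterion rather than fixing it by hand.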

  16. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of the bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto optimised values of the bogie suspension is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
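Sampling a design parameter around its Pareto-optimised value from a lognormal distribution with a prescribed COV, as described in this record, only requires converting the mean/COV pair into the lognormal parameters. A sketch with an invented stiffness value:

```python
import math
import random

# Draw a design parameter from a lognormal distribution whose mean equals
# the Pareto-optimised value and whose coefficient of variation is 'cov'.
def lognormal_around(mean, cov, rng):
    sigma2 = math.log(1.0 + cov * cov)      # lognormal parameters derived
    mu = math.log(mean) - 0.5 * sigma2      # from the desired mean and COV
    return rng.lognormvariate(mu, math.sqrt(sigma2))

rng = random.Random(0)
k_opt = 1.2e6                               # hypothetical optimised stiffness [N/m]
samples = [lognormal_around(k_opt, 0.1, rng) for _ in range(20000)]
mean_hat = sum(samples) / len(samples)
print(mean_hat / k_opt)                     # close to 1 by construction
```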

  17. An Evolutionary Efficiency Alternative to the Notion of Pareto Efficiency

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2012-01-01

    textabstractThe paper argues that the notion of Pareto efficiency builds on two normative assumptions: the more general consequentialist norm of any efficiency criterion, and the strong no-harm principle of the prohibition of any redistribution during the economic process that hurts at least one

  18. Meta-Modeling by Symbolic Regression and Pareto Simulated Annealing

    NARCIS (Netherlands)

    Stinstra, E.; Rennen, G.; Teeuwen, G.J.A.

    2006-01-01

    The subject of this paper is a new approach to Symbolic Regression. Other publications on Symbolic Regression use Genetic Programming. This paper describes an alternative method based on Pareto Simulated Annealing. Our method is based on linear regression for the estimation of constants. Interval

  19. Tsallis-Pareto like distributions in hadron-hadron collisions

    International Nuclear Information System (INIS)

    Barnafoeldi, G G; Uermoessy, K; Biro, T S

    2011-01-01

    Non-extensive thermodynamics is a novel approach in high-energy physics. In high-energy heavy-ion, and especially in proton-proton, collisions we are far from a canonical thermal state described by Boltzmann-Gibbs statistics. In these reactions low and intermediate transverse momentum spectra are extremely well reproduced by the Tsallis-Pareto distribution, but the physical origin of the Tsallis parameters is still an unsettled question. Here, we analyze whether the Tsallis-Pareto energy distribution overlaps with hadron spectra at high pT. We fitted data measured in proton-proton (proton-antiproton) collisions over a wide center-of-mass energy range, from 200 GeV at RHIC up to 7 TeV at the LHC. Furthermore, our test is extended to an investigation of a possible √s-dependence of the power in the Tsallis-Pareto distribution, motivated by QCD evolution equations. We found that Tsallis-Pareto distributions fit high-pT data well over the wide center-of-mass energy range. Deviations from the fits appear at pT > 20-30 GeV/c, especially in the CDF data. Introducing a pT-scaling ansatz, the fits at low and intermediate transverse momenta remain good, and the deviations tend to disappear for the highest-pT data.
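The Tsallis-Pareto form commonly fitted to such spectra is f(pT) = A(1 + pT/(nT))^(-n). A sketch recovering n and T from synthetic data by a crude grid search; the parameter values are illustrative, not fitted values from the paper:

```python
# Tsallis-Pareto spectral form and a toy grid-search fit on synthetic data.
def tsallis_pareto(pt, A, n, T):
    return A * (1.0 + pt / (n * T)) ** (-n)

# Synthetic "spectrum" generated with n = 7, T = 0.15 GeV (illustrative).
data = [(pt / 10.0, tsallis_pareto(pt / 10.0, 1.0, 7.0, 0.15))
        for pt in range(1, 100)]

# Least-squares grid search over (n, T) with A held fixed.
best = min(((n / 10.0, t / 1000.0)
            for n in range(50, 100, 2) for t in range(100, 200, 5)),
           key=lambda p: sum((tsallis_pareto(pt, 1.0, p[0], p[1]) - y) ** 2
                             for pt, y in data))
print(best)
```

A real analysis would fit A as well and weight the residuals by measurement errors; the grid search simply shows the functional form in action.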

  20. A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2015-01-01

    Full Text Available The Set Covering Problem consists in finding a subset of columns of a zero-one matrix such that they cover all the rows of the matrix at minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization, a recent swarm metaheuristic technique based on cat behavior. Domestic cats show the ability to hunt and are curious about moving objects. Based on this, the cats have two modes of behavior: seeking mode and tracing mode. We are the first to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.

  1. The Reduction of Modal Sensor Channels through a Pareto Chart Methodology

    Directory of Open Access Journals (Sweden)

    Kaci J. Lemler

    2015-01-01

    Full Text Available Presented herein is a new experimental sensor placement procedure developed to assist in placing sensors at key locations, efficiently reducing the number of channels needed for a full modal analysis. It is a fast, non-contact method that uses a laser vibrometer to gather a candidate set of sensor locations. These locations are then evaluated using a Pareto chart to obtain a reduced set of sensor locations that still captures the motion of the structure. The Pareto chart is employed to identify the points on a structure that react most strongly to an input excitation, thus reducing the number of channels while capturing the most significant data. This method enhances the correct and efficient placement of sensors, which is crucial in modal testing and previously required the development and/or use of a complicated model or set of equations. The new technique is applied in a case study on a small unmanned aerial system. The test procedure is presented and the results are discussed.
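The Pareto-chart reduction described here amounts to ranking candidate locations by response magnitude and keeping the smallest set that accounts for a chosen share of the total response. A sketch with an invented threshold and invented locations:

```python
# Pareto-chart style channel reduction: keep the highest-response
# locations until a chosen share of the total response is covered.
def pareto_reduce(responses, share=0.8):
    """responses: {location: peak response magnitude}. Returns kept locations."""
    total = sum(responses.values())
    kept, acc = [], 0.0
    for loc, r in sorted(responses.items(), key=lambda kv: kv[1], reverse=True):
        kept.append(loc)
        acc += r
        if acc >= share * total:
            break
    return kept

# Invented candidate locations and response magnitudes.
resp = {"wing_tip": 9.0, "tail": 4.0, "fuselage": 3.0, "nose": 2.0, "strut": 2.0}
print(pareto_reduce(resp))
```

The 80% threshold mirrors the usual Pareto-chart convention; the paper may use a different cut-off.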

  2. Optimal wage setting for an export oriented firm under labor taxes and labor mobility

    Directory of Open Access Journals (Sweden)

    Raúl Ponce Rodríguez

    2005-01-01

    Full Text Available In this paper a theoretical model is developed to study the incentives that a labor tax might create in the optimal wage setting of an export-oriented firm. In particular, we analyze a labor tax that tends to reduce the wage, because the firm is induced to shift the tax burden backwards onto its employees, minimizing the possible increase in payroll costs and the fall in profits. However, a lower wage might not be an optimal response to the establishment of a labor tax, because it increases labor turnover, and as a result the firm faces both an output opportunity cost and a labor turnover cost. The firm thus optimally decides to respond to the qualification and labor taxes by increasing the after-tax wage.

  3. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    Science.gov (United States)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
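A minimal harmony search loop, with the usual memory-consideration, pitch-adjustment and random-selection steps; the parameter names (HMCR, PAR) follow common HS descriptions, and the values and toy objective are illustrative rather than those of the valve-setting model:

```python
import random

# Minimal harmony search for a continuous minimization problem.
def harmony_search(f, dim, bounds, hm_size=10, iters=2000,
                   hmcr=0.9, par=0.3, bw=0.05, seed=3):
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                x = hm[rng.randrange(hm_size)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        fc = f(new)
        worst = max(range(hm_size), key=lambda i: cost[i])
        if fc < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, fc
    best = min(range(hm_size), key=lambda i: cost[i])
    return hm[best], cost[best]

sol, val = harmony_search(lambda v: sum(t * t for t in v),
                          dim=2, bounds=(-10.0, 10.0))
print(sol, val)
```

In a valve-setting application `f` would call a hydraulic solver such as EPANET and add penalty terms for violated constraints, as the abstract describes.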

  4. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  5. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

    We describe a new approach to produce integer feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible, without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...

  6. A Method of Forming the Optimal Set of Disjoint Path in Computer Networks

    Directory of Open Access Journals (Sweden)

    As'ad Mahmoud As'ad ALNASER

    2017-04-01

    Full Text Available This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is proposed. Paths are optimized by reconfiguring them with an adjacent dead-end path. Reconfigurations are realized within subgraphs that include only the vertices of the main path and an adjacent dead-end path. This reduces the search space for an optimal path and the time complexity of its formation.
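A simple greedy baseline for forming edge-disjoint paths (repeatedly take a BFS shortest path and remove its edges) illustrates the kind of path set the record discusses; this is not the reconfiguration algorithm itself, and the graph is invented:

```python
from collections import deque

# Greedy edge-disjoint path formation: repeatedly find a BFS shortest path
# from s to t and consume its edges, until no path remains.
def edge_disjoint_paths(adj, s, t):
    adj = {u: set(vs) for u, vs in adj.items()}   # local, mutable copy
    paths = []
    while True:
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:                # BFS for a shortest path
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in prev:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return paths
        path = [t]
        while prev[path[-1]] is not None:
            path.append(prev[path[-1]])
        path.reverse()
        for a, b in zip(path, path[1:]):          # consume the path's edges
            adj[a].discard(b)
        paths.append(path)

g = {"s": {"a", "b"}, "a": {"t"}, "b": {"t"}, "t": set()}
print(edge_disjoint_paths(g, "s", "t"))
```

This greedy is not guaranteed to find the maximum number of disjoint paths in general graphs (a max-flow formulation would), which is one motivation for the reconfiguration step described above.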

  7. Optimization of the primary collimator settings for fractionated IMRT stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Tobler, Matt; Leavitt, Dennis D.; Watson, Gordon

    2004-01-01

    Advances in field-shaping techniques for stereotactic radiosurgery/radiotherapy have allowed dynamic adjustment of field shape with gantry rotation (dynamic conformal arc) in an effort to minimize dose to critical structures. Recent work evaluated the potential for increased sparing of normal tissues when the primary collimator setting is optimized to only the size necessary to cover the largest shape of the dynamic micro-multileaf field. Intensity-modulated radiotherapy (IMRT) is now a treatment option for patients receiving stereotactic radiotherapy treatments. This multisegmentation of the dose delivered through multiple fixed treatment fields provides for delivery of a uniform dose to the tumor volume while sparing critical structures, particularly for patients whose tumor volumes are less suited to rotational treatment. For these segmented fields, the total number of monitor units (MUs) delivered may be much greater than the number of MUs required if dose delivery occurred through an unmodulated treatment field. As a result, the undesired dose delivered as leakage through the leaves to tissues outside the area of interest increases proportionally. This work evaluates the role of optimization of the primary collimator setting for these IMRT treatment fields and compares the results to treatment fields where the primary collimator settings have not been optimized.

  8. Application of HGSO to security based optimal placement and parameter setting of UPFC

    International Nuclear Information System (INIS)

    Tarafdar Hagh, Mehrdad; Alipour, Manijeh; Teimourzadeh, Saeed

    2014-01-01

    Highlights: • A new method for solving the security based UPFC placement and parameter setting problem is proposed. • The proposed method is a global method for all mixed-integer problems. • The proposed method has the ability of parallel search in binary and continuous spaces. • By using the proposed method, most of the problems due to line contingencies are solved. • Comparison studies are done to compare the performance of the proposed method. - Abstract: This paper presents a novel method to solve the security based optimal placement and parameter setting of unified power flow controller (UPFC) problem, based on the hybrid group search optimization (HGSO) technique. Firstly, HGSO is introduced in order to solve mixed-integer problems. Afterwards, the proposed method is applied to the security based optimal placement and parameter setting of UPFC problem. The focus of the paper is to enhance power system security by eliminating or minimizing overloaded lines and bus voltage limit violations under single line contingencies. Simulation studies are carried out on the IEEE 6-bus, IEEE 14-bus and IEEE 30-bus systems in order to verify the accuracy and robustness of the proposed method. The results indicate that by using the proposed method, the power system remains secure under single line contingencies.

  9. Monopoly, Pareto and Ramsey mark-ups

    NARCIS (Netherlands)

    Ten Raa, T.

    2009-01-01

    Monopoly prices are too high. It is a price level problem, in the sense that the relative mark-ups have Ramsey optimal proportions, at least for independent constant elasticity demands. I show that this feature of monopoly prices breaks down the moment one demand is replaced by the textbook linear

  10. Application of the multicriterion optimization techniques and hierarchy of computational models to the research of ion acceleration due to laser-plasma interaction

    Science.gov (United States)

    Inovenkov, I. N.; Echkina, E. Yu.; Nefedov, V. V.; Ponomarenko, L. S.

    2017-12-01

    In this paper we discuss how a particle-in-cell computation code can be combined with methods of multicriterion optimization (in particular the Pareto optimal solutions of the multicriterion optimization problem) and a hierarchy of computational models approach to create an efficient tool for solving a wide array of problems related to laser-plasma interaction. In the case of a computational experiment the multicriterion optimization can be applied as follows: the researcher defines the objectives of the experiment - some computable scalar values (e.g., high kinetic energy of the ions leaving the domain, the least possible number of electrons leaving the domain in a given direction, etc.). After that, the parameters of the experiment which can be varied to achieve these objectives, and the constraints on these parameters, are chosen (e.g., amplitude and wavelength of the laser radiation, dimensions of the plasma slab(s)). The Pareto optimality of the vector of parameters can be stated as follows: x0 is Pareto optimal if there exists no vector which would improve some criterion without causing a simultaneous degradation in at least one other criterion. This efficient set of parameters and constraints can be selected based on preliminary calculations in simplified models (one- or two-dimensional), either analytical or numerical. The multistage computation of the Pareto set radically reduces the number of variants which must be evaluated to achieve the given accuracy. During the final stage we further improve the results by recomputing some of the optimal variants on finer grids, with more particles and/or in the frame of a more detailed model.
As an example we have considered the ion acceleration caused by interaction of very intense and ultra-short laser pulses with plasmas and have calculated the optimal set of experiment parameters for optimizing the number and average energy of high energy ions leaving the domain in the given direction and minimizing the expulsion
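The Pareto-optimality criterion quoted in this record (no criterion improvable without degrading another) can be sketched as a simple filter over candidate parameter vectors. This is a generic illustration, not the authors' code, and the objective values below are hypothetical:

```python
# Extract the Pareto-optimal subset from candidate solutions,
# assuming all objectives are to be maximized.
def dominates(a, b):
    """True if a is at least as good as b in every objective
    and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical (ion energy, negated electron loss) objective pairs:
points = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.5, 1.5), (2.5, 0.5)]
print(pareto_set(points))  # (1.5, 1.5) and (2.5, 0.5) are dominated and dropped
```

The quadratic all-pairs scan is adequate for the small variant counts a multistage Pareto computation leaves after pruning; dedicated non-dominated sorting would be used for large populations.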

  11. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    Science.gov (United States)

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

    Group scheduling is significant for an efficient and cost effective production system. However, there exist setup times between the groups, which need to be reduced by sequencing the groups in an efficient way. The current research focuses on a sequence-dependent group scheduling problem with the aim to minimize the makespan and the total weighted tardiness simultaneously. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of the sequence-dependent group scheduling problem with learning effects has rarely been considered in the literature. Therefore, the current research considers a single machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) with some steps of a genetic algorithm is proposed for the current problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small medium, medium, large medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all the instances of the different sizes of problems.

  12. Implementing of the multi-objective particle swarm optimizer and fuzzy decision-maker in exergetic, exergoeconomic and environmental optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza

    2011-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system, namely the CGAM cogeneration system, is performed. In the optimization approach, exergetic, exergoeconomic and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency as the exergetic objective is maximized while the unit cost of the system product and the cost of the environmental impact, respectively the exergoeconomic and environmental objectives, are minimized. The economic model utilized in the exergoeconomic analysis is built based on both the simple model (used in the original research on the CGAM system) and comprehensive modeling, namely the TTR (total revenue requirement) method (used in sophisticated exergoeconomic analysis). Finally, a final optimal solution from the optimal set of the Pareto frontier is selected using a fuzzy decision-making process based on the Bellman-Zadeh approach, and the results are compared with corresponding results obtained in a traditional decision-making process. Further, results are compared with the corresponding performance of the base case CGAM system and optimal designs of previous works and discussed. -- Highlights: → A multi-objective optimization approach has been implemented in the optimization of a benchmark cogeneration system. → Objective functions based on environmental impact evaluation, thermodynamic and economic analysis are obtained and optimized. → The particle swarm optimizer is implemented and its robustness is compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results are compared with previous works in the field.

  13. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, TCPP involves additional complexity due to the maintenance constraint of train-sets: train-sets must conduct maintenance tasks after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard). There is no available algorithm that can obtain the globally optimal solution, and many factors such as the utilization mode and the maintenance mode impact the solution of the TCPP. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and then a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  14. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Full Text Available Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (named Tabu Search) assisted by rough sets. Rough sets are used for the extraction of uncertain rules that exist in the databases of parking situations. The inclusion of rough set theory provides accuracy and roughness measures, which are used to characterize the uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] under different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough sets, could provide substantial research directions for other similar hard optimization problems.

  15. Multi-objective optimization of glycopeptide antibiotic production in batch and fed batch processes

    DEFF Research Database (Denmark)

    Maiti, Soumen K.; Eliasson Lantz, Anna; Bhushan, Mani

    2011-01-01

    batch operations using process model for Amycolatopsis balhimycina, a glycopeptide antibiotic producer. This resulted in a set of several Pareto optimal solutions with the two objectives ranging from (0.75 g l−1, 3.97 g $−1) to (0.44 g l−1, 5.19 g $−1) for batch and from (1.5 g l−1, 5.46 g $−1) to (1.1 g l−1, 6.34 g...

  16. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  17. Quality of Gaussian basis sets: direct optimization of orbital exponents by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.

    1975-01-01

    Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets, and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.

  18. A Pareto archive floating search procedure for solving multi-objective flexible job shop scheduling problem

    Directory of Open Access Journals (Sweden)

    J. S. Sadaghiani

    2014-04-01

    Full Text Available The flexible job shop scheduling problem is a key factor for the efficient use of production systems. This paper attempts to simultaneously optimize three objectives: minimization of the makespan, the total workload and the maximum workload of jobs. Since the multi-objective flexible job shop scheduling problem is strongly NP-hard, an integrated heuristic approach has been used to solve it. The proposed approach is based on a floating search procedure that uses several heuristic algorithms. The floating search procedure utilizes local heuristic algorithms and decomposes the considered problem into two subproblems: assignment and sequencing. First, the search is done over the assignment space to achieve an acceptable solution, and then the search continues over the sequencing space based on a heuristic algorithm. This paper uses a multi-objective approach for producing Pareto solutions. The proposed approach was thus adapted to the NSGA II algorithm and evaluated with Pareto archives. The elements and parameters of the proposed algorithms were adjusted through preliminary experiments. Finally, computational results were used to analyze the efficiency of the proposed algorithm, and these results showed that the proposed algorithm is capable of producing efficient solutions.

  19. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    Science.gov (United States)

    Seldner, K.

    1977-01-01

    An algorithm was developed to optimally control the traffic signals at each intersection using a discrete time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.

  20. Using Coevolution Genetic Algorithm with Pareto Principles to Solve Project Scheduling Problem under Duration and Cost Constraints

    Directory of Open Access Journals (Sweden)

    Alexandr Victorovich Budylskiy

    2014-06-01

    Full Text Available This article considers a multicriteria optimization approach using a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work contains a list of options for solving this problem, and the multicriteria optimization approach is justified. The study describes the Pareto principles used in the modified genetic algorithm. We present the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, its ranking strategies, and elitism approaches. The article includes an example.

  1. Geminal embedding scheme for optimal atomic basis set construction in correlated calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sorella, S., E-mail: sorella@sissa.it [International School for Advanced Studies (SISSA), Via Beirut 2-4, 34014 Trieste, Italy and INFM Democritos National Simulation Center, Trieste (Italy); Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr [Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France); Mazzola, G., E-mail: gmazzola@phys.ethz.ch [Theoretische Physik, ETH Zurich, 8093 Zurich (Switzerland); Casula, M., E-mail: michele.casula@impmc.upmc.fr [CNRS and Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, Université Pierre et Marie Curie, Case 115, 4 Place Jussieu, 75252 Paris Cedex 05 (France)

    2015-12-28

    We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, those containing a large number of electrons and atoms. We present applications on the water molecule, the volume collapse transition in cerium, and the high-pressure liquid hydrogen.

  2. Structure Optimal Design of Electromagnetic Levitation Load Reduction Device for Hydroturbine Generator Set

    Directory of Open Access Journals (Sweden)

    Qingyan Wang

    2015-01-01

    Full Text Available The thrust bearing is the part with the highest failure rate in a hydroturbine generator set, primarily due to heavy axial load. Such heavy load often causes oil film destruction, bearing friction, and even burning. It is therefore necessary to study the load and methods for its reduction. The dynamic thrust is an important factor influencing the axial load and the reduction design of the electromagnetic device. Therefore, in this paper, combined with the structural features of a vertical turbine, the hydraulic thrust is analyzed accurately. Then, taking the turbine model HL-220-LT-550 as an instance, the electromagnetic levitation load reduction device is designed and its mathematical model is built, whose purpose is to minimize excitation loss and total mass under the constraints of installation space, connection layout, and heat dissipation. Particle swarm optimization (PSO) is employed to search for the optimum solution; finally, the result is verified by the finite element method (FEM), which demonstrates that the optimized structure is more effective.

  3. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ.

  4. Income inequality in Romania: The exponential-Pareto distribution

    Science.gov (United States)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of gross personal income and income inequality in Romania, using individual tax income data and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle income region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low income region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
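A Pareto tail coefficient such as the α = 2.53 reported in this record is typically obtained by maximum likelihood over the observations above a threshold x_min, via α̂ = n / Σ ln(x_i / x_min). A minimal sketch on synthetic data (not the Romanian tax records), drawing a Pareto sample by inverse-CDF transform and recovering the known exponent:

```python
import math
import random

def pareto_alpha_mle(sample, x_min):
    """Maximum-likelihood Pareto tail exponent for the observations >= x_min."""
    tail = [x for x in sample if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic check: if U ~ Uniform(0, 1), then x_min * U**(-1/alpha) is Pareto(alpha).
random.seed(0)
alpha_true, x_min = 2.53, 1.0
sample = [x_min * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(100_000)]
print(round(pareto_alpha_mle(sample, x_min), 2))  # close to 2.53
```

With real income data the threshold x_min separating the exponential bulk from the Pareto tail must itself be chosen, e.g. by goodness-of-fit over candidate thresholds.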

  5. Pareto-depth for multiple-query image retrieval.

    Science.gov (United States)

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.

  6. [Origination of Pareto distribution in complex dynamic systems].

    Science.gov (United States)

    Chernavskiĭ, D S; Nikitin, A P; Chernavskaia, O D

    2008-01-01

    The Pareto distribution, whose probability density function can be approximated at sufficiently large x as ρ(x) ∝ x^(-α), where α ≥ 2, is of crucial importance from both the theoretical and practical points of view. The main reason is its qualitative distinction from the normal (Gaussian) distribution: namely, the probability of high deviations is significantly higher. The notion of the universal applicability of the Gauss law remains widespread despite the lack of objective confirmation of this notion in a variety of application areas. The origin of the Pareto distribution in dynamic systems located in a Gaussian noise field is considered. A simple one-dimensional model is discussed in which the system response, in a rather wide interval of the variable, can be quite precisely approximated by this distribution.

  7. On the optimal identification of tag sets in time-constrained RFID configurations.

    Science.gov (United States)

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.

  8. Using the Pareto Distribution to Improve Estimates of Topcoded Earnings

    OpenAIRE

    Philip Armour; Richard V. Burkhauser; Jeff Larrimore

    2014-01-01

    Inconsistent censoring in the public-use March Current Population Survey (CPS) limits its usefulness in measuring labor earnings trends. Using Pareto estimation methods with less-censored internal CPS data, we create an enhanced cell-mean series to capture top earnings in the public-use CPS. We find that previous approaches for imputing topcoded earnings systematically understate top earnings. Annual earnings inequality trends since 1963 using our series closely approximate those found by Kop...

  9. Accelerated life testing design using geometric process for pareto distribution

    OpenAIRE

    Mustafa Kamal; Shazia Zarrin; Arif Ul Islam

    2013-01-01

    In this paper the geometric process is used for the analysis of accelerated life testing under constant stress for Pareto Distribution. Assuming that the lifetimes under increasing stress levels form a geometric process, estimates of the parameters are obtained by using the maximum likelihood method for complete data. In addition, asymptotic interval estimates of the parameters of the distribution using Fisher information matrix are also obtained. The statistical properties of the parameters ...

  10. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  11. Optimal power system generation scheduling by multi-objective genetic algorithms with preferences

    International Nuclear Information System (INIS)

    Zio, E.; Baraldi, P.; Pedroni, N.

    2009-01-01

    Power system generation scheduling is an important issue from both the economical and environmental safety viewpoints. The scheduling involves decisions with regard to the units' start-up and shut-down times and to the assignment of the load demands to the committed generating units for minimizing the system operation costs and the emission of atmospheric pollutants. Like many other real-world engineering problems, power system generation scheduling involves multiple, conflicting optimization criteria for which there exists no single best solution with respect to all criteria considered. Multi-objective optimization algorithms, based on the principle of Pareto optimality, can then be designed to search for the set of nondominated scheduling solutions from which the decision-maker (DM) must a posteriori choose the preferred alternative. On the other hand, information is often available a priori regarding the preference values of the DM with respect to the objectives. When possible, it is important to exploit this information during the search so as to focus it on the region of preference of the Pareto-optimal set. In this paper, ways are explored to use this preference information for driving a multi-objective genetic algorithm towards the preferential region of the Pareto-optimal front. Two methods are considered: the first one extends the concept of Pareto dominance by biasing the chromosome replacement step of the algorithm by means of numerical weights that express the DM's preferences; the second one drives the search algorithm by changing the shape of the dominance region according to linear trade-off functions specified by the DM. The effectiveness of the proposed approaches is first compared on a case study from the literature. Then, a nonlinear, constrained, two-objective power generation scheduling problem is effectively tackled.
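The first preference method described in this record, biasing the chromosome replacement step with numerical weights, can be illustrated by a replacement rule that keeps the Pareto-dominant solution when one exists and otherwise falls back on a DM-weighted score. This is a generic sketch, not the authors' implementation, and the weights below are assumed for illustration:

```python
# Preference-biased replacement between two solutions (maximization assumed).
def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def preferred(a, b, weights):
    """Solution kept during replacement: Pareto dominance decides first,
    decision-maker weights break the tie between mutually non-dominated pairs."""
    if dominates(a, b):
        return a
    if dominates(b, a):
        return b
    score = lambda f: sum(w * x for w, x in zip(weights, f))
    return a if score(a) >= score(b) else b

# Assumed DM weights favoring the first objective (e.g., operation cost) at 0.7:
w = (0.7, 0.3)
print(preferred((0.9, 0.2), (0.4, 0.8), w))  # weighted scores 0.69 vs 0.52
```

Applied inside the replacement step of a genetic algorithm, this rule steers the population toward the DM's preferred region of the Pareto front without discarding dominance information.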

  12. The optimal design of UAV wing structure

    Science.gov (United States)

    Długosz, Adam; Klimek, Wiktor

    2018-01-01

    The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of the in-house implementation of the evolutionary multi-objective algorithm in the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model, consisting of different composite materials, is created. The adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.

  13. Global shape optimization of airfoil using multi-objective genetic algorithm

    International Nuclear Information System (INIS)

    Lee, Ju Hee; Lee, Sang Hwan; Park, Kyoung Woo

    2005-01-01

    The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without various weighting factors by using the multi-objective genetic algorithm. An NACA0012 airfoil is considered as a baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. Two curves, from the leading edge to the maximum thickness, are composed of five control points and the rest, from the maximum thickness to the trailing edge, are composed of four control points. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen evolutions, twenty Pareto individuals can be achieved. One Pareto solution, which is the best in reduction of the drag force, improves the drag by 13% and the lift-drag ratio by 2%. Another Pareto solution, however, which is focused on increasing the lift force, can improve the lift force by 61%, while sustaining its drag force, compared to those of the baseline model.

  14. Global shape optimization of airfoil using multi-objective genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju Hee; Lee, Sang Hwan [Hanyang Univ., Seoul (Korea, Republic of); Park, Kyoung Woo [Hoseo Univ., Asan (Korea, Republic of)

    2005-10-01

    The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without various weighting factors by using the multi-objective genetic algorithm. An NACA0012 airfoil is considered as a baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. Two curves, from the leading edge to the maximum thickness, are composed of five control points and the rest, from the maximum thickness to the trailing edge, are composed of four control points. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen evolutions, twenty Pareto individuals can be achieved. One Pareto solution, which is the best in reduction of the drag force, improves the drag by 13% and the lift-drag ratio by 2%. Another Pareto solution, however, which is focused on increasing the lift force, can improve the lift force by 61%, while sustaining its drag force, compared to those of the baseline model.
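The Bezier parameterization used in the two airfoil records above (segments of four and five control points) can be evaluated with de Casteljau's algorithm, repeatedly interpolating between neighboring control points. The control points below are illustrative placeholders, not the actual NACA0012 profile data:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] from its control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        # Replace the point list by pairwise linear interpolations.
        pts = [tuple((1 - t) * p + t * q for p, q in zip(a, b))
               for a, b in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical five-point segment (leading edge to maximum thickness):
ctrl = [(0.0, 0.0), (0.0, 0.05), (0.1, 0.08), (0.25, 0.09), (0.4, 0.06)]
print(de_casteljau(ctrl, 0.0))  # (0.0, 0.0) — the curve interpolates the endpoints
print(de_casteljau(ctrl, 1.0))  # (0.4, 0.06)
```

In the optimization, the genetic algorithm's design variables are the coordinates of these control points, so each chromosome maps to a smooth candidate profile through exactly this evaluation.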

  15. Training set optimization and classifier performance in a top-down diabetic retinopathy screening system

    Science.gov (United States)

    Wigdahl, J.; Agurto, C.; Murray, V.; Barriga, S.; Soliz, P.

    2013-03-01

    Diabetic retinopathy (DR) affects more than 4.4 million Americans age 40 and over. Automatic screening for DR has shown to be an efficient and cost-effective way to lower the burden on the healthcare system, by triaging diabetic patients and ensuring timely care for those presenting with DR. Several supervised algorithms have been developed to detect pathologies related to DR, but little work has been done in determining the size of the training set that optimizes an algorithm's performance. In this paper we analyze the effect of the training sample size on the performance of a top-down DR screening algorithm for different types of statistical classifiers. Results are based on partial least squares (PLS), support vector machines (SVM), k-nearest neighbor (kNN), and Naïve Bayes classifiers. Our dataset consisted of digital retinal images collected from a total of 745 cases (595 controls, 150 with DR). We varied the number of normal controls in the training set, while keeping the number of DR samples constant, and repeated the procedure 10 times using randomized training sets to avoid bias. Results show increasing performance in terms of area under the ROC curve (AUC) when the number of DR subjects in the training set increased, with similar trends for each of the classifiers. Of these, PLS and k-NN had the highest average AUC. Lower standard deviation and a flattening of the AUC curve gives evidence that there is a limit to the learning ability of the classifiers and an optimal number of cases to train on.

  16. Optimal structural inference of signaling pathways from unordered and overlapping gene sets.

    Science.gov (United States)

    Acharya, Lipi R; Judeh, Thair; Wang, Guangdi; Zhu, Dongxiao

    2012-02-15

    A plethora of bioinformatics analyses have led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider the signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and to broaden the scope from pairwise interactions to the more general cascading events in the inference of signaling pathway structures. We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings within the cascades. From a compendium of gene sets related to a pathway, SA searches for the signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study, conducted on 83 KEGG pathways, SA demonstrated significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a 'search and score' network learning strategy and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and
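The search procedure in this record is, at its core, a standard simulated annealing loop over candidate structures. A minimal sketch on a toy cascade-ordering instance is given below; the gene names, observed pair set, and scoring rule are hypothetical illustrations, not the paper's actual model.

```python
import math
import random

def simulated_annealing(state, energy, neighbor, t0=1.0, cooling=0.995,
                        steps=20000, seed=0):
    """Generic simulated annealing: always accept improvements, and accept
    worse candidates with probability exp(-dE / T) as T cools."""
    rng = random.Random(seed)
    current, e_cur = state, energy(state)
    best, e_best = current, e_cur
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        e_cand = energy(cand)
        if e_cand <= e_cur or rng.random() < math.exp((e_cur - e_cand) / t):
            current, e_cur = cand, e_cand
            if e_cur < e_best:
                best, e_best = current, e_cur
        t *= cooling
    return best, e_best

# Toy instance: recover a cascade ordering whose adjacent pairs match an
# observed pair set (gene sets alone carry no ordering information).
observed_pairs = {("egfr", "ras"), ("ras", "raf"), ("raf", "mek"), ("mek", "erk")}
genes = ["mek", "raf", "erk", "egfr", "ras"]

def cascade_energy(order):
    # Count adjacent pairs in the ordering that were never observed.
    return sum(1 for a, b in zip(order, order[1:]) if (a, b) not in observed_pairs)

def swap_neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

best_order, best_energy = simulated_annealing(genes, cascade_energy, swap_neighbor)
```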

  17. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    Science.gov (United States)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to the Pareto front. E-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The addressed objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.

  18. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  19. COMPARING INTRA- AND INTERENVIRONMENTAL PARAMETERS OF OPTIMAL SETTING IN BREEDING EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Domagoj Šimić

    2004-06-01

    Full Text Available A series of biometrical and quantitative-genetic parameters, not well known in Croatia, are being used for the most important agronomic traits to determine optimal genotype setting within a location as well as among locations. The objectives of the study are to estimate and to compare (1) parameters of intra-environment setting: effective mean square error (EMSE) in lattice design, relative efficiency (RE) of lattice design (LD) compared to randomized complete block design (RCBD), and repeatability (Rep) of a plot value, and (2) operative heritability (h2) as a parameter of inter-environment setting, in an experiment with 72 maize hybrids. Trials were set up in four environments (two locations in two years) evaluating grain yield and stalk rot. EMSE values corresponded across environments for both traits, while the estimations for RE of LD varied inconsistently over environments and traits. Rep estimates differed more across environments than across traits. Rep values did not correspond with h2 estimates: Rep estimates for stalk rot were higher than those for grain yield, while h2 for grain yield was higher than for stalk rot in all instances. Our results suggest that, due to the importance of genotype × environment interaction, there is a need for multienvironment trials for both traits. If the experimental framework must be reduced for economic or other reasons, decreasing the number of locations per year rather than the number of years of investigation is recommended.

  20. Optimal allocation and adaptive VAR control of PV-DG in distribution networks

    International Nuclear Information System (INIS)

    Fu, Xueqian; Chen, Haoyong; Cai, Runqing; Yang, Ping

    2015-01-01

    Highlights: • A methodology for optimal PV-DG allocation based on a combination of algorithms. • Dealing with the randomness of solar power energy using CCSP. • Presenting a VAR control strategy to balance the technical demands. • Finding the Pareto solutions using MOPSO and SVM. • Evaluating the Pareto solutions using WRSR. - Abstract: The development of distributed generation (DG) has brought new challenges to power networks. One that has attracted extensive attention is the voltage regulation problem in distribution networks caused by DG. Optimal allocation of DG in distribution networks is another well-known problem under wide investigation. This paper proposes a new method for the optimal allocation of photovoltaic distributed generation (PV-DG) considering the non-dispatchable characteristics of PV units. An adaptive reactive power control model is introduced into PV-DG allocation to balance the trade-off between the improvement of voltage quality and the minimization of power loss in a distribution network integrated with PV-DG units. The optimal allocation problem is formulated as a chance-constrained stochastic programming (CCSP) model to deal with the randomness of solar power energy. A novel algorithm combining multi-objective particle swarm optimization (MOPSO) with support vector machines (SVM) is proposed to find the Pareto front consisting of a set of possible solutions. The Pareto solutions are further evaluated using the weighted rank sum ratio (WRSR) method to help the decision-maker obtain the desired solution. Simulation results on a 33-bus radial distribution system show that the optimal allocation method can fully take into account the time-variant characteristics and probability distribution of PV-DG, and obtain the best allocation scheme.

  1. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.
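The first stage of the approach in this record reduces a candidate set to its nondominated members before any clustering or DEA pruning. A minimal Pareto-filter sketch (maximization convention; the reliability/cost vectors are hypothetical) is:

```python
def dominates(a, b):
    """a dominates b (maximization) if a is at least as good in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(points):
    """First-stage filter: keep only the nondominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (reliability, -cost) vectors for candidate system designs;
# cost is negated so that both objectives are maximized.
designs = [(0.90, -120.0), (0.95, -150.0), (0.92, -150.0),
           (0.99, -300.0), (0.90, -100.0)]
front = pareto_set(designs)
```

The second stage of the paper (SOM clustering plus DEA scoring) would then operate on `front` rather than on the full candidate set.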

  2. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
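A wrapper search of the kind this record describes can be sketched as a small population of bitmask agents that mutate locally and copy bits from the global best. The fitness function below is a hypothetical stand-in for a classifier's validation accuracy, and the whole sketch is an illustration of the wrapper idea, not the paper's actual Swarm Search implementation.

```python
import random

def swarm_search(n_features, fitness, n_agents=8, iters=60, seed=1):
    """Toy swarm-style wrapper search over feature bitmasks: each agent
    flips a random bit (exploration) and occasionally copies a bit from
    the global best mask (a social, PSO-like pull toward good subsets)."""
    rng = random.Random(seed)
    agents = [[rng.random() < 0.5 for _ in range(n_features)]
              for _ in range(n_agents)]
    best = list(max(agents, key=fitness))  # copy: agents keep mutating
    for _ in range(iters):
        for a in agents:
            i = rng.randrange(n_features)
            a[i] = not a[i]                      # local exploration
            if rng.random() < 0.3:               # social step toward best
                j = rng.randrange(n_features)
                a[j] = best[j]
            if fitness(a) > fitness(best):
                best = list(a)
    return best

# Hypothetical fitness: features 0-2 are informative, the rest add noise;
# in practice this would wrap any classifier's validation accuracy.
relevance = [0.9, 0.8, 0.7, 0.0, 0.0, 0.0]

def fitness(mask):
    gain = sum(r for r, m in zip(relevance, mask) if m)
    penalty = 0.05 * sum(mask)                   # prefer compact subsets
    return gain - penalty

selected = swarm_search(6, fitness)
```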

  3. Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Mattas Konstantinos

    2016-12-01

    Full Text Available The determination of the optimal circular path has become widely known for its difficulty in producing a solution and for its numerous applications in the organization and management of passenger and freight transport. It is a mathematical combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible and speedy heuristic algorithm for improving the ability of the classical deterministic algorithms to identify an optimal symmetrical or non-symmetrical circular route. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain this solution. It is shown that the use of fuzzy sets reduced by up to 35% the deviation of the solution identified by the classical deterministic algorithms from the best known solution.
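For reference, the classical deterministic baseline that such heuristics improve upon is the nearest-neighbor construction of a circular route. A minimal sketch (hypothetical coordinates) is below; a fuzzy variant would rank candidate next stops by membership grades rather than by crisp distance alone.

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Classical deterministic heuristic for the circular-route problem:
    repeatedly visit the closest unvisited node, then return to the start."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = coords[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, coords[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]  # close the circle

def tour_length(coords, tour):
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(tour, tour[1:]))

# Hypothetical depot-and-stops instance on a unit square.
stops = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tour = nearest_neighbor_tour(stops)
```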

  4. Multi objective optimization of horizontal axis tidal current turbines, using Meta heuristics algorithms

    International Nuclear Information System (INIS)

    Tahani, Mojtaba; Babayan, Narek; Astaraei, Fatemeh Razi; Moghadam, Ali

    2015-01-01

    Highlights: • The performance of four different metaheuristic optimization algorithms was studied. • Power coefficient and produced torque on the stationary blade were selected as objective functions. • Chord and twist distributions were selected as decision variables. • All optimization algorithms were combined with blade element momentum theory. • The best Pareto front was obtained by the multi-objective flower pollination algorithm for HATCTs. - Abstract: The performance of horizontal axis tidal current turbines (HATCT) strongly depends on their geometry. According to this fact, the optimum performance will be achieved by an optimized geometry. In this research study, the multi-objective optimization of the HATCT is carried out using four different multi-objective optimization algorithms, and their performance is evaluated in combination with blade element momentum theory (BEM). The second version of the non-dominated sorting genetic algorithm (NSGA-II), the multi-objective particle swarm optimization algorithm (MOPSO), the multi-objective cuckoo search algorithm (MOCS) and the multi-objective flower pollination algorithm (MOFPA) are the selected algorithms. The power coefficient and the produced torque on the stationary blade are selected as objective functions, and the chord and twist distributions along the blade span are selected as decision variables. These algorithms are combined with BEM theory for the purpose of achieving the best Pareto front. The obtained Pareto fronts are compared with each other. Different sets of experiments are carried out considering different numbers of iterations, population sizes and tip speed ratios. The Pareto fronts achieved by MOFPA and NSGA-II have better quality in comparison to MOCS and MOPSO, but on the other hand a detailed comparison between the first fronts of MOFPA and NSGA-II indicated that the MOFPA algorithm can obtain the best Pareto front and can maximize the power coefficient up to 4.3% and the

  5. Optimal energy window setting depending on the energy resolution for radionuclides used in gamma camera imaging. Planar imaging evaluation

    International Nuclear Information System (INIS)

    Kojima, Akihiro; Watanabe, Hiroyuki; Arao, Yuichi; Kawasaki, Masaaki; Takaki, Akihiro; Matsumoto, Masanori

    2007-01-01

    In this study, we examined whether the optimal energy window (EW) setting depending on the energy resolution of a gamma camera, which we previously proposed, is valid for planar scintigraphic imaging using Tl-201, Ga-67, Tc-99m, and I-123. Image acquisitions for line sources and paper sheet phantoms containing each radionuclide were performed in air and with scattering materials. For the six photopeaks excluding that of the Hg-201 characteristic x-rays, the conventional 20%-width energy window (EW20%) setting and the optimal energy window (optimal EW) setting (15% width below 100 keV and 13% width above 100 keV) were compared. For the Hg-201 characteristic x-ray photopeak, the conventional on-peak EW20% setting was compared with the off-peak EW setting (73 keV-25%) and the wider off-peak EW setting (77 keV-29%). Image-count ratio (defined as the ratio of the image counts obtained with an EW to the total image counts obtained with an EW covering the whole photopeak for a line source in air), image quality, spatial resolutions (full width at half maximum (FWHM) and full width at tenth maximum (FWTM) values), count-profile curves, and defect-contrast values were compared between the conventional EW setting and the optimal EW setting. Except for the Hg-201 characteristic x-rays, the image-count ratios were 94-99% for the EW20% setting, but 78-89% for the optimal EW setting. However, the optimal EW setting reduced the scatter fraction (defined as the scattered-to-primary counts ratio) effectively, as compared with the EW20% setting. Consequently, all the images with the optimal EW setting gave better image quality than those with the EW20% setting. For the Hg-201 characteristic x-rays, the off-peak EW setting showed great improvement in image quality in comparison with the EW20% setting, and the wider off-peak EW setting gave the best results.
In conclusion, from our planar imaging study it was shown that although the optimal EW setting proposed by us gives less image-count ratio by
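The window settings compared in this record reduce to simple arithmetic on the photopeak energy. A sketch for the Tc-99m 140 keV photopeak follows; the 20% and 13% widths come from the record, while the helper function itself is our illustration.

```python
def energy_window(center_kev, width_percent):
    """Symmetric energy window: center plus/minus half of the stated
    percentage width of the photopeak energy."""
    half = center_kev * width_percent / 200.0
    return (center_kev - half, center_kev + half)

# Tc-99m photopeak at 140 keV: conventional 20% window versus the
# proposed 13% width for photopeaks above 100 keV.
conventional = energy_window(140.0, 20.0)   # 126-154 keV
optimal = energy_window(140.0, 13.0)        # about 130.9-149.1 keV
```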

  6. Assessing the optimality of ASHRAE climate zones using high resolution meteorological data sets

    Science.gov (United States)

    Fils, P. D.; Kumar, J.; Collier, N.; Hoffman, F. M.; Xu, M.; Forbes, W.

    2017-12-01

    Energy consumed by built infrastructure constitutes a significant fraction of the nation's energy budget. According to a 2015 US Energy Information Agency report, 41% of the energy used in the US went to residential and commercial buildings. Additional research has shown that 32% of commercial building energy goes into heating and cooling the building. The American National Standards Institute and the American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 90.1 defines the climate zones used in current practice, since heating and cooling demands are strongly influenced by spatio-temporal weather variations. For this reason, we have been assessing the optimality of the climate zones using high resolution daily climate data from NASA's DAYMET database. We analyzed time series of meteorological data sets for all ASHRAE climate zones from 1980 to 2016 inclusive. We computed the mean, standard deviation, and other statistics for a set of meteorological variables (solar radiation, maximum and minimum temperature) within each zone. By plotting all the zonal statistics, we analyzed patterns and trends in those data over the past 36 years. We compared the mean of each zone to its standard deviation to determine the range of spatial variability that exists within each zone. If the band around the mean is too large, it indicates that regions in the zone experience a wide range of weather conditions, and perhaps a common set of building design guidelines would lead to a non-optimal energy consumption scenario. In this study we have observed strong variation among the different climate zones. Some have shown consistent patterns over the past 36 years, indicating that the zone was well constructed, while others have deviated greatly from their mean, indicating that the zone needs to be reconstructed. We also looked at redesigning the climate zones based on high resolution climate data.
We are using building simulation models like EnergyPlus to develop

  7. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory could be an alternative for the optimization of preventive maintenance periodicity. In effect, this theory concerns the situation in which a decision maker has to make a choice among a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts given the observed feedback experience, the states of nature are the associated failure probabilities, and the losses are the expectations of the induced cost of maintenance and of the consequences of failures. As failure probabilities concern rare events, at the ultimate state of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for failure probabilities is modeled from expert knowledge and combined with the sparse stochastic information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to the inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical, so Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
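The Bayesian decision step described here can be sketched with a conjugate Beta prior and an expected-loss comparison. All numbers below (prior parameters, costs, failure counts) are hypothetical, not the study's data.

```python
def beta_posterior_mean(prior_a, prior_b, failures, trials):
    """Posterior mean failure probability: a Beta(prior_a, prior_b) prior
    (expert knowledge) updated with sparse feedback-experience data."""
    return (prior_a + failures) / (prior_a + prior_b + trials)

def expected_loss(maintenance_cost, failure_cost, p_fail):
    """Expected loss of choosing a given maintenance periodicity."""
    return maintenance_cost + failure_cost * p_fail

# Hypothetical choice between two inspection periodicities: a longer
# interval costs less in maintenance but raises the failure probability.
p_short = beta_posterior_mean(1, 99, failures=0, trials=50)
p_long = beta_posterior_mean(1, 99, failures=2, trials=50)
loss_short = expected_loss(10.0, 1000.0, p_short)
loss_long = expected_loss(6.0, 1000.0, p_long)
best = "short" if loss_short < loss_long else "long"
```

The decision rule is exactly the one the abstract states: pick the periodicity minimizing expected loss over the posterior.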

  8. Optimization of the core configuration design using a hybrid artificial intelligence algorithm for research reactors

    International Nuclear Information System (INIS)

    Hedayat, Afshin; Davilu, Hadi; Barfrosh, Ahmad Abdollahzadeh; Sepanloo, Kamran

    2009-01-01

    To successfully carry out material irradiation experiments and radioisotope productions, a high thermal neutron flux at the irradiation box over a desired lifetime of a core configuration is needed. On the other hand, reactor safety and operational constraints must be preserved during core configuration selection. Two main objectives and two safety and operational constraints are suggested to optimize the reactor core configuration design. The suggested parameters and conditions are considered as two separate fitness functions composed of the two main objectives and two penalty functions. This is a constrained, combinatorial, multi-objective optimization problem. In this paper, a fast and effective hybrid artificial intelligence algorithm is introduced and developed to reach a Pareto optimal set. The hybrid algorithm is composed of a fast and elitist multi-objective genetic algorithm (GA) and a fast fitness function evaluating system based on cascade feed forward artificial neural networks (ANNs). A specific GA representation of the core configuration and special GA operators are introduced and used to overcome the combinatorial constraints of this optimization problem. A software package (Core Pattern Calculator 1) is developed to prepare and reform the required data for ANN training and also to revise the optimization results. Some practical test parameters and conditions are suggested to adjust the main parameters of the hybrid algorithm. Results show that the introduced ANNs can be trained to estimate selected core parameters of a research reactor very quickly, which effectively improves the optimization process. Final optimization results show that a uniform and dense diversity of Pareto fronts is gained over a wide range of fitness function values. To allow a more careful selection of Pareto optimal solutions, a revision system is introduced and used. The revision of the gained Pareto optimal set is performed using the developed software package.
Also some secondary operational

  9. Optimization of the core configuration design using a hybrid artificial intelligence algorithm for research reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hedayat, Afshin, E-mail: ahedayat@aut.ac.i [Department of Nuclear Engineering and Physics, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, P.O. Box 15875-4413, Tehran (Iran, Islamic Republic of); Reactor Research and Development School, Nuclear Science and Technology Research Institute (NSTRI), End of North Karegar Street, P.O. Box 14395-836, Tehran (Iran, Islamic Republic of); Davilu, Hadi [Department of Nuclear Engineering and Physics, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, P.O. Box 15875-4413, Tehran (Iran, Islamic Republic of); Barfrosh, Ahmad Abdollahzadeh [Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Avenue, P.O. Box 15875-4413, Tehran (Iran, Islamic Republic of); Sepanloo, Kamran [Reactor Research and Development School, Nuclear Science and Technology Research Institute (NSTRI), End of North Karegar Street, P.O. Box 14395-836, Tehran (Iran, Islamic Republic of)

    2009-12-15

    To successfully carry out material irradiation experiments and radioisotope productions, a high thermal neutron flux at the irradiation box over a desired lifetime of a core configuration is needed. On the other hand, reactor safety and operational constraints must be preserved during core configuration selection. Two main objectives and two safety and operational constraints are suggested to optimize the reactor core configuration design. The suggested parameters and conditions are considered as two separate fitness functions composed of the two main objectives and two penalty functions. This is a constrained, combinatorial, multi-objective optimization problem. In this paper, a fast and effective hybrid artificial intelligence algorithm is introduced and developed to reach a Pareto optimal set. The hybrid algorithm is composed of a fast and elitist multi-objective genetic algorithm (GA) and a fast fitness function evaluating system based on cascade feed forward artificial neural networks (ANNs). A specific GA representation of the core configuration and special GA operators are introduced and used to overcome the combinatorial constraints of this optimization problem. A software package (Core Pattern Calculator 1) is developed to prepare and reform the required data for ANN training and also to revise the optimization results. Some practical test parameters and conditions are suggested to adjust the main parameters of the hybrid algorithm. Results show that the introduced ANNs can be trained to estimate selected core parameters of a research reactor very quickly, which effectively improves the optimization process. Final optimization results show that a uniform and dense diversity of Pareto fronts is gained over a wide range of fitness function values. To allow a more careful selection of Pareto optimal solutions, a revision system is introduced and used. The revision of the gained Pareto optimal set is performed using the developed software package.
Also some secondary operational

  10. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, it stands to reason that multiobjective functions are needed for the calibration of a hydrologic model. Multiobjective algorithms based on the theory of nondominance are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and Chaos searching (MODE-CMCS) is proposed to optimize the daily streamflow forecasting model. In addition, to enhance the diversity of the Pareto solutions, a more precise crowding distance assigner is presented in this paper. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set. A novel diversity performance metric, independent of the Pareto set size, is put forward in this research. The efficacy of the new MODE-CMCS algorithm is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on support vector machines (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for automatic calibration of hydrologic models.
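The crowding-distance assignment that this record refines can be sketched in its standard NSGA-II form; the two-objective front below is hypothetical (both objectives minimized).

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance: for each objective, sort the front
    and add each solution's normalized gap between its two neighbors;
    boundary solutions get infinity so the extremes are always kept."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: all values equal
        for pos, i in enumerate(order[1:-1], 1):
            gap = front[order[pos + 1]][k] - front[order[pos - 1]][k]
            dist[i] += gap / (hi - lo)
    return dist

# Hypothetical nondominated front of a 2-objective calibration problem.
front = [(0.0, 1.0), (0.25, 0.6), (0.5, 0.5), (1.0, 0.0)]
d = crowding_distance(front)
```

Solutions with larger `d` sit in sparser regions of the front and are preferred when the archive must be truncated.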

  11. On the choice of an optimal value-set of qualitative attributes for information retrieval in databases

    International Nuclear Information System (INIS)

    Ryjov, A.; Loginov, D.

    1994-01-01

    The problem of choosing an optimal set of significances of qualitative attributes for information retrieval in databases is addressed. Given a particular database, a set of significances is called optimal if it minimizes the losses of information and the information noise for information retrieval in the database. Obviously, such a set of significances depends on the statistical parameters of the database. The software described here calculates, on the basis of the statistical parameters of a given database, the losses of information and the information noise for arbitrary sets of significances of qualitative attributes. The software also permits comparison of various sets of significances of qualitative attributes and selection of the optimal set of significances

  12. Chaotic improved PSO-based multi-objective optimization for minimization of power losses and L index in power systems

    International Nuclear Information System (INIS)

    Chen, Gonggui; Liu, Lilan; Song, Peizhu; Du, Yangwei

    2014-01-01

    Highlights: • New method for the MOORPD problem using MOCIPSO and MOIPSO approaches. • A constrain-prior Pareto-dominance method is proposed to meet the constraints. • The limits of the apparent power flow of transmission lines are considered. • An MOORPD model is built for the MOORPD problem. • The results achieved by the MOCIPSO and MOIPSO approaches are better than those of the MOPSO method. - Abstract: Multi-objective optimal reactive power dispatch (MOORPD) seeks not only to minimize power losses, but also to improve the stability of the power system simultaneously. In this paper, static voltage stability enhancement is achieved by incorporating the L index in the MOORPD problem. Chaotic improved PSO-based multi-objective optimization (MOCIPSO) and improved PSO-based multi-objective optimization (MOIPSO) approaches are proposed for solving complex multi-objective, mixed integer nonlinear problems such as the simultaneous minimization of power losses and the L index in power systems. In the MOCIPSO- and MOIPSO-based optimization approaches, a crossover operator is proposed to enhance PSO diversity and improve global searching capability, and for the MOCIPSO-based approach, chaotic sequences based on the logistic map, instead of random sequences, are introduced into PSO to enhance exploitation capability. In the two approaches, the constrain-prior Pareto-dominance method (CPM) is proposed to meet the inequality constraints on state variables, the sorting and crowding distance methods are used to maintain a well-distributed set of Pareto optimal solutions, and moreover, fuzzy set theory is employed to extract the best compromise solution from the Pareto optimal curve. The proposed approaches have been examined and tested on the IEEE 30 bus and the IEEE 57 bus power systems. The performances of the MOCIPSO, MOIPSO, and multi-objective PSO (MOPSO) approaches are compared with respect to multi-objective performance measures. The simulation results are promising and confirm the ability of MOCIPSO and
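The constrain-prior dominance idea (feasibility first, then violation magnitude, then plain Pareto dominance) can be sketched as follows. The objective vectors are hypothetical, and the paper's exact CPM rules may differ in detail.

```python
def cp_dominates(a, b, viol_a, viol_b):
    """Constrain-prior dominance check (minimization): a feasible solution
    beats any infeasible one, lower total constraint violation beats higher,
    and plain Pareto dominance decides only between two feasible solutions."""
    if viol_a == 0.0 and viol_b > 0.0:
        return True
    if viol_a > 0.0:
        return viol_b > 0.0 and viol_a < viol_b
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (power loss, L index) objective vectors.
feasible_beats_infeasible = cp_dominates((0.05, 0.10), (0.04, 0.09),
                                         viol_a=0.0, viol_b=0.5)
pareto_between_feasible = cp_dominates((0.04, 0.09), (0.05, 0.10),
                                       viol_a=0.0, viol_b=0.0)
```

Note the first call: the feasible solution wins even though its objective values are worse, which is what pushes the search back inside the feasible region.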

  13. On the size distribution of cities: an economic interpretation of the Pareto coefficient.

    Science.gov (United States)

    Suh, S H

    1987-01-01

    "Both the hierarchy and the stochastic models of size distribution of cities are analyzed in order to explain the Pareto coefficient by economic variables. In hierarchy models, it is found that the rate of variation in the productivity of cities and that in the probability of emergence of cities can explain the Pareto coefficient. In stochastic models, the productivity of cities is found to explain the Pareto coefficient. New city-size distribution functions, in which the Pareto coefficient is decomposed by economic variables, are estimated." excerpt

  14. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

    Motivated by the inherently multi-objective character of inverse planning in accurate radiotherapy, this paper studies the multi-objective optimization of inverse planning based on the Pareto solution set. First, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed, and that the dose distribution of each solution not only approached the expected dose distribution but also met the dose-volume constraints. This indicates that the clinical requirements were better satisfied with this method and that the planner can select the optimal treatment plan from the non-dominated solution set. (authors)
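
    The non-dominated solution set central to this record (and to several others below) can be extracted from any pool of candidate solutions with a simple dominance filter. A minimal sketch, not the paper's NSGA-II implementation; the objective pairs are hypothetical, and both objectives are minimized:

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # keep only the solutions not dominated by any other solution
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# e.g. hypothetical (target-dose error, OAR dose) pairs for five plans
plans = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = pareto_front(plans)
# (3.0, 4.0) is dominated by (2.0, 3.0); (5.0, 5.0) by every other plan
```

    The planner then chooses among the surviving trade-offs rather than receiving a single answer.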

  15. Adaptive Conflict-Free Optimization of Rule Sets for Network Security Packet Filtering Devices

    Directory of Open Access Journals (Sweden)

    Andrea Baiocchi

    2015-01-01

    Full Text Available Packet filtering and processing rules management in firewalls and security gateways has become commonplace in increasingly complex networks. On one side, there is a need to maintain the logic of high-level policies, which requires administrators to implement and update a large number of filtering rules while keeping them conflict-free, that is, avoiding security inconsistencies. On the other side, traffic-adaptive optimization of large rule lists is useful for general-purpose computers used as filtering devices without specifically designed hardware, to face growing link speeds and to harden filtering devices against DoS and DDoS attacks. Our work joins the two issues in an innovative way and defines a traffic-adaptive algorithm to find conflict-free optimized rule sets, relying on information gathered from traffic logs. The proposed approach suits current technology architectures and exploits available features, like traffic log databases, to minimize the impact of ACO development on the packet filtering devices. We demonstrate the benefit of the proposed algorithm through measurements on a test bed made up of real-life, commercial packet filtering devices.

  16. Income dynamics with a stationary double Pareto distribution.

    Science.gov (United States)

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways--(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model--and obtain the same value of 8.4.
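
    The cross-section approach (i) above amounts to fitting a Pareto tail by maximum likelihood. A minimal sketch of such an estimator, checked against synthetic draws; the threshold and sample here are hypothetical, not the US wage data:

```python
import math
import random

def pareto_mle_alpha(sample, x_min):
    # MLE (Hill-type) estimate of the Pareto exponent alpha from the
    # observations at or above the known threshold x_min
    tail = [x for x in sample if x >= x_min]
    return len(tail) / sum(math.log(x / x_min) for x in tail)

# sanity check on synthetic Pareto(alpha = 8.4) draws, via inverse-CDF sampling
random.seed(0)
alpha_true, x_min = 8.4, 1.0
draws = [x_min * (1.0 - random.random()) ** (-1.0 / alpha_true)
         for _ in range(50_000)]
alpha_hat = pareto_mle_alpha(draws, x_min)  # close to 8.4 for large samples
```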

  17. Bayesian modeling to paired comparison data via the Pareto distribution

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2017-12-01

    Full Text Available A probabilistic approach to building models for paired comparison experiments based on the comparison of two Pareto variables is considered. Analysis of the proposed model is carried out in classical as well as Bayesian frameworks. Informative and uninformative priors are employed to accommodate the prior information. A simulation study is conducted to assess the suitability and performance of the model under theoretical conditions. Appropriateness of fit of the model is also assessed. The entire inferential procedure is illustrated by comparing certain cricket teams using a real data set.

  18. Optimal allocation of the limited oral cholera vaccine supply between endemic and epidemic settings.

    Science.gov (United States)

    Moore, Sean M; Lessler, Justin

    2015-10-06

    The World Health Organization (WHO) recently established a global stockpile of oral cholera vaccine (OCV) to be preferentially used in epidemic response (reactive campaigns) with any vaccine remaining after 1 year allocated to endemic settings. Hence, the number of cholera cases or deaths prevented in an endemic setting represents the minimum utility of these doses, and the optimal risk-averse response to any reactive vaccination request (i.e. the minimax strategy) is one that allocates the remaining doses between the requested epidemic response and endemic use in order to ensure that at least this minimum utility is achieved. Using mathematical models, we find that the best minimax strategy is to allocate the majority of doses to reactive campaigns, unless the request came late in the targeted epidemic. As vaccine supplies dwindle, the case for reactive use of the remaining doses grows stronger. Our analysis provides a lower bound for the amount of OCV to keep in reserve when responding to any request. These results provide a strategic context for the fulfilment of requests to the stockpile, and define allocation strategies that minimize the number of OCV doses that are allocated to suboptimal situations. © 2015 The Authors.

  19. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Science.gov (United States)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industrial fields, products come from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, from which a joint censoring scheme arises. In this article we consider the Pareto lifetime distribution with a joint type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
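
    When the scale parameter is known, the type-II censored Pareto likelihood yields a closed-form MLE for the shape parameter. A minimal single-sample sketch with hypothetical parameter values; the article's joint two-population scheme is more general:

```python
import math
import random

def pareto_alpha_type2(observed_sorted, n, beta):
    # MLE of the Pareto shape alpha under type-II censoring: only the
    # r smallest of n lifetimes are observed; the scale beta is known
    r, x_r = len(observed_sorted), observed_sorted[-1]
    total = (sum(math.log(x / beta) for x in observed_sorted)
             + (n - r) * math.log(x_r / beta))
    return r / total

# synthetic check: censor a Pareto(alpha = 2, beta = 1) sample at r of n
random.seed(1)
alpha, beta, n, r = 2.0, 1.0, 20_000, 15_000
lifetimes = sorted(beta * (1.0 - random.random()) ** (-1.0 / alpha)
                   for _ in range(n))
alpha_hat = pareto_alpha_type2(lifetimes[:r], n, beta)  # close to 2.0
```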

  20. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the gradient direction was computed accurately using the back-propagation technique and scaled using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of

  1. Achieving Optimal Privacy in Trust-Aware Social Recommender Systems

    Science.gov (United States)

    Dokoohaki, Nima; Kaleli, Cihan; Polat, Huseyin; Matskin, Mihhail

    Collaborative filtering (CF) recommenders are subject to numerous shortcomings such as centralized processing, vulnerability to shilling attacks, and, most important of all, privacy. To overcome these obstacles, researchers proposed utilizing interpersonal trust between users to alleviate many of these crucial shortcomings. Until now, attention has mainly been paid to the strong points of trust-aware recommenders, such as alleviating profile sparsity or calculation cost efficiency, while the least attention has been paid to investigating the notion of privacy surrounding the disclosure of individual ratings and, most importantly, the protection of trust computation across the social networks forming the backbone of these systems. To contribute to addressing the problem of privacy in trust-aware recommenders, within this paper we first introduce a framework for enabling privacy-preserving trust-aware recommendation generation. While the trust mechanism aims at elevating the recommender's accuracy, preserving privacy requires decreasing it. Since, within this context, privacy and accuracy are conflicting goals, we show that a Pareto set can be found as an optimal setting for both privacy-preserving and trust-enabling mechanisms. We show that this Pareto set, when used as the configuration for measuring the accuracy of the base collaborative filtering engine, yields an optimized tradeoff between the conflicting goals of privacy and accuracy. We prove this concept along with the applicability of our framework by experimenting with accuracy and privacy factors, and we show through experiments how such an optimal set can be inferred.

  2. Several comparison result of two types of equilibrium (Pareto Schemes and Stackelberg Scheme) of game theory approach in probabilistic vendor – buyer supply chain system with imperfect quality

    Science.gov (United States)

    Setiawan, R.

    2018-05-01

    In this paper, the Economic Order Quantity (EOQ) of a vendor-buyer supply-chain model under probabilistic conditions with imperfect-quality items is analysed. The analysis employs two concepts from the game theory approach, Stackelberg equilibrium and Pareto optimality, under non-cooperative and cooperative games, respectively. We also compare the optimal results of the integrated scheme and the game theory approach, based on analytical and numerical results using appropriate simulation data.

  3. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  4. Strong Convergence Bound of the Pareto Index Estimator under Right Censoring

    Directory of Open Access Journals (Sweden)

    Peng Zuoxiang

    2010-01-01

    Full Text Available Let be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function as , where represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right censored Pareto index estimator under second-order regularly varying conditions.

  5. The feasibility of using Pareto fronts for comparison of treatment planning systems and delivery techniques

    DEFF Research Database (Denmark)

    Ottosson, Rickard O; Engstrom, Per E; Sjöström, David

    2008-01-01

    constitute the Pareto front. The Pareto concept applies well to the inverse planning process, which involves inherently contradictory objectives, high and uniform target dose on one hand, and sparing of surrounding tissue and nearby organs at risk (OAR) on the other. Due to the specific characteristics...

  6. Vilfredo Pareto. L'economista alla luce delle lettere a Maffeo Pantaleoni. (Vilfredo Pareto. The economist in the light of his letters to Maffeo Pantaleoni

    Directory of Open Access Journals (Sweden)

    E. SCHNEIDER

    2014-07-01

    Full Text Available The article is part of a special issue on the occasion of the publication of the entire scientific correspondence of Vilfredo Pareto with Maffeo Pantaleoni. The author reconstructs the beginning of their correspondence and the debate in pure mathematical economics, and draws main conclusions on the differing views of Pareto with respect to Marshall, Edgeworth and Fisher. JEL: B16, B31, C02, C60

  7. A guide to multi-objective optimization for ecological problems with an application to cackling goose management

    Science.gov (United States)

    Williams, Perry J.; Kendall, William L.

    2017-01-01

    Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple-objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application to solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution
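
    The weighted-sum method named in this record can be sketched in a few lines; the objective values and weights below are hypothetical, not those of the cackling goose problem:

```python
def weighted_sum_choice(solutions, weights):
    # a priori strategy: collapse each objective vector into one scalar
    # score using preference weights, then take the minimizer
    return min(solutions, key=lambda s: sum(w * f for w, f in zip(weights, s)))

# hypothetical (crop damage, food-security shortfall) scores per target
targets = [(10.0, 1.0), (5.0, 4.0), (1.0, 9.0)]
balanced = weighted_sum_choice(targets, (0.5, 0.5))   # -> (5.0, 4.0)
lopsided = weighted_sum_choice(targets, (0.9, 0.1))   # -> (1.0, 9.0)
```

    Note how the recommended target changes with the weights: this is exactly why the a priori strategy requires stating preferences before seeing the trade-offs, whereas the a posteriori strategy inspects the whole Pareto set first.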

  8. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    Science.gov (United States)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, the ratio R_{e+e-} and the decay width Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on

  9. Zipf's law and influential factors of the Pareto exponent of the city size distribution: Evidence from China

    OpenAIRE

    GAO Hongying; WU Kangping

    2007-01-01

    This paper estimates the Pareto exponent of the city size (population size and economy size) distribution for all provinces and three regions in China in 1997, 2000 and 2003 by OLS, comparatively analyzes the Pareto exponent across sections and over time, and empirically analyzes the factors which impact the Pareto exponents of provinces. Our analyses show that the size distributions of cities in China follow the Pareto distribution and exhibit structural features. Variations in the value of the P...

  10. Coordinated Voltage Control in Distribution Network with the Presence of DGs and Variable Loads Using Pareto and Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    José Raúl Castro

    2016-02-01

    Full Text Available This paper presents an efficient algorithm to solve the multi-objective (MO) voltage control problem in distribution networks. The proposed algorithm minimizes the following three objectives: voltage variation on pilot buses, reactive power production ratio deviation, and generator voltage deviation. This work leverages two optimization techniques: fuzzy logic to find the optimum value of the reactive power of the distributed generation (DG), and Pareto optimization to find the optimal value of the pilot bus voltage, so as to produce lower losses under the constraint that the voltage remains within established limits. Variable loads and DGs are taken into account in this paper. The algorithm is tested on an IEEE 13-node test feeder and the results show the effectiveness of the proposed model.

  11. Existence theorem and optimality conditions for a class of convex semi-infinite problems with noncompact index sets

    Directory of Open Access Journals (Sweden)

    Olga Kostyukova

    2017-11-01

    Full Text Available The paper is devoted to the study of a special class of semi-infinite problems arising in nonlinear parametric semi-infinite programming when the differential properties of the solutions are studied. These problems are convex and possess noncompact index sets. In the paper, we present conditions guaranteeing the existence of optimal solutions, and prove a new optimality criterion. An example illustrating the obtained results is presented.

  12. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Outranking with PROMETHEE II then helps the decision maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
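
    The even spread along the Pareto front that NSGA-II achieves relies on its crowding-distance measure. A minimal sketch of that computation, not the paper's implementation; the example front is hypothetical:

```python
def crowding_distance(front):
    # NSGA-II crowding distance: for each solution, sum the normalized
    # gaps between its two neighbours along every objective axis
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if hi > lo:
            for j in range(1, n - 1):
                dist[order[j]] += (front[order[j + 1]][k]
                                   - front[order[j - 1]][k]) / (hi - lo)
    return dist

# boundary solutions get infinite distance and are always retained;
# interior solutions in sparse regions of the front score higher
front = [(0.0, 4.0), (1.0, 2.0), (2.0, 1.0), (4.0, 0.0)]
distances = crowding_distance(front)
```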

  13. Dictatorship, liberalism and the Pareto rule: Possible and impossible

    Directory of Open Access Journals (Sweden)

    Boričić Branislav

    2009-01-01

    Full Text Available The current economic crisis has shaken belief in the capacity of neoliberal 'free market' policies. Numerous supporters of state intervention have arisen, and interest in social choice theory has revived. In this paper we consider three standard properties for aggregating individual preferences into social preferences: dictatorship, liberalism and the Pareto rule, together with their formal negations. The context of pure first-order classical logic makes it possible to show how some combinations of the above-mentioned conditions, under the hypothesis of unrestricted domain, form simple and reasonable examples of possible or impossible social choice systems. Due to their simplicity, these examples, including the famous 'liberal paradox', could have particular didactic value.

  14. Origin of Pareto-like spatial distributions in ecosystems.

    Science.gov (United States)

    Manor, Alon; Shnerb, Nadav M

    2008-12-31

    Recent studies of cluster distribution in various ecosystems revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that these patch statistics are a manifestation of the law of proportionate effect. Mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (such as desertification) manifest themselves in a drastic change of the stability properties of spatial colonies.

  15. Optimization of GEANT4 settings for Proton Pencil Beam Scanning simulations using GATE

    Energy Technology Data Exchange (ETDEWEB)

    Grevillot, Loic, E-mail: loic.grevillot@gmail.co [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); IBA, B-1348 Louvain-la-Neuve (Belgium); Frisson, Thibault [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Zahra, Nabil [Universite de Lyon, F-69622 Lyon (France); IPNL, CNRS UMR 5822, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Bertrand, Damien; Stichelbaut, Frederic [IBA, B-1348 Louvain-la-Neuve (Belgium); Freud, Nicolas [Universite de Lyon, F-69622 Lyon (France); CNDRI, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Sarrut, David [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France)

    2010-10-15

    This study reports the investigation of different GEANT4 settings for proton therapy applications in the context of Treatment Planning System comparisons. The GEANT4.9.2 release was used through the GATE platform. We focused on the Pencil Beam Scanning delivery technique, which allows for intensity modulated proton therapy applications. The most relevant options and parameters (range cut, step size, database binning) of the simulation that influence the dose deposition were investigated, in order to determine a robust, accurate and efficient simulation environment. In this perspective, simulations of depth-dose profiles and transverse profiles at different depths and energies between 100 and 230 MeV were assessed against reference measurements in water and PMMA. These measurements were performed in Essen, Germany, with the IBA dedicated Pencil Beam Scanning system, using Bragg-peak chambers and radiochromic films. GEANT4 simulations were also compared to the PHITS.2.14 and MCNPX.2.5.0 Monte Carlo codes. Depth-dose simulations reached 0.3 mm range accuracy compared to NIST CSDA ranges, with a dose agreement of about 1% over a set of five different energies. The transverse profiles simulated using the different Monte Carlo codes showed discrepancies, with up to 15% difference in beam widening between GEANT4 and MCNPX in water. An 8% difference between the GEANT4 multiple scattering and single scattering algorithms was observed. The simulations showed the inability to reproduce the measured transverse dose spreading with depth in PMMA, corroborating the fact that GEANT4 underestimates the lateral dose spreading. GATE was found to be a very convenient simulation environment to perform this study. A reference physics list and an optimized parameter list have been proposed. Satisfactory agreement with depth-dose profile measurements was obtained. The simulation of transverse profiles using different Monte Carlo codes showed significant deviations. This point

  16. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    Science.gov (United States)

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. Line profiles of

  17. Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data

    Science.gov (United States)

    Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd

    2018-02-01

    In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.

  18. Multi-Objective Stochastic Optimization Programs for a Non-Life Insurance Company under Solvency Constraints

    Directory of Open Access Journals (Sweden)

    Massimiliano Kaucic

    2015-09-01

    Full Text Available In the paper, we introduce a multi-objective scenario-based optimization approach for chance-constrained portfolio selection problems. More specifically, a modified version of the normal constraint method is implemented with a global solver in order to generate a dotted approximation of the Pareto frontier for bi- and tri-objective programming problems. Numerical experiments are carried out on a set of portfolios to be optimized for an EU-based non-life insurance company. Both performance indicators and risk measures are managed as objectives. Results show that this procedure is effective and readily applicable to achieve suitable risk-reward tradeoff analysis.

  19. A synthetic layout optimization of discrete heat sources flush mounted on a laminar flow cooled flat plate based on the constructal law

    International Nuclear Information System (INIS)

    Shi, Zhongyuan; Dong, Tao

    2015-01-01

    Highlights: • A constructal thermohydraulic optimization was carried out. • The effect of the manufacturing limit on the Pareto solution set was discussed. • The suitable constraints may differ from those on a quasi-continuous basis. - Abstract: A synthetic optimization is presented for the Pareto layouts of discrete heat sources (with uniform heat flux) flush mounted on a flat plate over which laminar flow serves for cooling. The peak temperatures and the flow drag loss are minimized simultaneously, provided that the total heat dissipation rate and the plate length are held constant. The impact of the manufacturing limit, i.e. the minimum length of the heated or the adiabatic patch, on the optimum layout is discussed. The results in general comply with analytical deductions based on the constructal theory. However, in a finite-length scenario, the geometric constraints on the adiabatic spacing differ from those that fit the situation in which maximum heat transfer performance alone is to be achieved.

  20. METHOD FOR OPTIMAL RESOLUTION OF MULTI-AIRCRAFT CONFLICTS IN THREE-DIMENSIONAL SPACE

    Directory of Open Access Journals (Sweden)

    Denys Vasyliev

    2017-03-01

    Full Text Available Purpose: The risk of critical proximities of several aircraft and the appearance of multi-aircraft conflicts increase under current conditions of high air traffic dynamics and density. An actual problem is the development of methods for optimal multi-aircraft conflict resolution that provide the synthesis of conflict-free trajectories in three-dimensional space. Methods: A method for the optimal resolution of multi-aircraft conflicts using heading, speed and altitude change maneuvers has been developed. The optimality criteria are flight regularity, flight economy and the complexity of maneuvering. The method provides the sequential synthesis of the Pareto-optimal set of combinations of conflict-free flight trajectories using multi-objective dynamic programming, followed by the selection of the optimal combination using a convolution of the optimality criteria. Within the described method the following are defined: the procedure for determining the combinations of conflict-free aircraft states that define the combinations of Pareto-optimal trajectories; and the limitations on the discretization of the conflict resolution process that ensure the absence of unobservable separation violations. Results: The analysis of the proposed method is performed using computer simulation, the results of which show that the synthesized combination of conflict-free trajectories ensures multi-aircraft conflict avoidance and complies with the defined optimality criteria. Discussion: The proposed method can be used for the development of new automated air traffic control systems, airborne collision avoidance systems, and intelligent air traffic control simulators, and for research activities.
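The final selection step described in this record, a convolution of the optimality criteria over the Pareto set, reduces to a weighted sum. The candidate cost tuples and the weights below are illustrative assumptions, not values from the paper:

```python
# Pick one combination of conflict-free trajectories from a Pareto set by
# minimizing a linear convolution of the criteria (all values illustrative).

def select_by_convolution(pareto_set, weights):
    """Return the Pareto element minimizing the weighted sum of its costs."""
    return min(pareto_set,
               key=lambda costs: sum(w * c for w, c in zip(weights, costs)))

# criteria per combination: (regularity deviation, extra fuel, maneuver
# complexity), all to be minimized
combinations = [(0.0, 5.0, 3.0), (1.0, 2.0, 2.0), (2.0, 1.0, 1.0)]
best = select_by_convolution(combinations, weights=(0.6, 0.3, 0.1))
```

With these weights the middle combination wins; changing the weights shifts the trade-off toward regularity, economy, or simplicity of maneuvering.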

  1. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2012-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  2. A two-level strategy to realize life-cycle production optimization in an operational setting

    NARCIS (Netherlands)

    Essen, van G.M.; Hof, Van den P.M.J.; Jansen, J.D.

    2013-01-01

    We present a two-level strategy to improve robustness against uncertainty and model errors in life-cycle flooding optimization. At the upper level, a physics-based large-scale reservoir model is used to determine optimal life-cycle injection and production profiles. At the lower level these profiles

  3. Multi-objective optimization design of air distribution of grate cooler by entropy generation minimization and genetic algorithm

    International Nuclear Information System (INIS)

    Shao, Wei; Cui, Zheng; Cheng, Lin

    2016-01-01

    Highlights: • A multi-objective optimization model of the air distribution of a grate cooler by genetic algorithm is proposed. • The Pareto front is obtained and validated by comparison with operating data. • Optimal schemes are compared and selected by engineering background. • Total power consumption after optimization decreases by 61.1%. • The clinker layers on the three grate plates are thinner. - Abstract: The cooling air distributions of a grate cooler exert a great influence on the clinker cooling efficiency and the power consumption of the cooling fans. A multi-objective optimization model of the air distributions of a grate cooler, by analogy with a cross-flow heat exchanger, is proposed in this paper. Firstly, thermodynamic and flow models of the clinker cooling process are developed. Then, based on entropy generation minimization analysis, modified entropy generation numbers caused by heat transfer and pressure drop are chosen as the objective functions, which are optimized by a genetic algorithm. The design variables are the superficial velocities of the air chambers and the thicknesses of the clinker layers on the different grate plates. A set of Pareto optimal solutions, in which the two objectives are optimized simultaneously, is achieved. Scattered distributions of the design variables resulting from the conflict between the two objectives are brought out. The final optimal air distribution and clinker layer thicknesses are selected from the Pareto optimal solutions based on minimization of the power consumption of the cooling fans and validated by measurements. Compared with the actual operating scheme, the total air volume of the optimized scheme decreases by 2.4%, the total power consumption of the cooling fans decreases by 61.1% and the outlet temperature of the clinker decreases by 122.9 °C, which shows a remarkable energy-saving effect.

  4. Simulation of neuro-fuzzy model for optimization of combine header setting

    Directory of Open Access Journals (Sweden)

    S Zareei

    2016-09-01

    of reel tine bar from cutter bar and vertical distance of reel tine bar from cutter bar could be recommended so as to minimize header loss. Conclusions In the final step, the designed controller was simulated in SIMULINK. The controller can change the settings of the header components according to their impact on gathering loss; in each step it compares the gathering loss with the optimal value and, if it exceeds the optimum, changes the settings again. The simulation results were evaluated as satisfactory.

  5. Wind-break walls with optimized setting angles for natural draft dry cooling tower with vertical radiators

    International Nuclear Information System (INIS)

    Ma, Huan; Si, Fengqi; Kong, Yu; Zhu, Kangping; Yan, Wensheng

    2017-01-01

    Highlights: • Aerodynamic field around dry cooling tower is presented with numerical model. • Performances of cooling deltas are figured out by air inflow velocity analysis. • Setting angles of wind-break walls are optimized to improve cooling performance. • Optimized walls can reduce the interference on air inflow at low wind speeds. • Optimized walls create stronger outside secondary flow at high wind speeds. - Abstract: To get larger cooling performance enhancement for natural draft dry cooling tower with vertical cooling deltas under crosswind, setting angles of wind-break walls were optimized. Considering specific structure of each cooling delta, an efficient numerical model was established and validated by some published results. Aerodynamic fields around cooling deltas under various crosswind speeds were presented, and outlet water temperatures of the two columns of cooling delta were exported as well. It was found that for each cooling delta, there was a difference in cooling performance between the two columns, which is closely related to the characteristic of main airflow outside the tower. Using the present model, air inflow deviation angles at cooling deltas’ inlet were calculated, and the effects of air inflow deviation on outlet water temperatures of the two columns for corresponding cooling delta were explained in detail. Subsequently, at cooling deltas’ inlet along radial direction of the tower, setting angles of wind-break walls were optimized equal to air inflow deviation angles when no airflow separation appeared outside the tower, while equal to zero when outside airflow separation occurred. In addition, wind-break walls with optimized setting angles were verified to be extremely effective, compared to the previous radial walls.

  6. Population health management as a strategy for creation of optimal healing environments in worksite and corporate settings.

    Science.gov (United States)

    Chapman, Larry S; Pelletier, Kenneth R

    2004-01-01

    This paper provides an overview of a population health management (PHM) approach to the creation of optimal healing environments (OHEs) in worksite and corporate settings. It presents a framework for consideration as the context for potential research projects to examine the health, well-being, and economic effects of a set of newer "virtual" prevention interventions operating in an integrated manner in worksite settings. The main topics discussed are the fundamentals of PHM, with basic terminology and core principles, a description of PHM core technology, and the implications of a PHM approach to creating OHEs.

  7. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    Science.gov (United States)

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a genus of fungi known as an elemental component of drugs for cancer prevention. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, numerous single-factor experiments were performed and a large volume of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model for predicting Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values are in accordance with biological experimental results, which indicates that our method has good predictability for culture condition optimization. PMID:27610365

  8. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    Science.gov (United States)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this

  9. Efficiency enhancement of a gas turbine cycle using an optimized tubular recuperative heat exchanger

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Mehrabipour, Reza

    2012-01-01

    A simple gas turbine cycle, namely the Kraftwerk Union AG unit comprising a Siemens model V93.1 gas turbine with 60 MW nominal power and 26.0% thermal efficiency utilized in the Fars power plant, is considered for efficiency enhancement. A typical tubular vertical recuperative heat exchanger is designed to be integrated into the cycle as an air pre-heater for thermal efficiency improvement. The thermal and geometric specifications of the recuperative heat exchanger are obtained in a multi-objective optimization process. The exergetic efficiency of the gas cycle is maximized while the payback time for the capital investment of the recuperator is minimized. The combination of these objectives and decision variables with suitable engineering and physical constraints forms an MINLP optimization problem. The optimization is performed using the NSGA-II algorithm, and Pareto optimal frontiers are obtained in three cases: the minimum, average and maximum ambient air temperatures. In each case, the final optimal solution is selected using three decision-making approaches: the fuzzy Bellman-Zadeh, LINMAP and TOPSIS methods. It is shown that the TOPSIS and LINMAP decision-makers, when applied to the Pareto frontier obtained at the average ambient air temperature, yield the best results in comparison to the other cases. -- Highlights: ► A simple Brayton gas cycle is considered for efficiency improvement by integrating a recuperator. ► Objective functions based on thermodynamic and economic analysis are obtained. ► The payback time for the capital investment is minimized and the exergetic efficiency of the system is maximized. ► Pareto optimal frontiers at various site conditions are obtained. ► A final optimal configuration is found using various decision-making approaches.
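Of the decision-making approaches named in this record, TOPSIS is the most mechanical: normalize the criteria, locate the ideal and nadir points, and rank by relative closeness to the ideal. The sketch below applies it to a hypothetical two-criterion Pareto frontier; the frontier values and the equal weights are illustrative assumptions, not data from the study.

```python
import math

# Minimal TOPSIS ranking over (exergetic efficiency [maximize],
# payback time in years [minimize]) points.

def topsis(points, weights, benefit):
    """Return the index of the point closest (relatively) to the ideal."""
    m = len(weights)
    # vector-normalize and weight each criterion column
    norms = [math.sqrt(sum(p[j] ** 2 for p in points)) for j in range(m)]
    v = [[weights[j] * p[j] / norms[j] for j in range(m)] for p in points]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    nadir = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # closeness coefficient: distance to nadir over total distance
    scores = [dist(r, nadir) / (dist(r, ideal) + dist(r, nadir)) for r in v]
    return max(range(len(points)), key=scores.__getitem__)

frontier = [(0.30, 9.0), (0.34, 11.0), (0.38, 15.0)]
best = topsis(frontier, weights=(0.5, 0.5), benefit=(True, False))
```

With equal weights, the short-payback end of this made-up frontier wins; weighting efficiency more heavily moves the choice along the front.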

  10. SU-E-T-628: A Cloud Computing Based Multi-Objective Optimization Method for Inverse Treatment Planning.

    Science.gov (United States)

    Na, Y; Suh, T; Xing, L

    2012-06-01

    Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome the hurdle by developing an efficient MO method using the emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to seamlessly scale a number of working-group instances, called workers, based on a user-defined setting that accounts for MO functions in the clinical setting. Each worker solves the objective function with an efficient sparse decomposition method. The workers are automatically terminated when their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of the plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33 GHz, 8 GB memory). It is found that the MO planning substantially speeds up the generation of the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced according to the fitted model 0.2+2.3/N, with r2>0.99 on average over the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.
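The fitted timing model reported in this record, T(N) = 0.2 + 2.3/N in normalized units, implies a speedup ceiling set by the constant (serial) term, in the spirit of Amdahl's law. A quick check:

```python
# Timing model from the abstract: T(N) = 0.2 + 2.3/N for N compute nodes
# (normalized units). Derived quantities below follow directly from it.

def fitted_time(n_nodes):
    return 0.2 + 2.3 / n_nodes

def speedup(n_nodes):
    return fitted_time(1) / fitted_time(n_nodes)

# the 0.2 serial term caps the achievable speedup at 2.5 / 0.2 = 12.5x
ceiling = fitted_time(1) / 0.2
```

For example, ten nodes give a speedup of 2.5/0.43, about 5.8x, and no number of nodes can exceed the 12.5x ceiling.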

  11. Accident investigation of construction sites in Qom city using Pareto chart (2009-2012

    Directory of Open Access Journals (Sweden)

    M. H. Beheshti

    2015-07-01

    Conclusions: Employing Pareto charts as a method for analyzing and identifying accident causes can play an effective role in the management of work-related accidents and the proper allocation of funds and time.
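Computationally, a Pareto-chart analysis of accident causes amounts to ranking causes by frequency and accumulating percentages until a chosen threshold (classically 80%) identifies the "vital few". The cause counts below are invented for illustration, not taken from the Qom study.

```python
# Minimal Pareto analysis: rank causes by count, accumulate percentages,
# and stop once the cumulative share reaches the threshold.

def pareto_analysis(counts, threshold=80.0):
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    vital, cum = [], 0.0
    for cause, n in ranked:
        cum += 100.0 * n / total
        vital.append((cause, round(cum, 1)))
        if cum >= threshold:
            break
    return vital

causes = {"falls": 40, "struck by object": 25, "electrical": 15,
          "machinery": 12, "other": 8}
```

Here the top three causes cover 80% of the accidents, so intervention effort would be concentrated on them first.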

  12. The Bayesian statistical decision theory applied to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-11-01

    The difficulty in the RCM methodology is the allocation of a new preventive maintenance periodicity to a piece of equipment when a critical failure has been identified: until now this new allocation has been based on engineering judgment, and one must wait for a full cycle of feedback experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of the preventive maintenance periodicity. This methodology has been applied to the inspection and maintenance optimization of the cylinders of diesel generator engines of 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs

  13. Strong Convergence Bound of the Pareto Index Estimator under Right Censoring

    Directory of Open Access Journals (Sweden)

    Bao Tao

    2010-01-01

    Full Text Available Let {Xn, n≥1} be a sequence of positive independent and identically distributed random variables with common Pareto-type distribution function F(x) = 1 − x^(−1/γ) lF(x), where γ > 0 and lF(x) represents a slowly varying function at infinity. In this note we study the strong convergence bound of a kind of right-censored Pareto index estimator under second-order regularly varying conditions.
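The censored estimator studied in this note is not reproduced here, but for the uncensored case the classic Hill estimator of the Pareto index γ from the k largest order statistics gives the flavor of the problem:

```python
import math

# Hill estimator: average log-excess of the k largest observations over
# the (k+1)-th largest. Consistent for the tail index gamma of a
# Pareto-type law F(x) = 1 - x**(-1/gamma) * l(x).

def hill_estimator(sample, k):
    xs = sorted(sample, reverse=True)    # X_(1) >= X_(2) >= ...
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

# sanity check on deterministic quantiles of an exact Pareto law:
# the quantile transform of F(x) = 1 - x**(-1/gamma) is x = u**(-gamma)
gamma = 0.5
n = 10000
sample = [((i + 0.5) / n) ** (-gamma) for i in range(n)]
estimate = hill_estimator(sample, k=1000)
```

On these exact quantiles the estimate lands very close to the true γ = 0.5; on random samples (and under censoring) it fluctuates, which is what convergence bounds of the kind studied here quantify.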

  14. Social welfare and the Affordable Care Act: is it ever optimal to set aside comparative cost?

    Science.gov (United States)

    Mortimer, Duncan; Peacock, Stuart

    2012-10-01

    The creation of the Patient-Centered Outcomes Research Institute (PCORI) under the Affordable Care Act has set comparative effectiveness research (CER) at centre stage of US health care reform. Comparative cost analysis has remained marginalised, and it now appears unlikely that the PCORI will require comparative cost data to be collected as an essential component of CER. In this paper, we review the literature to identify ethical and distributional objectives that might motivate calls to set priorities without regard to comparative cost. We then present argument and evidence to consider whether there is any plausible set of objectives and constraints against which priorities can be set without reference to comparative cost. We conclude that to set aside comparative cost, even after accounting for ethical and distributional constraints, would truly be to act as if money is no object. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Sensitivity analysis for decision-making using the MORE method-A Pareto approach

    International Nuclear Information System (INIS)

    Ravalico, Jakin K.; Maier, Holger R.; Dandy, Graeme C.

    2009-01-01

    Integrated Assessment Modelling (IAM) incorporates knowledge from different disciplines to provide an overarching assessment of the impact of different management decisions. The complex nature of these models, which often include non-linearities and feedback loops, requires special attention for sensitivity analysis. This is especially true when the models are used to form the basis of management decisions, where it is important to assess how sensitive the decisions being made are to changes in model parameters. This research proposes an extension to the Management Option Rank Equivalence (MORE) method of sensitivity analysis; a new method of sensitivity analysis developed specifically for use in IAM and decision-making. The extension proposes using a multi-objective Pareto optimal search to locate minimum combined parameter changes that result in a change in the preferred management option. It is demonstrated through a case study of the Namoi River, where results show that the extension to MORE is able to provide sensitivity information for individual parameters that takes into account simultaneous variations in all parameters. Furthermore, the increased sensitivities to individual parameters that are discovered when joint parameter variation is taken into account shows the importance of ensuring that any sensitivity analysis accounts for these changes.

  16. Pareto frontier analyses based decision making tool for transportation of hazardous waste

    International Nuclear Information System (INIS)

    Das, Arup; Mazumder, T.N.; Gupta, A.K.

    2012-01-01

    Highlights: ► An a posteriori method using a multi-objective approach to solve a bi-objective routing problem. ► System optimization (with multiple source–destination pairs) in a capacity-constrained network using non-dominated sorting. ► Tools like cost elasticity and angle-based focus used to analyze the Pareto frontier and aid stakeholders in making informed decisions. ► A real-life case study of the Kolkata Metropolitan Area to explain the workability of the model. - Abstract: Transportation of hazardous wastes through a region poses an immense threat to development along its road network. The risk to the population exposed to such activities has been documented in the past. However, a comprehensive framework for routing hazardous wastes has often been overlooked. A regional hazardous waste management scheme should incorporate a comprehensive framework for hazardous waste transportation, one that accounts for the various stakeholders involved in decision making. Hence, a multi-objective approach is required to safeguard the interests of all the concerned stakeholders. The objective of this study is to design a methodology for the routing of hazardous wastes between the generating units and the disposal facilities through a capacity-constrained network. The proposed methodology uses an a posteriori method with a multi-objective approach to find non-dominated solutions for a system consisting of multiple origins and destinations. A case study of the transportation of hazardous wastes in the Kolkata Metropolitan Area is provided to elucidate the methodology.

  17. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics in order to increase the certainty of reaching the global optimum solution. Genetic Algorithms, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in the accomplishment of the pre-defined goals set in the proposed scheme.
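The hyper-heuristic control structure described above, a stochastic manager choosing among low-level heuristics each iteration, can be caricatured as follows. The three "heuristics" here are trivial one-variable moves invented purely to show the loop, not the scheme of the article:

```python
import random

# Toy hyper-heuristic: each iteration a random low-level heuristic
# proposes a move, accepted only if it improves the objective.

def minimize(f, x0, steps=2000, seed=1):
    rng = random.Random(seed)
    heuristics = [
        lambda x: x + rng.uniform(-1.0, 1.0),   # large exploratory move
        lambda x: x + rng.uniform(-0.1, 0.1),   # local refinement
        lambda x: x * rng.uniform(0.5, 1.5),    # multiplicative jump
    ]
    best_x, best_f = x0, f(x0)
    for _ in range(steps):
        cand = rng.choice(heuristics)(best_x)   # stochastic manager
        if f(cand) < best_f:
            best_x, best_f = cand, f(cand)
    return best_x, best_f

x, fx = minimize(lambda t: (t - 3.0) ** 2, 10.0)
```

A real hyper-heuristic would adapt the selection probabilities from heuristic performance and operate on populations, but the manager/worker split is the same.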

  18. Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory.

    Science.gov (United States)

    Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark

    2017-07-01

    Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Optimization of a Solid-State Electron Spin Qubit Using Gate Set Tomography (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-10-13

    and addressed when the qubit is used within a fault-tolerant quantum computation scheme. 1. Introduction One of the main challenges in the physical...supplied in the supplementary material. Additionally, we have supplied the data files constructed from the experiments, along with the Python notebook used to...New J. Phys. 18 (2016) 103018 doi:10.1088/1367-2630/18/10/103018 PAPER Optimization of a solid-state electron spin qubit using gate set tomography

  20. Optimization of the size and shape of the set-in nozzle for a PWR reactor pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Murtaza, Usman Tariq, E-mail: maniiut@yahoo.com; Javed Hyder, M., E-mail: hyder@pieas.edu.pk

    2015-04-01

    Highlights: • The size and shape of the set-in nozzle of the RPV have been optimized. • The optimized nozzle ensures a mass reduction of around 198 kg per nozzle. • The mass of the RPV should be minimized for better fracture toughness. - Abstract: The objective of this research work is to optimize the size and shape of the set-in nozzle for a typical reactor pressure vessel (RPV) of a 300 MW pressurized water reactor. The analysis was performed by optimizing the four design variables which control the size and shape of the nozzle. These variables are the inner radius of the nozzle, the thickness of the nozzle, the taper angle at the nozzle-cylinder intersection, and the point where the taper of the nozzle starts. It is concluded that the optimum design of the nozzle is the one that minimizes the two conflicting state variables, i.e., the stress intensity (Tresca yield criterion) and the mass of the RPV.

  1. Aerodynamic multi-objective integrated optimization based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Jiangtao HUANG

    2017-08-01

    Full Text Available Based on an improved multi-objective particle swarm optimization (MOPSO) algorithm with principal component analysis (PCA) methodology, an efficient high-dimension multi-objective optimization method is proposed, which aims to improve the convergence of the Pareto front in multi-objective optimization design. The mathematical efficiency, the physical reasonableness and the reliability of PCA in dealing with redundant objectives are verified by the typical DTLZ5 test function and a multi-objective correlation analysis of a supercritical airfoil, and the proposed method is integrated into an aircraft multi-disciplinary design (AMDEsign) platform, which contains aerodynamics, stealth and structure weight analysis and optimization modules. The proposed method is then used for the multi-point integrated aerodynamic optimization of a wide-body passenger aircraft, in which the redundant objectives identified by PCA are transformed into optimization constraints, and several design methods are compared. The design results illustrate that the strategy used in this paper is sufficient and that the multi-point design requirements of the passenger aircraft are reached. The visualization level of the non-dominated Pareto set is improved by effectively reducing the dimension without losing the primary features of the problem.
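A heavily simplified stand-in for the PCA-based redundancy screen described in this record: flag objective pairs whose sample correlation is near ±1 as candidates for removal. (The paper projects onto principal components; plain pairwise correlation is used here only to illustrate why redundant objectives can be dropped. The objective names and samples are made up.)

```python
import math

# Pairwise correlation screen over sampled objective values.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def redundant_pairs(objectives, threshold=0.95):
    """Objective pairs so correlated that one is a candidate for removal."""
    names = list(objectives)
    return [(a, b)
            for i, a in enumerate(names) for b in names[i + 1:]
            if abs(correlation(objectives[a], objectives[b])) >= threshold]

# samples of three objectives over four candidate designs (illustrative)
objs = {"drag": [1.0, 2.0, 3.0, 4.0],
        "weight": [2.0, 4.0, 6.0, 8.1],
        "stealth": [4.0, 1.0, 3.0, 2.0]}
```

Here "drag" and "weight" move in lockstep, so one of them could be demoted to a constraint, which is precisely the dimension reduction the abstract describes.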

  2. Optimal Switching Control of Burner Setting for a Compact Marine Boiler Design

    DEFF Research Database (Denmark)

    Solberg, Brian; Andersen, Palle; Maciejowski, Jan M.

    2010-01-01

    This paper discusses optimal control strategies for switching between different burner modes in a novel compact  marine boiler design. The ideal behaviour is defined in a performance index the minimisation of which defines an ideal trade-off between deviations in boiler pressure and water level...... approach is based on a generalisation of hysteresis control. The strategies are verified on a simulation model of the compact marine boiler for control of low/high burner load switches.  ...

  3. The FERMI (at) Elettra Technical Optimization Study: Preliminary Parameter Set and Initial Studies

    International Nuclear Information System (INIS)

    Byrd, John; Corlett, John; Doolittle, Larry; Fawley, William; Lidia, Steven; Penn, Gregory; Ratti, Alex; Staples, John; Wilcox, Russell; Wurtele, Jonathan; Zholents, Alexander

    2005-01-01

    The goal of the FERMI (at) Elettra Technical Optimization Study is to produce a machine design and layout consistent with user needs for radiation in the approximate ranges 100 nm to 40 nm, and 40 nm to 10 nm, using seeded FEL's. The Study will involve collaboration between Italian and US physicists and engineers, and will form the basis for the engineering design and the cost estimation

  4. SETTING OF TASK OF OPTIMIZATION OF THE ACTIVITY OF A MACHINE-BUILDING CLUSTER COMPANY

    Directory of Open Access Journals (Sweden)

    A. V. Romanenko

    2014-01-01

    Full Text Available This work develops methodological approaches to managing a machine-building enterprise on the basis of cost reduction, optimization of the order portfolio, and capacity utilization in operational management. The economic efficiency of such entities in the real sector of the economy is evaluated with respect to, among other factors, order lead times, which depend on how the production facility is organized and on maintaining fixed assets at a given level. The key components of an economic-mathematical model of production activity are formulated and an optimization criterion is defined. The proposed formula accumulates profit as a function of production capacity and technology, current direct variable costs, the amount of property tax, and the costs that arise from variance when production tasks are replaced within a single time period. The main component in optimizing the production activity of the enterprise under this criterion is the vector of direct variable costs. It depends on the number of product types in the current order portfolio, production schedules, the normative time for releasing a particular product, the available fund of time of efficient production positions, the current valuation of certain groups of technological operations, and the current priority of operations by the degree of readiness of internal orders. Modeling production activity on the basis of the proposed provisions would allow enterprises of a machine-building cluster that pursue active innovation to improve the efficient use of available production resources by optimizing current operations under high uncertainty in demand planning and in carrying out maintenance and routine repairs.

  5. WE-AB-209-07: Explicit and Convex Optimization of Plan Quality Metrics in Intensity-Modulated Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Engberg, L; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

    Purpose: To formulate objective functions of a multicriteria fluence map optimization model that correlate well with plan quality metrics, and to solve this multicriteria model by convex approximation. Methods: In this study, objectives of a multicriteria model are formulated to explicitly either minimize or maximize a dose-at-volume measure. Given the widespread agreement that dose-at-volume levels play important roles in plan quality assessment, these objectives correlate well with plan quality metrics. This is in contrast to the conventional objectives, which are to maximize clinical goal achievement by relating to deviations from given dose-at-volume thresholds: while balancing the new objectives means explicitly balancing dose-at-volume levels, balancing the conventional objectives effectively means balancing deviations. Constituted by the inherently non-convex dose-at-volume measure, the new objectives are approximated by the convex mean-tail-dose measure (CVaR measure), yielding a convex approximation of the multicriteria model. Results: Advantages of using the convex approximation are investigated through juxtaposition with the conventional objectives in a computational study of two patient cases. Clinical goals of each case respectively point out three ROI dose-at-volume measures to be considered for plan quality assessment. This is translated in the convex approximation into minimizing three mean-tail-dose measures. Evaluations of the three ROI dose-at-volume measures on Pareto optimal plans are used to represent plan quality of the Pareto sets. Besides providing increased accuracy in terms of feasibility of solutions, the convex approximation generates Pareto sets with overall improved plan quality. In one case, the Pareto set generated by the convex approximation entirely dominates that generated with the conventional objectives. Conclusion: The initial computational study indicates that the convex approximation outperforms the conventional objectives.
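
The mean-tail-dose (CVaR) surrogate the abstract describes can be illustrated in a few lines (a toy sketch with invented voxel doses, not the planning system's implementation):

```python
import numpy as np

def upper_mean_tail_dose(dose, tail_fraction):
    """Mean of the hottest `tail_fraction` of voxel doses (upper CVaR).
    A convex surrogate for a dose-at-volume metric: it always bounds
    the corresponding dose-at-volume level from above."""
    dose = np.sort(np.asarray(dose, dtype=float))[::-1]  # descending
    k = max(1, int(round(tail_fraction * dose.size)))
    return dose[:k].mean()

doses = np.array([10., 20., 30., 40., 50., 60., 70., 80., 90., 100.])
# Hottest 20% of 10 voxels = {100, 90}; mean-tail-dose = 95.
mtd = upper_mean_tail_dose(doses, 0.20)
# The dose-at-volume level (minimum dose to the hottest 20%) is 90 <= 95,
# which is why minimizing the convex mean-tail-dose also pushes the
# non-convex dose-at-volume measure down.
```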

  6. Multi-objective synthesis of work and heat exchange networks: Optimal balance between economic and environmental performance

    International Nuclear Information System (INIS)

    Onishi, Viviani C.; Ravagnani, Mauro A.S.S.; Jiménez, Laureano; Caballero, José A.

    2017-01-01

    Highlights: • New multi-objective optimization model for the simultaneous WHEN synthesis. • A multistage superstructure allows power and thermal integration of process streams. • Simultaneous minimization of environmental impacts and total annualized cost. • Alternative set of Pareto solutions is presented to support decision-makers. - Abstract: Sustainable and efficient energy use is crucial for lessening carbon dioxide emissions in industrial plants. This paper introduces a new multi-objective optimization model for the synthesis of work and heat exchange networks (WHENs), aiming to obtain the optimal balance between economic and environmental performance. The proposed multistage superstructure allows power and thermal integration of process gaseous streams, through the simultaneous minimization of total annualized cost (TAC) and environmental impacts (EI). The latter objective is determined by environmental indicators that follow the life cycle assessment (LCA) principles. The WHEN superstructure is optimized as a multi-objective mixed-integer nonlinear programming (moMINLP) model and solved with the GAMS software. Results show a decrease of ∼79% in the heat transfer area and ∼32% in the capital cost between the solutions found for single-objective optimizations. These results represent a reduction of ∼23.5% in the TAC, while EI increases by ∼99.2%. As these extreme solutions can be impractical for economic or environmental reasons, we present a set of alternative Pareto-optimal solutions to support decision-makers towards the implementation of more environment-friendly and cost-effective WHENs.

  7. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  8. Performance Improvement of the Core Protection Calculator System (CPCS) by Introducing Optimal Function Sets

    International Nuclear Information System (INIS)

    Won, Byung Hee; Kim, Kyung O; Kim, Jong Kyung; Kim, Soon Young

    2012-01-01

    The Core Protection Calculator System (CPCS) is an automated device adopted to monitor safety parameters such as the Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD) during normal operation. One function of the CPCS is to predict the axial power distributions using function sets in a cubic spline method. Another is to impose a penalty when the distribution estimated by the spline method disagrees with the data embedded in the CPCS (i.e., by more than 8%). In the conventional CPCS, a restricted collection of function sets is used to synthesize the axial power shape, which can occasionally produce disagreement between the synthesized and embedded data. For this reason, studies on improving power distribution synthesis in the CPCS have been conducted in many countries. In this study, many function sets (more than 18,000 types) differing from the conventional ones were evaluated for each power shape. Matlab code was used for calculating and arranging the numerous cases of function sets. Their synthesis performance was also evaluated through the error between the conventional data and the results calculated with the new function sets.

  9. Investigating multi-objective fluence and beam orientation IMRT optimization

    Science.gov (United States)

    Potrebko, Peter S.; Fiege, Jason; Biagioli, Matthew; Poleszczuk, Jan

    2017-07-01

    Radiation Oncology treatment planning requires compromises to be made between clinical objectives that are invariably in conflict. It would be beneficial to have a ‘bird’s-eye-view’ perspective of the full spectrum of treatment plans that represent the possible trade-offs between delivering the intended dose to the planning target volume (PTV) while optimally sparing the organs-at-risk (OARs). In this work, the authors demonstrate Pareto-aware radiotherapy evolutionary treatment optimization (PARETO), a multi-objective tool featuring such bird’s-eye-view functionality, which optimizes fluence patterns and beam angles for intensity-modulated radiation therapy (IMRT) treatment planning. The problem of IMRT treatment plan optimization is managed as a combined monolithic problem, where all beam fluence and angle parameters are treated equally during the optimization. To achieve this, PARETO is built around a powerful multi-objective evolutionary algorithm, called Ferret, which simultaneously optimizes multiple fitness functions that encode the attributes of the desired dose distribution for the PTV and OARs. The graphical interfaces within PARETO provide useful information such as: the convergence behavior during optimization, trade-off plots between the competing objectives, and a graphical representation of the optimal solution database allowing for the rapid exploration of treatment plan quality through the evaluation of dose-volume histograms and isodose distributions. PARETO was evaluated for two relatively complex clinical cases, a paranasal sinus and a pancreas case. The end result of each PARETO run was a database of optimal (non-dominated) treatment plans that demonstrated trade-offs between the OAR and PTV fitness functions, which were all equally good in the Pareto-optimal sense (where no one objective can be improved without worsening at least one other). Ferret was able to produce high quality solutions even though a large number of parameters

  10. The dosimetric impact of leaf interdigitation and leaf width on VMAT treatment planning in Pinnacle: comparing Pareto fronts

    International Nuclear Information System (INIS)

    Van Kesteren, Z; Janssen, T M; Damen, E; Van Vliet-Vroegindeweij, C

    2012-01-01

    To evaluate in an objective way the effect of leaf interdigitation and leaf width on volumetric modulated arc therapy plans in Pinnacle. Three multileaf collimators (MLCs) were modeled: two 10 mm leaf width MLCs, with and without interdigitating leafs, and a 5 mm leaf width MLC with interdigitating leafs. Three rectum patients and three prostate patients were used for the planning study. In order to compare treatment techniques in an objective way, a Pareto front comparison was carried out. 200 plans were generated in an automated way, per patient per MLC model, resulting in a total of 3600 plans. From these plans, Pareto-optimal plans were selected which were evaluated for various dosimetric variables. The capability of leaf interdigitation showed little dosimetric impact on the treatment plans, when comparing the 10 mm leaf width MLC with and without leaf interdigitation. When comparing the 10 mm leaf width MLC with the 5 mm leaf width MLC, both with interdigitating leafs, improvement in plan quality was observed. For both patient groups, the integral dose was reduced by 0.6 J for the thin MLC. For the prostate patients, the mean dose to the anal sphincter was reduced by 1.8 Gy and the conformity of the V(95%) was reduced by 0.02 using the thin MLC. The V(65%) of the rectum was reduced by 0.1% and the dose homogeneity with 1.5%. For rectum patients, the mean dose to the bowel was reduced by 1.4 Gy and the mean dose to the bladder with 0.8 Gy for the thin MLC. The conformity of the V(95%) was equivalent for the 10 and 5 mm leaf width MLCs for the rectum patients. We have objectively compared three types of MLCs in a planning study for prostate and rectum patients by analyzing Pareto-optimal plans which were generated in an automated way. Interdigitation of MLC leafs does not generate better plans using the SmartArc algorithm in Pinnacle. Changing the MLC leaf width from 10 to 5 mm generates better treatment plans although the clinical relevance remains to be proven.
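
The selection step in such a study, keeping only the non-dominated plans out of hundreds of auto-generated ones, is a generic filter; a sketch with invented toy objective values (all objectives minimized) could look like:

```python
import numpy as np

def pareto_optimal_mask(costs):
    """Boolean mask of non-dominated rows, all objectives minimized.
    A row is dominated if another row is <= in every objective and
    strictly < in at least one."""
    costs = np.asarray(costs, dtype=float)
    mask = np.ones(costs.shape[0], dtype=bool)
    for i in range(costs.shape[0]):
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

# Toy plan set: (OAR dose, target inhomogeneity) per plan.
plans = np.array([[1.0, 5.0],   # Pareto-optimal
                  [2.0, 3.0],   # Pareto-optimal
                  [3.0, 3.5],   # dominated by [2.0, 3.0]
                  [4.0, 1.0]])  # Pareto-optimal
mask = pareto_optimal_mask(plans)
# mask -> [True, True, False, True]
```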

  11. The dosimetric impact of leaf interdigitation and leaf width on VMAT treatment planning in Pinnacle: comparing Pareto fronts.

    Science.gov (United States)

    van Kesteren, Z; Janssen, T M; Damen, E; van Vliet-Vroegindeweij, C

    2012-05-21

    To evaluate in an objective way the effect of leaf interdigitation and leaf width on volumetric modulated arc therapy plans in Pinnacle. Three multileaf collimators (MLCs) were modeled: two 10 mm leaf width MLCs, with and without interdigitating leafs, and a 5 mm leaf width MLC with interdigitating leafs. Three rectum patients and three prostate patients were used for the planning study. In order to compare treatment techniques in an objective way, a Pareto front comparison was carried out. 200 plans were generated in an automated way, per patient per MLC model, resulting in a total of 3600 plans. From these plans, Pareto-optimal plans were selected which were evaluated for various dosimetric variables. The capability of leaf interdigitation showed little dosimetric impact on the treatment plans, when comparing the 10 mm leaf width MLC with and without leaf interdigitation. When comparing the 10 mm leaf width MLC with the 5 mm leaf width MLC, both with interdigitating leafs, improvement in plan quality was observed. For both patient groups, the integral dose was reduced by 0.6 J for the thin MLC. For the prostate patients, the mean dose to the anal sphincter was reduced by 1.8 Gy and the conformity of the V(95%) was reduced by 0.02 using the thin MLC. The V(65%) of the rectum was reduced by 0.1% and the dose homogeneity with 1.5%. For rectum patients, the mean dose to the bowel was reduced by 1.4 Gy and the mean dose to the bladder with 0.8 Gy for the thin MLC. The conformity of the V(95%) was equivalent for the 10 and 5 mm leaf width MLCs for the rectum patients. We have objectively compared three types of MLCs in a planning study for prostate and rectum patients by analyzing Pareto-optimal plans which were generated in an automated way. Interdigitation of MLC leafs does not generate better plans using the SmartArc algorithm in Pinnacle. Changing the MLC leaf width from 10 to 5 mm generates better treatment plans although the clinical relevance remains

  12. A multiobjective optimization framework for multicontaminant industrial water network design.

    Science.gov (United States)

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at the inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, and then an innovative strategy based on the global equivalent cost (GEC) in freshwater, which turns out to be more efficient for choosing a good network from a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
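
Over a discrete set of candidate networks, the lexicographic idea reduces to ranking candidates by prioritized objective tuples (a toy sketch with invented flow values, not the paper's MINLP procedure):

```python
# Lexicographic selection sketch: each candidate network is scored as
# (F1 freshwater flow, F2 regenerated-water flow, F3 interconnections);
# earlier objectives take absolute priority over later ones.
candidates = {
    "net_a": (120.0, 40.0, 9),
    "net_b": (100.0, 50.0, 12),
    "net_c": (100.0, 55.0, 14),
}

# Python tuples compare lexicographically, which is exactly the
# priority order we want: F1 first, then F2, then F3.
best = min(candidates, key=lambda name: candidates[name])
# F1 ties between net_b and net_c (100.0); F2 breaks the tie: net_b.
```

In the paper each "step" is itself a full MINLP solve with the previous objectives fixed at their optima; the tuple comparison above is only the discrete analogue of that priority ordering.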

  13. Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.

    Directory of Open Access Journals (Sweden)

    Sophie Bertrand

    Full Text Available How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.

  14. Generalized Pareto for Pattern-Oriented Random Walk Modelling of Organisms' Movements.

    Science.gov (United States)

    Bertrand, Sophie; Joo, Rocío; Fablet, Ronan

    2015-01-01

    How organisms move and disperse is crucial to understand how population dynamics relates to the spatial heterogeneity of the environment. Random walk (RW) models are typical tools to describe movement patterns. Whether Lévy or alternative RW better describes forager movements is keenly debated. We get around this issue using the Generalized Pareto Distribution (GPD). GPD includes as specific cases Normal, exponential and power law distributions, which underlie Brownian, Poisson-like and Lévy walks respectively. Whereas previous studies typically confronted a limited set of candidate models, GPD lets the most likely RW model emerge from the data. We illustrate the wide applicability of the method using GPS-tracked seabird foraging movements and fishing vessel movements tracked by Vessel Monitoring System (VMS), both collected in the Peruvian pelagic ecosystem. The two parameters from the fitted GPD, a scale and a shape parameter, provide a synoptic characterization of the observed movement in terms of characteristic scale and diffusive property. They reveal and quantify the variability, among species and individuals, of the spatial strategies selected by predators foraging on a common prey field. The GPD parameters constitute relevant metrics for (1) providing a synthetic and pattern-oriented description of movement, (2) using top predators as ecosystem indicators and (3) studying the variability of spatial behaviour among species or among individuals with different personalities.
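
A minimal illustration of fitting a GPD to step-length data (a method-of-moments sketch, not the study's likelihood-based fitting; valid only for shape ξ < 1/2):

```python
import numpy as np

def gpd_fit_moments(x):
    """Method-of-moments estimates (shape xi, scale sigma) for the
    Generalized Pareto Distribution with zero location, using
    E[X] = sigma/(1-xi) and Var[X] = sigma^2/((1-xi)^2 (1-2 xi));
    valid when xi < 1/2."""
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

# Exponential step lengths are the GPD boundary case xi = 0 (the
# "Poisson-like walk" of the abstract), with sigma equal to the mean.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=100_000)
xi_hat, sigma_hat = gpd_fit_moments(sample)
# xi_hat near 0 and sigma_hat near 2 recover the generating model.
```

A positive fitted ξ would instead indicate a heavy power-law tail (Lévy-like movement), which is how the GPD lets the RW model "emerge from the data".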

  15. Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)

    2010-06-15

    In the present paper, a plate and frame heat exchanger is considered. Multi-objective optimization using a genetic algorithm is developed in order to obtain the set of geometric design parameters that leads to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions are conflicting, and no single solution can satisfy both objectives simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between the objectives and can be selected by the user according to the application and the project's limits. The presented work accounts for numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of the different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via the genetic algorithm has been performed in MATLAB. (orig.)

  16. Use of GIS to identify optimal settings for cancer prevention and control in African American communities

    Science.gov (United States)

    Alcaraz, Kassandra I.; Kreuter, Matthew W.; Bryan, Rebecca P.

    2009-01-01

    Objective Rarely have Geographic Information Systems (GIS) been used to inform community-based outreach and intervention planning. This study sought to identify community settings most likely to reach individuals from geographically localized areas. Method An observational study conducted in an urban city in Missouri during 2003–2007 placed computerized breast cancer education kiosks in seven types of community settings: beauty salons, churches, health fairs, neighborhood health centers, Laundromats, public libraries and social service agencies. We used GIS to measure distance between kiosk users' (n=7,297) home ZIP codes and the location where they used the kiosk. Mean distances were compared across settings. Results Mean distance between individuals' home ZIP codes and the location where they used the kiosk varied significantly across settings; it was smallest among kiosk users at Laundromats (2.3 miles) and public libraries (2.8 miles) and greatest among kiosk users at health fairs (7.6 miles). Conclusion Some community settings are more likely than others to reach highly localized populations. A better understanding of how and where to reach specific populations can complement the progress already being made in identifying populations at increased disease risk. PMID:19422844

  17. Training a whole-book LSTM-based recognizer with an optimal training set

    Science.gov (United States)

    Soheili, Mohammad Reza; Yousefi, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2018-04-01

    Despite the recent progress in OCR technologies, whole-book recognition is still a challenging task, particularly for old and historical books, where unknown font faces and the low quality of paper and print add to the challenge. Pre-trained recognizers and generic methods therefore do not usually perform up to the required standard, and performance tends to degrade on larger-scale recognition tasks such as an entire book. Methods with reportedly low error rates turn out to require a great deal of manual correction. Generally, such methodologies do not make effective use of concepts such as redundancy in whole-book recognition. In this work, we propose to train Long Short-Term Memory (LSTM) networks on a minimal training set obtained from the book to be recognized. We show that by clustering all the sub-words in the book and using the sub-word cluster centers as the training set for the LSTM network, we can train models that outperform any identical network trained on randomly selected pages of the book. In our experiments, we also show that although the sub-word cluster centers are equivalent to about 8 pages of text for a 101-page book, an LSTM network trained on such a set performs competitively with an identical network trained on a set of 60 randomly selected pages of the book.
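
The training-set selection idea, cluster all sub-words and keep one representative per cluster, can be sketched with a tiny k-means (invented feature vectors stand in for sub-word images; the feature extraction and the LSTM training itself are omitted):

```python
import numpy as np

def kmeans_representatives(X, k, iters=50, seed=0):
    """Tiny k-means; returns the indices of the sample nearest each
    cluster center, i.e. a compact 'training set' of representative
    sub-words drawn from the book itself."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):        # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.unique(d.argmin(axis=0))     # distinct nearest samples

# Three well-separated blobs of fake sub-word feature vectors.
rng = np.random.default_rng(1)
blobs = [rng.normal(loc=c, scale=0.1, size=(50, 8)) for c in (0.0, 5.0, 10.0)]
X = np.vstack(blobs)
reps = kmeans_representatives(X, k=3)
# At most k representative indices stand in for all 150 sub-words.
```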

  18. Setting Optimal Bounds on Risk in Asset Allocation - a Convex Program

    Directory of Open Access Journals (Sweden)

    James E. Falk

    2002-10-01

    Full Text Available The 'Portfolio Selection Problem' is traditionally viewed as selecting a mix of investment opportunities that maximizes the expected return subject to a bound on risk. However, in reality, portfolios are made up of a few 'asset classes' that consist of similar opportunities. The asset classes are managed by individual 'sub-managers', under guidelines set by an overall portfolio manager. Once a benchmark (the 'strategic' allocation) has been set, an overall manager may choose to allow the sub-managers some latitude in which opportunities make up the classes. He may choose some overall bound on risk (as measured by the variance) and wish to set bounds that constrain the sub-managers. Mathematically, we show that the problem is equivalent to finding a hyper-rectangle of maximal volume within an ellipsoid. It is a convex program, albeit with a potentially large number of constraints. We suggest a cutting-plane algorithm to solve the problem and include computational results on a set of randomly generated problems as well as a real-world problem taken from the literature.
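
For the special case of an axis-aligned ellipsoid, the maximal-volume inscribed hyper-rectangle even has a closed form, which makes a handy sanity check for the general cutting-plane approach (a sketch; the function name is invented):

```python
import numpy as np

def max_box_in_axis_aligned_ellipsoid(semi_axes):
    """Half-widths of the maximum-volume axis-aligned box inscribed in
    the ellipsoid sum((x_i / a_i)^2) <= 1.  Maximizing the concave
    objective sum(log x_i) under the quadratic constraint gives the
    closed form x_i = a_i / sqrt(n)."""
    a = np.asarray(semi_axes, dtype=float)
    return a / np.sqrt(a.size)

half = max_box_in_axis_aligned_ellipsoid([3.0, 2.0])
volume = np.prod(2.0 * half)   # box side lengths are twice the half-widths
# n = 2: half-widths (3/sqrt(2), 2/sqrt(2)); box volume = 12.
```

The corner of the optimal box always lies on the ellipsoid boundary, since sum((half_i/a_i)^2) = n * (1/n) = 1.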

  19. An Optimized, Grid Independent, Narrow Band Data Structure for High Resolution Level Sets

    DEFF Research Database (Denmark)

    Nielsen, Michael Bang; Museth, Ken

    2004-01-01

    enforced by the convex boundaries of an underlying cartesian computational grid. Here we present a novel, very memory-efficient narrow band data structure, dubbed the Sparse Grid, that enables the representation of grid independent high resolution level sets. The key features of our new data structure are...

  20. Birds shed RNA-viruses according to the pareto principle.

    Science.gov (United States)

    Jankowski, Mark D; Williams, Christopher J; Fair, Jeanne M; Owen, Jennifer C

    2013-01-01

    A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.
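
The two summary statistics the study reports, the Gini coefficient and the share of virus shed by the most infectious 20% of birds, are straightforward to compute from a vector of per-bird viral counts (a sketch with invented loads, not the study's data):

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative values (0 = all birds shed
    equally, 1 = a single bird sheds everything)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return (2.0 * np.sum(i * x)) / (n * x.sum()) - (n + 1.0) / n

def top_share(x, frac=0.20):
    """Fraction of total virus shed by the top `frac` of birds."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    k = max(1, int(round(frac * x.size)))
    return x[:k].sum() / x.sum()

loads = np.array([1., 1., 1., 1., 1., 1., 1., 1., 4., 28.])  # 10 birds
g = gini(loads)          # high inequality in shedding
share = top_share(loads) # top 2 birds shed 32 of 40 units = 0.8
```

On such a skewed sample the top 20% of birds account for 80% of shedding, the Pareto-Principle pattern (20/80) the meta-analysis reports across studies.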

  1. Birds shed RNA-viruses according to the pareto principle.

    Directory of Open Access Journals (Sweden)

    Mark D Jankowski

    Full Text Available A major challenge in disease ecology is to understand the role of individual variation of infection load on disease transmission dynamics and how this influences the evolution of resistance or tolerance mechanisms. Such information will improve our capacity to understand, predict, and mitigate pathogen-associated disease in all organisms. In many host-pathogen systems, particularly macroparasites and sexually transmitted diseases, it has been found that approximately 20% of the population is responsible for approximately 80% of the transmission events. Although host contact rates can account for some of this pattern, pathogen transmission dynamics also depend upon host infectiousness, an area that has received relatively little attention. Therefore, we conducted a meta-analysis of pathogen shedding rates of 24 host (avian) - pathogen (RNA-virus) studies, including 17 bird species and five important zoonotic viruses. We determined that viral count data followed the Weibull distribution, the mean Gini coefficient (an index of inequality) was 0.687 (0.036 SEM), and that 22.0% (0.90 SEM) of the birds shed 80% of the virus across all studies, suggesting an adherence of viral shedding counts to the Pareto Principle. The relative position of a bird in a distribution of viral counts was affected by factors extrinsic to the host, such as exposure to corticosterone and to a lesser extent reduced food availability, but not to intrinsic host factors including age, sex, and migratory status. These data provide a quantitative view of heterogeneous virus shedding in birds that may be used to better parameterize epidemiological models and understand transmission dynamics.

  2. Seeking deep convective parameter updates that improve tropical Pacific climatology in CESM using Pareto fronts

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2016-12-01

    Despite increasing complexity and process representation in global climate models (GCMs), accurate climate simulation is limited by uncertainties in sub-grid scale model physics, where cloud processes and precipitation occur, and the interaction with large-scale dynamics. Identifying highly sensitive parameters and constraining them against observations is therefore a valuable step in narrowing uncertainty. However, changes in parameterizations often improve some variables or aspects of the simulation while degrading others. This analysis addresses means of improving GCM simulation of present-day tropical Pacific climate in the face of these tradeoffs. Focusing on the deep convection scheme in the fully coupled Community Earth System Model (CESM) version 1, four parameters were systematically sampled, and a metamodel or model emulator was used to reconstruct the parameter space of this perturbed physics ensemble. Using this metamodel, a Pareto front is constructed to visualize multiobjective tradeoffs in model performance, and results highlight the most important aspects of model physics as well as the most sensitive parameter ranges. For example, parameter tradeoffs arise in the tropical Pacific where precipitation cannot improve without sea surface temperature getting worse. Tropical precipitation sensitivity is found to be highly nonlinear for low values of entrainment in convecting plumes, though it is fairly insensitive at the high end of the plausible range. Increasing the adjustment timescale for convective closure causes the centroid of tropical precipitation to vary as much as two degrees latitude, highlighting the effect these physics can have on large-scale features of the hydrological cycle. The optimization procedure suggests that simultaneously increasing the maximum downdraft mass flux fraction and the adjustment timescale can yield improvements to surface temperature and column water vapor without degrading the simulation of precipitation. These

  3. Role of pharmacists in optimizing the use of anticancer drugs in the clinical setting

    Directory of Open Access Journals (Sweden)

    Ma CSJ

    2014-02-01

    Full Text Available Carolyn SJ Ma Department of Pharmacy Practice, Daniel K. Inouye College of Pharmacy, University of Hawaii at Hilo, Honolulu, HI, USA Abstract: Oncology pharmacists, also known as oncology pharmacy specialists (OPSs), have specialized knowledge of anticancer medications and their role in cancer. As essential members of the interdisciplinary team, OPSs optimize the benefits of drug therapy, help to minimize toxicities, and work with patients on supportive care issues. The OPS's expanded role as an expert in drug therapy extends to seven key elements of medication management: selection, procurement, storage, preparation/dispensing, prescribing/dosing/transcribing, administration, and monitoring/evaluation/education. As front-line caregivers in hospitals, ambulatory care, long-term care facilities, and community specialty pharmacies, OPSs also help patients in areas of supportive care, including nausea and vomiting, hematologic support, nutrition, and infection control. This role helps the patient in the recovery phase between treatment cycles and supports adherence to chemotherapy treatment schedules, which is essential for optimal treatment and outcome. Keywords: oncology pharmacist, oncology pharmacy specialist, medication management, chemotherapy

  4. Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis.

    Science.gov (United States)

    Perumpail, Brandon J; Li, Andrew A; Cholankeril, George; Kumari, Radhika; Ahmed, Aijaz

    2017-10-13

    The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.

  5. Optimizing the Nutritional Support of Adult Patients in the Setting of Cirrhosis

    Directory of Open Access Journals (Sweden)

    Brandon J. Perumpail

    2017-10-01

    Full Text Available Aim: The aim of this work is to develop a pragmatic approach in the assessment and management strategies of patients with cirrhosis in order to optimize the outcomes in this patient population. Method: A systematic review of literature was conducted through 8 July 2017 on the PubMed Database looking for key terms, such as malnutrition, nutrition, assessment, treatment, and cirrhosis. Articles and studies looking at associations between nutrition and cirrhosis were reviewed. Results: An assessment of malnutrition should be conducted in two stages: the first, to identify patients at risk for malnutrition based on the severity of liver disease, and the second, to perform a complete multidisciplinary nutritional evaluation of these patients. Optimal management of malnutrition should focus on meeting recommended daily goals for caloric intake and inclusion of various nutrients in the diet. The nutritional goals should be pursued by encouraging and increasing oral intake or using other measures, such as oral supplementation, enteral nutrition, or parenteral nutrition. Conclusions: Although these strategies to improve nutritional support have been well established, current literature on the topic is limited in scope. Further research should be implemented to test if this enhanced approach is effective.

  6. Exergoeconomic analysis and multi-objective optimization of an ejector refrigeration cycle powered by an internal combustion (HCCI) engine

    International Nuclear Information System (INIS)

    Sadeghi, Mohsen; Mahmoudi, S.M.S.; Khoshbakhti Saray, R.

    2015-01-01

    Highlights: • An ejector refrigeration system powered by an HCCI engine is proposed. • A new two-dimensional model is developed for the ejector. • Multi-objective optimization is performed for the proposed system. • The Pareto frontier is plotted for the multi-objective optimization. - Abstract: Ejector refrigeration systems powered by low-grade heat sources have been an attractive research subject for many researchers. In the present work the waste heat from the exhaust gases of an HCCI (homogeneous charge compression ignition) engine is utilized to drive the ejector refrigeration system. Considering the frictional effects on the ejector wall, a new two-dimensional model is developed for the ejector. Energy, exergy and exergoeconomic analyses were performed for the proposed system using the MATLAB software. In addition, considering the exergy efficiency and the product unit cost of the system as objective functions, a multi-objective optimization is performed for the system to find the optimum design variables, including the generator, condenser and evaporator temperatures. The product unit cost is minimized while the exergy efficiency is maximized using the genetic algorithm. The optimization results are obtained as a set of optimal points, and the Pareto frontier is plotted for the multi-objective optimization. The results of the optimization show that the ejector refrigeration cycle operates at the optimum state, based on exergy efficiency and product unit cost, when the generator, condenser and evaporator work at 94.54 °C, 33.44 °C and 0.03 °C, respectively.
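
    Once a Pareto frontier like the one above is available, a single operating point is often chosen by a post-optimization decision rule. A hedged sketch of one common rule, minimum normalized distance to the ideal point (all numbers are illustrative, not results of this study):

```python
import math

# Hedged sketch of a post-optimization decision rule: pick the Pareto point
# closest (after normalization) to the ideal point of lowest cost and highest
# exergy efficiency. All numbers are illustrative, not results of this study.

# (product unit cost, exergy efficiency %) for hypothetical Pareto points
frontier = [(8.0, 20.0), (9.0, 24.0), (11.0, 26.0), (14.0, 27.0)]
costs = [c for c, _ in frontier]
effs = [e for _, e in frontier]
ideal = (min(costs), max(effs))     # utopia point: cheapest and most efficient

def norm_dist(point):
    # normalize each objective to [0, 1] before measuring distance to ideal
    dc = (point[0] - ideal[0]) / (max(costs) - min(costs))
    de = (ideal[1] - point[1]) / (max(effs) - min(effs))
    return math.hypot(dc, de)

best = min(frontier, key=norm_dist)
print(best)  # → (9.0, 24.0)
```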

  7. The Role of eHealth in Optimizing Preventive Care in the Primary Care Setting.

    Science.gov (United States)

    Carey, Mariko; Noble, Natasha; Mansfield, Elise; Waller, Amy; Henskens, Frans; Sanson-Fisher, Rob

    2015-05-22

    Modifiable health risk behaviors such as smoking, overweight and obesity, risky alcohol consumption, physical inactivity, and poor nutrition contribute to a substantial proportion of the world's morbidity and mortality burden. General practitioners (GPs) play a key role in identifying and managing modifiable health risk behaviors. However, these are often underdetected and undermanaged in the primary care setting. We describe the potential of eHealth to help patients and GPs to overcome some of the barriers to managing health risk behaviors. In particular, we discuss (1) the role of eHealth in facilitating routine collection of patient-reported data on lifestyle risk factors, and (2) the role of eHealth in improving clinical management of identified risk factors through provision of tailored feedback, point-of-care reminders, tailored educational materials, and referral to online self-management programs. Strategies to harness the capacity of the eHealth medium, including the use of dynamic features and tailoring to help end users engage with, understand, and apply information need to be considered and maximized. Finally, the potential challenges in implementing eHealth solutions in the primary care setting are discussed. In conclusion, there is significant potential for innovative eHealth solutions to make a contribution to improving preventive care in the primary care setting. However, attention to issues such as data security and designing eHealth interfaces that maximize engagement from end users will be important to moving this field forward.

  8. Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport

    Directory of Open Access Journals (Sweden)

    David S. Haydon

    2018-02-01

    Full Text Available Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-offs that set-up parameters have on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required, and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure) at three levels (current, decreased, and increased) for three elite wheelchair rugby players. Each player performed two maximal-effort sprints from a stationary position in nine different set-ups, allowing for detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain due to the small sample size, meaningful differences aligning with previous research findings were identified and provide support for the use of this approach.

  9. Optimization of transversal phacoemulsification settings in peristaltic mode using a new transversal ultrasound machine.

    Science.gov (United States)

    Wright, Dannen D; Wright, Alex J; Boulter, Tyler D; Bernhisel, Ashlie A; Stagg, Brian C; Zaugg, Brian; Pettey, Jeff H; Ha, Larry; Ta, Brian T; Olson, Randall J

    2017-09-01

    To determine the optimum bottle height, vacuum, aspiration rate, and power settings in the peristaltic mode of the Whitestar Signature Pro machine with Ellips FX tip action (transversal). John A. Moran Eye Center Laboratories, University of Utah, Salt Lake City, Utah, USA. Experimental study. Porcine lens nuclei were hardened with formalin and cut into 2.0 mm cubes. Lens cubes were emulsified using transversal ultrasound; fragment removal time (efficiency) and fragment bounces off the tip (chatter) were measured to determine the optimum aspiration rate, bottle height, vacuum, and power settings in the peristaltic mode. Efficiency increased in a linear fashion with increasing bottle height and vacuum. The most efficient aspiration rate was 50 mL/min, with 60 mL/min statistically similar. Increasing power increased efficiency up to 90% power, with increased chatter at 100%. The most efficient values for the settings tested were a bottle height of 100 cm, vacuum of 600 mm Hg, an aspiration rate of 50 or 60 mL/min, and power at 90%. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  10. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Multi-objective particle swarm and genetic algorithm for the optimization of the LANSCE linac operation

    International Nuclear Information System (INIS)

    Pang, X.; Rybarcyk, L.J.

    2014-01-01

    Particle swarm optimization (PSO) and genetic algorithm (GA) are both nature-inspired, population-based optimization methods. Compared to the GA, whose long history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, the MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts but the MOPSO converges faster
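
    The core PSO update can be sketched in a few lines. This is a minimal single-objective version (the multi-objective MOPSO additionally maintains an external archive of non-dominated solutions); the parameter values are common textbook defaults, not those used for the LANSCE study:

```python
import random

# Minimal single-objective PSO sketch minimizing f(x) = x^2 in one dimension.
# Inertia w and acceleration coefficients c1, c2 are common textbook defaults,
# not values taken from the LANSCE study.
random.seed(0)

def f(x):
    return x * x

n, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                    # each particle's personal best position
gbest = min(pos, key=f)           # swarm's global best position

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
        if f(pos[i]) < f(gbest):
            gbest = pos[i]

print(gbest)  # converges toward the minimizer x = 0
```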

  12. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    International Nuclear Information System (INIS)

    Salazar, Daniel; Rocco, Claudio M.; Galvan, Blas J.

    2006-01-01

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature

  13. Multi-objective particle swarm and genetic algorithm for the optimization of the LANSCE linac operation

    Energy Technology Data Exchange (ETDEWEB)

    Pang, X., E-mail: xpang@lanl.gov; Rybarcyk, L.J.

    2014-03-21

    Particle swarm optimization (PSO) and genetic algorithm (GA) are both nature-inspired, population-based optimization methods. Compared to the GA, whose long history traces back to 1975, PSO is a relatively new heuristic search method, first proposed in 1995. Due to its fast convergence rate in the single-objective optimization domain, the PSO method has been extended to optimize multi-objective problems. In this paper, we introduce the PSO method and its multi-objective extension, the MOPSO, and apply it along with the MOGA (mainly the NSGA-II) to simulations of the LANSCE linac and operational set point optimizations. Our tests show that both methods can provide very similar Pareto fronts but the MOPSO converges faster.

  14. Optimization of constrained multiple-objective reliability problems using evolutionary algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es

    2006-09-15

    This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.

  15. Design for Sustainability of Industrial Symbiosis based on Emergy and Multi-objective Particle Swarm Optimization

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Liang, Hanwei; Dong, Liang

    2016-01-01

    approach for supporting decision-making in the design for sustainability with the implementation of industrial symbiosis in a chemical complex. Through incorporating the emergy theory, the model is formulated as a multi-objective approach that can optimize both the economic benefit and sustainable...... performance of the integrated industrial system. A set of emergy-based evaluation indices is designed. A multi-objective particle swarm algorithm is proposed to solve the model, and the decision-makers are allowed to choose the suitable solutions from the Pareto solutions. An illustrative case has been studied......

  16. Bi and tri-objective optimization in the deterministic network interdiction problem

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Emmanuel Ramirez-Marquez, Jose; Salazar A, Daniel E.

    2010-01-01

    Solution approaches to the deterministic network interdiction problem have previously been developed for optimizing a single figure-of-merit of the network configuration (i.e. flow that can be transmitted between a source node and a sink node for a fixed network design) under constraints related to limited amount of resources available to interdict network links. These approaches work under the assumption that: (1) nominal capacity of each link is completely reduced when interdicted and (2) there is a single criterion to optimize. This paper presents a newly developed evolutionary algorithm that for the first time allows solving multi-objective optimization models for the design of network interdiction strategies that take into account a variety of figures-of-merit. The algorithm provides an approximation to the optimal Pareto frontier using: (a) techniques in Monte Carlo simulation to generate potential network interdiction strategies, (b) graph theory to analyze strategies' maximum source-sink flow and (c) an evolutionary search that is driven by the probability that a link will belong to the optimal Pareto set. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate and validate the approach.
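
    Step (b) of the algorithm, evaluating a candidate interdiction strategy by the maximum source-sink flow it leaves, can be sketched with a standard Edmonds-Karp max-flow routine. The toy 4-node network and the interdicted link below are illustrative, not from the paper's examples:

```python
from collections import deque

# Sketch of step (b): score an interdiction strategy by the maximum
# source-sink flow it leaves (Edmonds-Karp max-flow on a capacity matrix).
# The 4-node network and the interdicted link are illustrative only.

def max_flow(cap, s, t):
    n, flow = len(cap), 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:              # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        aug, v = float('inf'), t                  # bottleneck along the path
        while v != s:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t                                     # update the residual graph
        while v != s:
            cap[parent[v]][v] -= aug
            cap[v][parent[v]] += aug
            v = parent[v]
        flow += aug

base = [[0, 3, 2, 0],
        [0, 0, 1, 2],
        [0, 0, 0, 3],
        [0, 0, 0, 0]]
interdicted = [row[:] for row in base]
interdicted[1][3] = 0                             # interdict link 1 -> 3

before = max_flow([row[:] for row in base], 0, 3)
after = max_flow([row[:] for row in interdicted], 0, 3)
print(before, after)  # → 5 3
```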

  17. Multiobjective anatomy-based dose optimization for HDR-brachytherapy with constraint free deterministic algorithms

    International Nuclear Information System (INIS)

    Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.

    2002-01-01

    In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set, obtained by single-objective optimization algorithms through repeated optimization with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint-free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are globally optimal. If OARs are included, the solutions found are also globally optimal, although, as suggested, local minima may be present. (author)
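
    The aggregate-function approach described here, repeated single-objective minimization of a weighted sum with different importance vectors, can be illustrated on toy convex objectives (simple quadratics standing in for the dose-variance objectives; all values are invented):

```python
# Sketch of the aggregate-function approach: minimizing a weighted sum of two
# convex objectives for several importance vectors traces points on the Pareto
# trade-off set. The quadratics below stand in for the dose-variance
# objectives; they are illustrative only.

def weighted_argmin(w1, w2, grid):
    # toy convex objectives with minima at x = 0 and x = 4
    return min(grid, key=lambda x: w1 * x ** 2 + w2 * (x - 4) ** 2)

grid = [i / 100 for i in range(0, 401)]
trade_off = [weighted_argmin(w, 1 - w, grid) for w in (0.1, 0.5, 0.9)]
print(trade_off)  # → [3.6, 2.0, 0.4]
```

    Because both toy objectives are convex, each weighted minimization lands on the Pareto front, mirroring the global-optimality argument made for the PTV-based objectives.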

  18. An energy-saving set-point optimizer with a sliding mode controller for automotive air-conditioning/refrigeration systems

    International Nuclear Information System (INIS)

    Huang, Yanjun; Khajepour, Amir; Ding, Haitao; Bagheri, Farshid; Bahrami, Majid

    2017-01-01

    Highlights: • A novel two-layer energy-saving controller for automotive A/C-R systems is developed. • A set-point optimizer in the outer loop is designed based on the steady-state model. • A sliding mode controller in the inner loop is built. • Extensive experimental studies show that about 9% of energy can be saved by this controller. - Abstract: This paper presents an energy-saving controller for automotive air-conditioning/refrigeration (A/C-R) systems. With their extensive application in homes, industry, and vehicles, A/C-R systems consume considerable amounts of energy. The proposed controller consists of two different time-scale layers. The outer, slow time-scale layer, called a set-point optimizer, is used to find the set points related to energy efficiency by using the steady-state model, whereas the inner, fast time-scale layer is used to track the obtained set points. In the inner loop, thanks to its robustness, a sliding mode controller (SMC) is utilized to track the set point of the cargo temperature. The currently used on/off controller is presented and employed as a basis for comparison to the proposed controller. More importantly, real experimental results under several disturbed scenarios are analysed to demonstrate how the proposed controller can improve performance while reducing energy consumption by 9% compared with the on/off controller. The controller is suitable for any type of A/C-R system, even though it is applied to an automotive A/C-R system in this paper.

  19. Community-based interventions to optimize early childhood development in low resource settings.

    Science.gov (United States)

    Maulik, P K; Darmstadt, G L

    2009-08-01

    Interventions targeting the early childhood period (0 to 3 years) help to improve neuro-cognitive functioning throughout life. Some of the more low cost, low resource-intensive community practices for this age-group are play, reading, music and tactile stimulation. This research was conducted to summarize the evidence regarding the effectiveness of such strategies on child development, with particular focus on techniques that may be transferable to developing countries and to children at risk of developing secondary impairments. PubMed, PsycInfo, Embase, ERIC, CINAHL and Cochrane were searched for studies involving the above strategies for early intervention. Reference lists of these studies were scanned and other studies were incorporated based on snow-balling. Overall, 76 articles corresponding to 53 studies, 24 of which were randomized controlled trials, were identified. Sixteen of those studies were from low- and middle-income countries. Play and reading were the two commonest interventions and showed positive impact on intellectual development of the child. Music was evaluated primarily in intensive care settings. Kangaroo Mother Care, and to a lesser extent massage, also showed beneficial effects. Improvement in parent-child interaction was common to all the interventions. Play and reading were effective interventions for early childhood interventions in low- and middle-income countries. More research is needed to judge the effectiveness of music. Kangaroo Mother Care is effective for low birth weight babies in resource poor settings, but further research is needed in community settings. Massage is useful, but needs more rigorous research prior to being advocated for community-level interventions.

  20. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Science.gov (United States)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals, which often requires the combination of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
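
    The multi-crystal selection step that BLEND automates amounts to clustering crystals by the similarity of their unit-cell parameters before merging. A hedged sketch of that idea (the cell values, distance cutoff, and greedy single-linkage rule below are illustrative, not BLEND's actual clustering criterion):

```python
import math

# Sketch of the multi-crystal selection idea: group crystals whose unit-cell
# parameters are similar and merge only data sets within one group. The cell
# values, distance cutoff, and greedy single-linkage rule are illustrative,
# not BLEND's actual clustering criterion.

cells = {                     # crystal id -> (a, b, c) cell edges, angstroms
    "xtal1": (78.1, 78.1, 37.0),
    "xtal2": (78.3, 78.2, 37.1),
    "xtal3": (81.0, 80.9, 38.5),
}

cutoff = 1.0                  # join a cluster if within 1 A of any member
clusters = []
for name, cell in cells.items():
    for cl in clusters:
        if any(math.dist(cell, cells[m]) < cutoff for m in cl):
            cl.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # → [['xtal1', 'xtal2'], ['xtal3']]
```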

  1. Decision Optimization of Machine Sets Taking Into Consideration Logical Tree Minimization of Design Guidelines

    Science.gov (United States)

    Deptuła, A.; Partyka, M. A.

    2014-08-01

    The method of minimization of complex partial multi-valued logical functions determines the degree of importance of construction and exploitation parameters, which play the role of logical decision variables. Such logical functions are considered in the modelling of machine sets. For multi-valued logical functions with weighting products, it is possible to use a modified Quine-McCluskey algorithm for multi-valued function minimization. Taking weighting coefficients into account in the logical tree minimization reflects the physical model of the analysed object much better

  2. Crop Evaluation System Optimization: Attribute Weights Determination Based on Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Ruihong Wang

    2017-01-01

    Full Text Available The present study is mainly a continuation of our previous study, which concerned the development of a crop evaluation system based on grey relational analysis. In that system, the determination of attribute weights directly affects the evaluation result. Attribute weights are usually ascertained from the decision-maker's experience and knowledge. In this paper, we utilize rough sets theory to calculate attribute significance and then combine it with the weight given by the decision-maker. This method comprehensively considers both subjective experience and the objective situation; thus it can acquire more reasonable results. Finally, based on this method, we improve the system using ASP.NET technology.
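
    The rough-set significance measure described here is the drop in the dependency degree γ(C, D) when an attribute is removed from the condition set C. A minimal sketch on a hypothetical decision table (the attribute names and values are invented for illustration, not from the crop system):

```python
# Hedged sketch of rough-set attribute significance: the significance of a
# condition attribute is the drop in the dependency degree gamma(C, D) when
# that attribute is removed from C. The tiny decision table is invented.

def partition(rows, attrs):
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def gamma(rows, cond, dec):
    # fraction of objects whose condition class fits inside one decision class
    dec_blocks = partition(rows, dec)
    pos = sum(len(b) for b in partition(rows, cond)
              if any(b <= d for d in dec_blocks))
    return pos / len(rows)

# columns: 0 = soil type, 1 = rainfall, 2 = yield class (decision attribute)
table = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
C, D = [0, 1], [2]

sigs = {a: gamma(table, C, D) - gamma(table, [x for x in C if x != a], D)
        for a in C}
print(sigs)  # → {0: 0.5, 1: 0.5}
```

    The resulting significances can then be blended with the decision-maker's subjective weights, as the abstract describes.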

  3. OPTIMIZATION-BASED APPROACH TO TILING OF FINITE AREAS WITH ARBITRARY SETS OF WANG TILES

    Directory of Open Access Journals (Sweden)

    Marek Tyburec

    2017-11-01

    Full Text Available Wang tiles proved to be a convenient tool for the design of aperiodic tilings in computer graphics and in materials engineering. While there are several algorithms for generation of finite-sized tilings, they exploit the specific structure of individual tile sets, which prevents their general usage. In this contribution, we reformulate the NP-complete tiling generation problem as a binary linear program, together with its linear and semidefinite relaxations suitable for the branch and bound method. Finally, we assess the performance of the established formulations on generations of several aperiodic tilings reported in the literature, and conclude that the linear relaxation is better suited for the problem.
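
    The finite tiling generation problem stated above can be made concrete with a brute-force check: fill a small grid with Wang tiles so that adjacent tiles agree on their shared edge colors; the paper's binary linear program encodes exactly these matching constraints. The tiny tile set below is illustrative, not one of the aperiodic sets from the literature:

```python
from itertools import product

# Brute-force sketch of the finite Wang tiling problem: fill a grid so that
# adjacent tiles agree on shared edge colors. Tiles are (north, east, south,
# west) color tuples; this 3-tile set is illustrative only.

tiles = [(0, 1, 0, 1), (0, 0, 0, 0), (1, 0, 1, 0)]
R, C = 2, 2

def valid(grid):
    for r, c in product(range(R), range(C)):
        t = tiles[grid[r * C + c]]
        if c + 1 < C and t[1] != tiles[grid[r * C + c + 1]][3]:
            return False              # east edge must match neighbor's west
        if r + 1 < R and t[2] != tiles[grid[(r + 1) * C + c]][0]:
            return False              # south edge must match neighbor's north
    return True

solutions = [g for g in product(range(len(tiles)), repeat=R * C) if valid(g)]
print(len(solutions))  # 7 valid 2x2 tilings of this tile set
```

    Enumeration is exponential in the grid size, which is why the paper resorts to binary linear programming with linear and semidefinite relaxations.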

  4. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements, along with its modernization based on expanding the main structures of the core algorithm in a parameter, is suggested. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of that problem is as follows: starting from the end point of the phase trajectory, a path is sought that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is zero; otherwise it is greater than zero. For this special task the Bellman equation is considered. A support approximation is selected, and the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation gives nothing, because the Bellman function and its expansion coefficients are zero. A special trick is therefore used: an additional variable characterizing the degree of deviation of the system from the initial state is introduced, yielding an expanded original chain. A nonzero initial condition is selected for the new variable, so that the obtained trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation. As a result of these procedures, an algorithm of successive improvements is designed. Conditions for relaxation of the algorithm and the necessary conditions of optimality are also obtained.

  5. Quantum dot nanoparticle for optimization of breast cancer diagnostics and therapy in a clinical setting.

    Science.gov (United States)

    Radenkovic, Dina; Kobayashi, Hisataka; Remsey-Semmelweis, Ernö; Seifalian, Alexander M

    2016-08-01

    Breast cancer is the most common cancer in the world. Sentinel lymph node (SLN) biopsy is used for staging of axillary lymph nodes. Organic dyes and radiocolloid are currently used for SLN mapping, but expose patients to ionizing radiation, are unstable during surgery and cause local tissue damage. Quantum dots (QD) could be used for SLN mapping without the need for biopsy. Surgical resection of the primary tumor is the optimal treatment for early-diagnosed breast cancer, but due to difficulties in defining tumor margins, cancer cells often remain leading to reoccurrences. Functionalized QD could be used for image-guided tumor resection to allow visualization of cancer cells. Near Infrared QD are photostable and have improved deep tissue penetration. Slow elimination of QD raises concerns of potential accumulation. Nevertheless, promising findings with cadmium-free QD in recent in vivo studies and first in-human trial suggest huge potential for cancer diagnostic and therapy. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Multi-objective optimization of cooling air distributions of grate cooler with different clinker particles diameters and air chambers by genetic algorithm

    International Nuclear Information System (INIS)

    Shao, Wei; Cui, Zheng; Cheng, Lin

    2017-01-01

    Highlights: • A multi-objective optimization model of air distributions of a grate cooler by genetic algorithm is proposed. • Optimal air distributions for different conditions are obtained and validated by measurements. • The most economic average diameter of clinker particles is 0.02 m. • The most economic number of air chambers is 9. - Abstract: This paper proposes a multi-objective optimization model of the cooling air distributions of a grate cooler in a cement plant, based on convective heat transfer principles and entropy generation minimization analysis. The heat transfer and flow models of the clinker cooling process are presented first. The modified entropy generation numbers caused by heat transfer and viscous dissipation are then treated as objective functions and optimized simultaneously by a genetic algorithm. The design variables are the superficial velocities of the air chambers and the thicknesses of the clinker layer on the different grate plates. The model is verified by a set of Pareto optimal solutions and the scattered distributions of the design variables. A sensitivity analysis of the average diameter of the clinker particles and the number of air chambers is carried out based on the optimization model. The optimal cooling air distributions are compared in terms of heat recovered, energy consumption of the cooling fans and heat efficiency of the grate cooler, all of them selected from the Pareto optimal solutions by minimizing the energy consumption of the cooling fans. The results show that the most effective and economic average diameter of clinker particles is 0.02 m and the most economic number of air chambers is 9.

  7. Optimal set of agri-environmental indicators for the agricultural sector of Czech Republic

    Directory of Open Access Journals (Sweden)

    Jiří Hřebíček

    2013-01-01

    Full Text Available Current trends in the evaluation of agri-environmental indicators (i.e., the measurement of environmental performance and farm reporting) are discussed in this paper, focusing on the agriculture sector. From the perspective of agricultural policy, there are two broad decisions to make: which indicators to recommend and promote to farmers, and which indicators to collect to assist in agricultural policy-making. In the first part of the paper we introduce several general approaches to indicators collected to assist policy-making (European Union, Organization for Economic Cooperation and Development, and Food and Agriculture Organization of the United Nations), given the differences in the decision-making problems faced by these sets of decision makers. In the second part of the paper we propose a set of indicators to recommend and promote to farmers in the Czech Republic.

  8. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    International Nuclear Information System (INIS)

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  9. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    Energy Technology Data Exchange (ETDEWEB)

    Foadi, James [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Aller, Pierre [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Alguel, Yilmaz; Cameron, Alex [Imperial College, London SW7 2AZ (United Kingdom); Axford, Danny; Owen, Robin L. [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Armour, Wes [Oxford e-Research Centre (OeRC), Keble Road, Oxford OX1 3QG (United Kingdom); Waterman, David G. [Research Complex at Harwell (RCaH), Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0FA (United Kingdom); Iwata, So [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom); Imperial College, London SW7 2AZ (United Kingdom); Evans, Gwyndaf, E-mail: gwyndaf.evans@diamond.ac.uk [Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire OX11 0DE (United Kingdom)

    2013-08-01

    A systematic approach to the scaling and merging of data from multiple crystals in macromolecular crystallography is introduced and explained. The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  10. Biobjective Optimization of Vibration Performance of Steel-Spring Floating Slab Tracks by Four-Pole Parameter Method Coupled with Ant Colony Optimization

    Directory of Open Access Journals (Sweden)

    Hao Jin

    2015-01-01

    Full Text Available Steel-spring floating slab tracks are one of the most effective methods of reducing vibrations from underground railways and have drawn increasing attention in the scientific community. In this paper, the steel-spring floating slab track located in the Track Vibration Abatement and Control Laboratory was modeled with the four-pole parameter method. The influences of the fastener damping ratio, the fastener stiffness, the steel-spring damping ratio, and the steel-spring stiffness on the rail displacement and the foundation acceleration were investigated. Results show that the rail displacement and the foundation acceleration decrease as the fastener stiffness or the steel-spring damping ratio increases, whereas they show the opposite trend with respect to the fastener damping ratio and the steel-spring stiffness. In order to optimize the rail displacement and the foundation acceleration as affected by the fastener damping ratio and the steel-spring stiffness at the same time, a multiobjective ant colony optimization (ACO) was employed. Eventually, the Pareto optimal frontier of the rail displacement and the foundation acceleration was derived. Furthermore, desirable values of the fastener damping ratio and the steel-spring stiffness can be obtained according to the corresponding Pareto optimal solution set.
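For any finite set of candidate designs, the Pareto optimal frontier referred to in the abstract is simply the non-dominated subset. The following minimal Python sketch is illustrative only (it is not the paper's ACO implementation) and assumes both objectives, rail displacement and foundation acceleration, are to be minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of (f1, f2) tuples,
    where both objectives are minimized."""
    front = []
    for p in points:
        # p is dominated if some other q is no worse in both objectives
        # and differs from p (hence strictly better in at least one)
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical (displacement, acceleration) pairs from candidate designs
candidates = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0)]
print(pareto_front(candidates))  # (3.0, 3.5) is dominated by (2.0, 3.0)
```

Real multi-objective solvers such as ACO or NSGA-II build this set incrementally, but the dominance test itself is exactly the one above.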

  11. Universal cervical length screening for singleton pregnancies with no history of preterm delivery, or the inverse of the Pareto principle.

    Science.gov (United States)

    Rozenberg, P

    2017-06-01

    Ultrasound measurement of cervical length in the general population enables the identification of women at risk of spontaneous preterm delivery. Vaginal progesterone is effective in reducing the risk of preterm delivery in this population, and this screening combined with vaginal progesterone treatment is cost-effective. Universal screening of cervical length can therefore be considered justified. Nonetheless, this screening will not appreciably reduce the prevalence of preterm birth: in France or the UK, where the preterm delivery rate is around 7.4%, this strategy would reduce it only to 7.0%. This small benefit must be set against the considerable effort required in terms of screening ultrasound scans. Universal ultrasound screening of cervical length is thus the inverse of Pareto's principle: a small benefit against a considerable effort. © 2016 Royal College of Obstetricians and Gynaecologists.

  12. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    Science.gov (United States)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine fueled with jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The non-dominated sorting genetic algorithm-II (NSGA-II) is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions to predict the combustion, performance and emission parameters and check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.

  13. Improved Shape Parameter Estimation in Pareto Distributed Clutter with Neural Networks

    Directory of Open Access Journals (Sweden)

    José Raúl Machado-Fernández

    2016-12-01

    Full Text Available The main problem faced by naval radars is the elimination of clutter, a distortion signal that appears mixed with target reflections. Recently, the Pareto distribution has been related to sea clutter measurements, suggesting that it may provide a better fit than other traditional distributions. The authors propose a new method for estimating the Pareto shape parameter based on artificial neural networks. The solution achieves a precise estimate of the parameter at low computational cost, outperforming the classic method that uses Maximum Likelihood Estimates (MLE). The presented scheme contributes to the development of the NATE detector for Pareto clutter, which uses knowledge of the clutter statistics to improve the stability of the detection, among other applications.
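The classic MLE benchmark mentioned in the abstract has a closed form when the Pareto scale (minimum) parameter is known. The sketch below is an illustrative baseline, not the authors' neural-network estimator; the simulation parameters are made up for the example:

```python
import math
import random

def pareto_shape_mle(samples, scale):
    """Closed-form maximum-likelihood estimate of the Pareto shape
    parameter alpha, assuming the scale (minimum) parameter is known:
    alpha_hat = n / sum(log(x_i / scale))."""
    return len(samples) / sum(math.log(x / scale) for x in samples)

# Draw Pareto(alpha=3, scale=1) samples via inverse-CDF sampling
random.seed(0)
alpha_true, scale = 3.0, 1.0
data = [scale / (1.0 - random.random()) ** (1.0 / alpha_true)
        for _ in range(100_000)]
print(pareto_shape_mle(data, scale))  # close to alpha_true = 3.0
```

The proposed neural-network scheme targets the same quantity but trades this closed form for a learned mapping from clutter statistics to the shape parameter.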

  14. The Burr X Pareto Distribution: Properties, Applications and VaR Estimation

    Directory of Open Access Journals (Sweden)

    Mustafa Ç. Korkmaz

    2017-12-01

    Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods of the model parameters are performed. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model could be considered as an alternative VaR model against the generalized Pareto model for financial institutions.

  15. Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder

    1992-01-01

    As a generalization of the common assumption of an exponential distribution of the exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using both the method of moments and probability-weighted moments. Maintaining the generalized Pareto distribution as the parent exceedance distribution, the T-year event is estimated assuming the exceedances to be exponentially distributed. For moderately long-tailed exceedance distributions and small to moderate sample sizes it is found, by comparing mean square errors of the T-year event estimators, that the exponential distribution is preferable to the correct generalized Pareto distribution despite the introduced model error and despite a possible rejection of the exponential hypothesis by a test of significance. For moderately short-tailed exceedance...
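The T-year event under the exponential exceedance model described above has a simple closed form. This sketch is illustrative of that standard partial-duration-series formula, not code from the paper; parameter names and the example values are assumptions:

```python
import math

def t_year_event(threshold, mean_exceedance, rate_per_year, T):
    """T-year event for a partial duration series whose exceedance
    magnitudes over `threshold` are exponential with the given mean,
    occurring `rate_per_year` times per year on average:
    x_T = threshold + mean * ln(rate * T)."""
    return threshold + mean_exceedance * math.log(rate_per_year * T)

# Hypothetical series: threshold 50, mean exceedance 10, 2 events/year
print(round(t_year_event(50.0, 10.0, 2.0, 100.0), 2))  # 100-year event
```

Replacing the exponential by a generalized Pareto exceedance model changes the logarithm to a power-law term, which is exactly the model-choice trade-off the abstract evaluates by mean square error.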

  16. Myosin-II sets the optimal response time scale of chemotactic amoeba

    Science.gov (United States)

    Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Bodenschatz, Eberhard; Beta, Carsten

    2014-03-01

    The response dynamics of the actin cytoskeleton to external chemical stimuli plays a fundamental role in numerous cellular functions. One of the key players that governs the dynamics of the actin network is the motor protein myosin-II. Here we investigate the role of myosin-II in the response of the actin system to external stimuli. We used a microfluidic device in combination with a photoactivatable chemoattractant to apply stimuli to individual cells with high temporal resolution. We directly compare the actin dynamics in Dictyostelium discoideum wild-type (WT) cells to a knockout mutant that is deficient in myosin-II (MNL). Similar to the WT, a small population of MNL cells showed self-sustained oscillations even in the absence of external stimuli. The actin response of MNL cells to a short pulse of chemoattractant resembles that of the WT during the first 15 sec but is significantly delayed afterward. The amplitude of the dominant peak in the power spectrum of the response time series of MNL cells to periodic stimuli with varying period showed a clear resonance at a forcing period of 36 sec, significantly delayed compared to the resonance at 20 sec found for the WT. This shift indicates an important role of myosin-II in setting the response time scale of motile amoeba.

  17. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  18. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  19. Multi-objective reliability optimization of series-parallel systems with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Safari, Jalal

    2012-01-01

    This paper proposes a variant of the Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve a novel mathematical model for multi-objective redundancy allocation problems (MORAP). Most research on the redundancy allocation problem (RAP) has focused on single-objective optimization, with only limited work addressing multi-objective optimization. Moreover, all mathematical multi-objective models of the general RAP assume that the type of redundancy strategy for each subsystem is predetermined and known a priori. Active redundancy has traditionally received greater attention; in practice, however, both active and cold-standby redundancies may be used within a particular system design, so the choice of redundancy strategy becomes an additional decision variable. The proposed model and solution method therefore select the best redundancy strategy, type of components, and level of redundancy for each subsystem so as to maximize system reliability and minimize total system cost under system-level constraints. This problem belongs to the NP-hard class. The paper presents a second-generation Multiple-Objective Evolutionary Algorithm (MOEA), namely NSGA-II, to find the best solutions for the given problem. The proposed algorithm demonstrates the ability to identify a set of optimal solutions (the Pareto front), which provides the Decision Maker (DM) with a complete picture of the optimal solution space. After finding the Pareto front, a procedure is used to select the best solution from it. Finally, the advantages of the presented multi-objective model and of the proposed algorithm are illustrated by solving test problems taken from the literature, and the robustness of the proposed NSGA-II is discussed.
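Selecting a single solution from a Pareto front, as the abstract describes, is often done by minimizing the normalized distance to the ideal point. The Python sketch below shows this generic heuristic; it is one common choice, not necessarily the paper's procedure, and the (reliability, cost) values are invented for the example:

```python
def best_compromise(front):
    """Pick one solution from a Pareto front of (reliability, cost)
    pairs by minimizing the normalized Euclidean distance to the ideal
    point (max reliability, min cost)."""
    rel = [r for r, c in front]
    cost = [c for r, c in front]
    r_max, r_min = max(rel), min(rel)
    c_max, c_min = max(cost), min(cost)

    def dist(point):
        r, c = point
        # Normalize each objective to [0, 1] before measuring distance
        dr = (r_max - r) / (r_max - r_min) if r_max > r_min else 0.0
        dc = (c - c_min) / (c_max - c_min) if c_max > c_min else 0.0
        return (dr ** 2 + dc ** 2) ** 0.5

    return min(front, key=dist)

front = [(0.90, 100.0), (0.95, 160.0), (0.99, 400.0)]
print(best_compromise(front))  # the balanced design (0.95, 160.0)
```

Other selection rules (weighted sums, reference points chosen by the decision maker) plug into the same spot: they all map the front to a single preferred design.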

  20. Optimization of metabolite basis sets prior to quantitation in magnetic resonance spectroscopy: an approach based on quantum mechanics

    International Nuclear Information System (INIS)

    Lazariev, A; Graveron-Demilly, D; Allouche, A-R; Aubert-Frécon, M; Fauvelle, F; Piotto, M; Elbayed, K; Namer, I-J; Van Ormondt, D

    2011-01-01

    High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role in diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires automatic quantitation of HRMAS ¹H signals. However, for several metabolites, the values of the chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular its pH. This hampers the accurate estimation of metabolite concentrations, mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are no longer correct. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.