WorldWideScience

Sample records for selection optimization program

  1. Optimal selection for shielding materials by fuzzy linear programming

    International Nuclear Information System (INIS)

    Kanai, Y.; Miura, N.; Sugasawa, S.

    1996-01-01

    An application of fuzzy linear programming methods to the optimization of a radiation shield is presented. The main purpose of the present study is the choice of materials and the search for the mixture-component ratio as the first stage of a methodology for optimum shielding design according to the individual requirements of a nuclear reactor, reprocessing facility, shipping cask for spent fuel, etc. The characteristic values for the shield optimization may include cost, available space, weight, and shielding qualities such as the activation rate and the total dose rate for neutrons and gamma rays (including secondary gamma rays). This new approach can reduce the huge number of combinatorial calculations required by conventional two-valued logic approaches to a single representative shielding calculation, using group-wise optimization parameters determined in advance. Using the fuzzy linear programming method, possibilities for reducing radiation effects attainable in optimal compositions of hydrated, lead-containing, and boron-containing materials are investigated.
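
    The max-min (Zimmermann) formulation behind fuzzy linear programming can be sketched in a few lines: maximize the smallest constraint-satisfaction membership over the design variable. The materials, cost and dose models, and membership breakpoints below are invented for illustration; the paper's actual shield data are not reproduced.

```python
# Zimmermann-style max-min fuzzy optimization for a two-material shield mix:
# maximize the smallest constraint-satisfaction membership over the mixture
# fraction. All models and breakpoints are invented for illustration.

def membership(value, full, zero):
    """Linear fuzzy membership: 1 at `full` or better, 0 at `zero` or worse."""
    return max(0.0, min(1.0, (zero - value) / (zero - full)))

def cost(x):                       # x = fraction of material A in the mix
    return 2.0 + 3.0 * x           # assumed linear cost model

def dose(x):
    return 10.0 - 6.0 * x          # assumed linear dose-rate model

best_x, best_mu = 0.0, -1.0
for i in range(1001):              # grid search over the mixture fraction
    x = i / 1000.0
    mu = min(membership(cost(x), full=3.0, zero=5.0),
             membership(dose(x), full=5.0, zero=9.0))
    if mu > best_mu:
        best_x, best_mu = x, mu

print(round(best_x, 3), round(best_mu, 3))
```

    With these assumed linear models the two memberships cross at x = 7/12, where the overall satisfaction level is 0.625; a real shield study would replace the grid search with an LP solve of the equivalent max-λ problem.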

  2. Optimization of temperature-programmed GC separations. II. Off-line simplex optimization and column selection

    NARCIS (Netherlands)

    Snijders, H.M.J.; Janssen, J.G.M.; Cramers, C.A.M.G.; Sandra, P; Bertsch, W.; Sandra, P.; Devos, G.

    1996-01-01

    In this work a method is described which allows off-line optimization of temperature programmed GC separations. Recently, we described a new numerical method to predict off-line retention times and peak widths of a mixture containing components with known identities in capillary GC. In the present

  3. Leakage characterization of top select transistor for program disturbance optimization in 3D NAND flash

    Science.gov (United States)

    Zhang, Yu; Jin, Lei; Jiang, Dandan; Zou, Xingqi; Zhao, Zhiguo; Gao, Jing; Zeng, Ming; Zhou, Wenbin; Tang, Zhaoyun; Huo, Zongliang

    2018-03-01

    In order to optimize program disturbance characteristics effectively, a characterization approach that measures top select transistor (TSG) leakage from the bit-line is proposed to quantify TSG leakage under the program inhibit condition in 3D NAND flash memory. Based on this approach, the effect of Vth modulation of a two-cell TSG on leakage is evaluated. The approach is validated by checking the dependence of leakage, and of the corresponding program disturbance, on the upper and lower TSG Vth. An optimal Vth pattern with a high upper TSG Vth and a low lower TSG Vth is suggested for low leakage current and high boosted channel potential. It is found that the upper TSG plays the dominant role in preventing drain-induced barrier lowering (DIBL) leakage from the boosted channel to the bit-line, while the lower TSG helps further suppress TSG leakage by providing a smooth potential drop from the dummy WL to the edge of the TSG, consequently suppressing the trap-assisted band-to-band tunneling (BTBT) current between the dummy WL and the TSG.

  4. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    International Nuclear Information System (INIS)

    Sutrisno; Widowati; Solikhin

    2016-01-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is located as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well. (paper)
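
    A minimal finite-horizon stochastic DP of this flavor, with two suppliers, random demand, a random unit price for one supplier, and a reference inventory level, might look like the following sketch. All parameters are assumed for illustration, not taken from the paper.

```python
# Finite-horizon stochastic DP sketch: each period choose a supplier and an
# order quantity; demand and supplier B's unit price are random. All numbers
# are illustrative assumptions, not the paper's model.

T, CAP, REF = 3, 4, 2                        # horizon, stock capacity, reference level
DEMANDS = [(1, 0.5), (2, 0.5)]               # (demand, probability)
SUPPLIERS = {"A": [(2.0, 1.0)],              # deterministic unit price
             "B": [(1.0, 0.5), (2.0, 0.5)]}  # random unit price, mean 1.5
HOLD, SHORT = 1.0, 10.0                      # deviation-from-reference and shortage penalties

V = {s: 0.0 for s in range(CAP + 1)}         # terminal value function
policy = {}
for t in reversed(range(T)):                 # backward induction
    newV = {}
    for s in range(CAP + 1):
        best = None
        for name, prices in SUPPLIERS.items():
            for q in range(CAP - s + 1):
                exp_cost = 0.0
                for price, p_prob in prices:
                    for d, d_prob in DEMANDS:
                        shortage = max(0, d - (s + q))
                        s_next = max(0, s + q - d)
                        stage = price * q + HOLD * abs(s_next - REF) + SHORT * shortage
                        exp_cost += p_prob * d_prob * (stage + V[s_next])
                if best is None or exp_cost < best[0]:
                    best = (exp_cost, name, q)
        newV[s] = best[0]
        policy[(t, s)] = (best[1], best[2])
    V = newV

print(policy[(0, 0)])
```

    Because expected cost is linear in the unit price, the policy always orders from the supplier with the lower expected price (B here) whenever it orders at all.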

  5. Integration of genomic information into sport horse breeding programs for optimization of accuracy of selection.

    Science.gov (United States)

    Haberland, A M; König von Borstel, U; Simianer, H; König, S

    2012-09-01

    Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI)) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low-heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' in the aforementioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBVs and estimated breeding values (EBVs) from different countries) were combined into an overall index while altering the accuracies of EBVs and the correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
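
    The r(TI) = 0.27 figure can be reproduced from standard selection index theory: an index on both parents' own phenotypes (unrelated parents, one record each) has accuracy sqrt(h2/2). The naive combination with an assumed-uncorrelated genomic source below is only an approximation, and sits slightly above the 0.54 obtained from the paper's full index calculation.

```python
# Selection-index accuracy for a young horse, trait h2 = 0.15. An index on
# both parents' own phenotypes (unrelated parents, one record each) has
# accuracy sqrt(h2/2), which reproduces the paper's r_TI = 0.27.
import math

h2 = 0.15
r_parent_index = math.sqrt(h2 / 2.0)       # ~0.27, matching the abstract

# Naive combination with an assumed-uncorrelated GEBV of accuracy 0.5; the
# paper's full selection-index calculation gives 0.54, slightly below this.
r_genomic = 0.5
r_combined_naive = math.sqrt(r_parent_index**2 + r_genomic**2)

print(round(r_parent_index, 2), round(r_combined_naive, 2))
```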

  6. Drug efficiency: a new concept to guide lead optimization programs towards the selection of better clinical candidates.

    Science.gov (United States)

    Braggio, Simone; Montanari, Dino; Rossi, Tino; Ratti, Emiliangelo

    2010-07-01

    As a result of their wide acceptance and conceptual simplicity, drug-like concepts are having a major influence on the drug discovery process, particularly in the selection of the 'optimal' absorption, distribution, metabolism, excretion and toxicity and physicochemical parameter space. While they have undisputable value when assessing the potential of lead series or evaluating the inherent risk of a portfolio of drug candidates, they prove much less useful when weighing up compounds to select the best potential clinical candidate. We introduce the concept of drug efficiency as a new tool both to guide drug discovery program teams during the lead optimization phase and to better assess the developability potential of a drug candidate.

  7. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
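
    A small equality-constrained QP can be solved directly from its KKT conditions. This toy instance is not from the book: minimize x1^2 + x2^2 subject to x1 + x2 = 1, whose solution is (0.5, 0.5).

```python
# Equality-constrained QP solved via its KKT system:
#   minimize 1/2 x'Qx + c'x  subject to  Ax = b
# Toy instance (assumed, not from the book): min x1^2 + x2^2 s.t. x1 + x2 = 1.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of x1^2 + x2^2
c = np.array([0.0, 0.0])
A = np.array([[1.0, 1.0]])               # single equality constraint
b = np.array([1.0])

# KKT system: [Q A'; A 0] [x; lam] = [-c; b]
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x)          # [0.5 0.5]
```

    Inequality-constrained and large sparse QPs, the subject of the book, need active-set or interior-point machinery on top of this basic linear-algebra step.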

  8. Natural selection and optimality

    International Nuclear Information System (INIS)

    Torres, J.L.

    1989-01-01

    It is assumed that Darwin's principle translates into optimal regimes of operation along metabolic pathways in an ecological system. Fitness is then defined in terms of the distance of a given individual's thermodynamic parameters from their optimal values. The method is illustrated by testing maximum power as a criterion of merit satisfied in ATP synthesis. (author). 26 refs, 2 figs

  9. Optimal selection of TLD chips

    International Nuclear Information System (INIS)

    Phung, P.; Nicoll, J.J.; Edmonds, P.; Paris, M.; Thompson, C.

    1996-01-01

    Large sets of TLD chips are often used to measure beam dose characteristics in radiotherapy. A sorting method is presented to allow optimal selection of chips from a chosen set. This method considers the variation

  10. Feature selection for portfolio optimization

    DEFF Research Database (Denmark)

    Bjerring, Thomas Trier; Ross, Omri; Weissensteiner, Alex

    2016-01-01

    Most portfolio selection rules based on the sample mean and covariance matrix perform poorly out-of-sample. Moreover, there is a growing body of evidence that such optimization rules are not able to beat simple rules of thumb, such as 1/N. Parameter uncertainty has been identified as one major....... While most of the diversification benefits are preserved, the parameter estimation problem is alleviated. We conduct out-of-sample back-tests to show that in most cases different well-established portfolio selection rules applied on the reduced asset universe are able to improve alpha relative...

  11. Optimal Contracting under Adverse Selection

    DEFF Research Database (Denmark)

    Lenells, Jonatan; Stea, Diego; Foss, Nicolai Juul

    2015-01-01

    We study a model of adverse selection, hard and soft information, and mentalizing ability--the human capacity to represent others' intentions, knowledge, and beliefs. By allowing for a continuous range of different information types, as well as for different means of acquiring information, we dev...... of that information. This strategy affects the properties of the optimal contract, which grows closer to the first best. This research provides insights into the implications of mentalizing for agency theory....

  12. [Selection of indicators for continuous monitoring of the impact of programs optimizing antimicrobial use in Primary Care].

    Science.gov (United States)

    Fernández-Urrusuno, Rocío; Flores-Dorado, Macarena; Moreno-Campoy, Eva; Montero-Balosa, M Carmen

    2015-05-01

    To determine core indicators for monitoring prescribing quality in Primary Care based on the evidence, and to assess the feasibility of these indicators for monitoring the use of antibiotics. A literature review was carried out on quality indicators for antimicrobial prescribing through an electronic search limited to the period 2001-2012. It was completed with an "ad hoc" search on the websites of national and international public health services. Finally, indicators were chosen by consensus by a multidisciplinary group of professionals dedicated to managing infections from several areas. The feasibility and applicability of these indicators were verified through the reporting and use of data in the prescription database. Twenty-two indicators were found. The consensus group selected 16 indicators: eleven measure specific antimicrobial selection, and five are consumption rates. The indicators were successfully applied to the prescription database, making it possible to compare different geographical areas and to observe prescription trends. The definition of a basic set of indicators to monitor antibiotic use adapted to local conditions is required. The results of these indicators can be used for feedback to professionals and for evaluating the impact of programs aimed at improving antimicrobial use. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  13. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, the students can learn time management and can develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods: the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on three chosen criteria: soft skills, interesting activities, and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.
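
    The AHP step, extracting criteria weights as the principal eigenvector of a reciprocal pairwise-comparison matrix, can be sketched as follows. The comparison values are invented; they merely reproduce the paper's qualitative finding that soft skills dominate.

```python
# AHP weight extraction: principal eigenvector of a reciprocal pairwise
# comparison matrix via power iteration. The comparison values are invented
# for illustration; the paper's questionnaire data are not reproduced here.

# criteria order: soft skills, interesting activities, performance
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]

w = [1/3.0] * 3
for _ in range(100):                      # power iteration
    w = [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(w)
    w = [x / s for x in w]                # normalize to sum 1

print([round(x, 3) for x in w])
```

    For this near-consistent matrix the weights come out roughly (0.65, 0.23, 0.12), i.e. soft skills first; the selected weights would then feed the ZOGP model as objective coefficients.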

  14. Optimal Implantable Cardioverter Defibrillator Programming.

    Science.gov (United States)

    Shah, Bindi K

    Optimal programming of implantable cardioverter defibrillators (ICDs) is essential to appropriately treat ventricular tachyarrhythmias and to avoid unnecessary and inappropriate shocks. There have been a series of large clinical trials evaluating tailored programming of ICDs. We reviewed the clinical trials evaluating ICD therapies and detection, and the consensus statement on ICD programming. In doing so, we found that prolonged ICD detection times, higher rate cutoffs, and antitachycardia pacing (ATP) programming decrease inappropriate and painful therapies in a primary prevention population. The use of supraventricular tachyarrhythmia discriminators can also decrease inappropriate shocks. Tailored ICD programming using the knowledge gained from recent ICD trials can decrease inappropriate and unnecessary ICD therapies and decrease mortality.

  15. Optimal decisions principles of programming

    CERN Document Server

    Lange, Oskar

    1971-01-01

    Optimal Decisions: Principles of Programming deals with all important problems related to programming. This book provides a general interpretation of the theory of programming based on the application of Lagrange multipliers, followed by a presentation of marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed. This text covers Koopmans' model of transportation, the geometric interpretation of the programming problem, and the nature of activity analysis. The solution of t

  16. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
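
    A greedy stand-in for sensor-suite selection, scoring each sensor by the fault modes it newly covers, conveys the flavor of such procedures. The fault-signature table is hypothetical, and the S4 merit metric described in the abstract is considerably richer than plain coverage.

```python
# Greedy stand-in for sensor-suite selection: repeatedly add the sensor that
# detects the most not-yet-covered fault modes. The fault-signature table is
# hypothetical; the S4 merit metric is considerably richer than coverage.

signatures = {                  # sensor -> fault modes it can detect
    "s0": {"f1"},
    "s1": {"f2", "f3", "f4"},
    "s2": {"f1", "f5"},
}
budget = 2                      # size of the sensor suite

selected, covered = [], set()
for _ in range(budget):
    candidates = [s for s in sorted(signatures) if s not in selected]
    best = max(candidates, key=lambda s: len(signatures[s] - covered))
    selected.append(best)
    covered |= signatures[best]

print(selected, sorted(covered))
```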

  17. Optimization over polynomials : Selected topics

    NARCIS (Netherlands)

    Laurent, M.; Jang, Sun Young; Kim, Young Rock; Lee, Dae-Woong; Yie, Ikkwon

    2014-01-01

    Minimizing a polynomial function over a region defined by polynomial inequalities models broad classes of hard problems from combinatorics, geometry and optimization. New algorithmic approaches have emerged recently for computing the global minimum, by combining tools from real algebra (sums of

  18. Programming for Sparse Minimax Optimization

    DEFF Research Database (Denmark)

    Jonasson, K.; Madsen, Kaj

    1994-01-01

    We present an algorithm for nonlinear minimax optimization which is well suited for large and sparse problems. The method is based on trust regions and sequential linear programming. On each iteration, a linear minimax problem is solved for a basic step. If necessary, this is followed...... by the determination of a minimum norm corrective step based on a first-order Taylor approximation. No Hessian information needs to be stored. Global convergence is proved. This new method has been extensively tested and compared with other methods, including two well known codes for nonlinear programming...

  19. Optimal set of selected uranium enrichments that minimizes blending consequences

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Lobber, J.S. Jr.

    1977-01-01

    Identities, quantities, and costs associated with producing a set of selected enrichments and blending them to provide fuel for existing reactors are investigated using an optimization model constructed with appropriate constraints. Selected enrichments are required for either nuclear reactor fuel standardization or potential uranium enrichment alternatives such as the gas centrifuge. Using a mixed-integer linear program, the model minimizes present-worth costs for a 39-product-enrichment reference case. For four ingredients, the marginal blending cost is only 0.18% of the total direct production cost. Natural uranium is not an optimal blending ingredient. Optimal values reappear in most sets of ingredient enrichments.
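
    The core blending arithmetic, hitting a target enrichment from two stock enrichments via mass and U-235 balances, is a two-equation linear system. The numbers below are invented; the paper's model is a full mixed-integer program over 39 product enrichments and several ingredients.

```python
# Two-ingredient blending sketch: mix stock enrichments to hit a product
# enrichment exactly, then read off the blending cost. Enrichments and costs
# are invented; the paper's model is a mixed-integer program over many
# products and ingredients.

e_hi, e_lo = 5.0, 2.0        # stock enrichments (% U-235)
c_hi, c_lo = 10.0, 4.0       # unit costs of the two stocks
e_target = 3.0               # required product enrichment

# mass balance: f_hi + f_lo = 1;  U-235 balance: e_hi*f_hi + e_lo*f_lo = e_target
f_hi = (e_target - e_lo) / (e_hi - e_lo)
f_lo = 1.0 - f_hi
cost = c_hi * f_hi + c_lo * f_lo
print(round(f_hi, 4), round(cost, 4))
```

    Here one third of the blend comes from the 5% stock, at a unit cost of 6.0; the full MILP additionally chooses which enrichments to produce at all.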

  20. Conjugate gradient optimization programs for shuttle reentry

    Science.gov (United States)

    Powers, W. F.; Jacobson, R. A.; Leonard, D. A.

    1972-01-01

    Two computer programs for shuttle reentry trajectory optimization are listed and described. Both programs use the conjugate gradient method as the optimization procedure. The Phase 1 Program is developed in cartesian coordinates for a rotating spherical earth, and crossrange, downrange, maximum deceleration, total heating, and terminal speed, altitude, and flight path angle are included in the performance index. The programs make extensive use of subroutines so that they may be easily adapted to other atmospheric trajectory optimization problems.
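
    The conjugate gradient iteration at the heart of both programs can be shown in its linear form on a small SPD system, where it converges in at most n steps; the shuttle programs apply the nonlinear variant to a far more complex trajectory functional.

```python
# Minimizing the quadratic 1/2 x'Ax - b'x with the (linear) conjugate
# gradient method; for an SPD 2x2 system it converges in at most two steps.
# The instance is a textbook toy, not a reentry functional.

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def mv(M, v):                       # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = [0.0, 0.0]
r = [bi - ai for bi, ai in zip(b, mv(A, x))]   # residual b - Ax
p = r[:]
for _ in range(2):
    Ap = mv(A, p)
    alpha = dot(r, r) / dot(p, Ap)             # exact line search
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
    beta = dot(r_new, r_new) / dot(r, r)       # Fletcher-Reeves update
    r = r_new
    p = [ri + beta * pi for ri, pi in zip(r, p)]

print([round(v, 4) for v in x])     # ≈ [0.0909, 0.6364]
```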

  1. Feature Selection via Chaotic Antlion Optimization.

    Directory of Open Access Journals (Sweden)

    Hossam M Zawbaa

    Selecting a subset of relevant properties from a large set of features that describe a dataset is a challenging machine-learning task. In biology, for instance, advances in the available technologies enable the generation of a very large number of biomarkers that describe the data. Choosing the more informative markers along with performing a high-accuracy classification over the data can be a daunting task, particularly if the data are high dimensional. An often adopted approach is to formulate the feature selection problem as a biobjective optimization problem, with the aim of maximizing the performance of the data analysis model (the quality of the training fit) while minimizing the number of features used. We propose an optimization approach for the feature selection problem that considers a "chaotic" version of the antlion optimizer method, a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. The balance between exploration of the search space and exploitation of the best solutions is a challenge in multi-objective optimization. The exploration/exploitation rate is controlled by the parameter I that limits the random-walk range of the ants/prey. This variable is increased iteratively in a quasi-linear manner to decrease the exploration rate as the optimization progresses. The quasi-linear increase in the variable I may lead to immature convergence in some cases and trapping in local minima in other cases. The chaotic system proposed here attempts to improve the tradeoff between exploration and exploitation. The methodology is evaluated using different chaotic maps on a number of feature selection datasets. To ensure generality, we used ten biological datasets, but we also used other types of data from various sources. The results are compared with the particle swarm optimizer and with genetic algorithm variants for feature selection using a set of quality metrics.
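
    The chaotic-modulation idea can be illustrated with a logistic map jittering the quasi-linear schedule for the exploration-limiting parameter I. The map choice and scaling below are assumptions for illustration, not the paper's exact scheme.

```python
# The chaotic modulation idea in miniature: a logistic-map sequence jitters
# the quasi-linear schedule of the exploration-limiting parameter I.
# Map choice and scaling are illustrative, not the paper's exact scheme.

def logistic_map(x0, n, r=4.0):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)      # chaotic for r = 4 on (0, 1)
        xs.append(x)
    return xs

iters = 100
chaos = logistic_map(0.7, iters)
# quasi-linear schedule (I grows, exploration shrinks) vs chaotic schedule
I_linear = [1.0 + 9.0 * t / (iters - 1) for t in range(iters)]
I_chaotic = [lin * (0.5 + c) for lin, c in zip(I_linear, chaos)]

print(min(chaos), max(chaos))
```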

  2. Optimality Theory and Lexical Interpretation and Selection

    NARCIS (Netherlands)

    Hogeweg, L.; Legendre, G.; Putnam, M.T.; de Swart, H.; Zaroukian, E.

    2016-01-01

    This chapter argues for an optimization approach to the selection and interpretation of words. Several advantages of such an approach to lexical semantics are discussed. First of all, it will be argued that competition, entailing that words and interpretations are always judged in relation to other

  3. Optimized remedial groundwater extraction using linear programming

    International Nuclear Information System (INIS)

    Quinn, J.J.

    1995-01-01

    Groundwater extraction systems are typically installed to remediate contaminant plumes or prevent further spread of contamination. These systems are expensive to install and maintain. A traditional approach to designing such a wellfield uses a series of trial-and-error simulations to test the effects of various well locations and pump rates. However, the optimal locations and pump rates of extraction wells are difficult to determine when objectives related to the site hydrogeology and potential pumping scheme are considered. This paper describes a case study of an application of linear programming theory to determine optimal well placement and pump rates. The objectives of the pumping scheme were to contain contaminant migration and reduce contaminant concentrations while minimizing the total amount of water pumped and treated. Past site activities at the area under study included disposal of contaminants in pits. Several groundwater plumes have been identified, and others may be present. The area of concern is bordered on three sides by a wetland, which receives a portion of its input budget as groundwater discharge from the pits. Optimization of the containment pumping scheme was intended to meet three goals: (1) prevent discharge of contaminated groundwater to the wetland, (2) minimize the total water pumped and treated (cost benefit), and (3) avoid dewatering of the wetland (cost and ecological benefits). Possible well locations were placed at known source areas. To constrain the problem, the optimization program was instructed to prevent any flow toward the wetland along a user-specified border. In this manner, the optimization routine selects well locations and pump rates so that a groundwater divide is produced along this boundary.
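
    Under a response-matrix formulation, such containment problems reduce to a small LP: minimize total pumping subject to linear gradient-reversal constraints at control points along the wetland boundary. The sketch below uses invented coefficients and solves the two-well case by enumerating candidate vertices; real response coefficients would come from the site groundwater model.

```python
# Response-matrix sketch of the containment LP: minimize total pumping
# q1 + q2 subject to linear gradient-reversal constraints along the wetland
# boundary. Coefficients are invented; real ones come from the site model.

# a1*q1 + a2*q2 >= rhs at each control point
cons = [((2.0, 1.0), 4.0),
        ((1.0, 3.0), 6.0)]

def feasible(q):
    if q[0] < 0 or q[1] < 0:
        return False
    return all(a[0] * q[0] + a[1] * q[1] >= rhs - 1e-9 for a, rhs in cons)

# candidate vertices: axis intercepts plus the intersection of the two lines
candidates = []
for a, rhs in cons:
    candidates.append((rhs / a[0], 0.0))
    candidates.append((0.0, rhs / a[1]))
(a1, r1), (a2, r2) = cons
det = a1[0] * a2[1] - a1[1] * a2[0]
if det:
    candidates.append(((r1 * a2[1] - r2 * a1[1]) / det,
                       (a1[0] * r2 - a2[0] * r1) / det))

best = min((q for q in candidates if feasible(q)), key=lambda q: q[0] + q[1])
print(best)   # (1.2, 1.6): total pumping 2.8
```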

  4. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, including bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be applied directly. This presentation will introduce baseline stochastic optimization algorithms and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problems, to problems for which a non-parametric formulation can help explore unforeseen solution spaces.
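
    A baseline stochastic optimizer of the kind such a talk introduces is the (1+1) evolution strategy; here it is run on the sphere function with a simple multiplicative step-size adaptation. Purely illustrative, not the presenter's material.

```python
# Baseline stochastic optimizer: a (1+1) evolution strategy on the sphere
# function with 1/5th-rule-style step-size adaptation. Purely illustrative.
import random

def sphere(x):
    return sum(v * v for v in x)

random.seed(0)
x = [5.0, -3.0, 4.0]
fx = sphere(x)
sigma = 1.0
for _ in range(500):
    y = [v + random.gauss(0.0, sigma) for v in x]
    fy = sphere(y)
    if fy < fx:                      # accept only improvements
        x, fx = y, fy
        sigma *= 1.1                 # widen step after success
    else:
        sigma *= 0.98                # shrink step after failure

print(round(fx, 3))
```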

  5. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple-criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  6. Selecting Optimal Subset of Security Controls

    OpenAIRE

    Yevseyeva, I.; Basto-Fernandes, V.; Emmerich, Michael T. M.; van Moorsel, A.

    2015-01-01

    Choosing an optimal investment in information security is an issue most companies face these days. Which security controls should be bought to protect the IT system of a company in the best way? Selecting a subset of security controls among many available ones can be seen as a resource allocation problem that should take into account conflicting objectives and constraints of the problem. In particular, the security of the system should be improved without hindering productivity, ...
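
    With a single objective and a budget constraint, control selection collapses to a 0/1 knapsack solvable by dynamic programming; the paper itself treats multiple conflicting objectives. Control names, values, and costs below are invented.

```python
# Control selection as a 0/1 knapsack: maximize total risk reduction within
# a budget. Single-objective sketch; the paper is multi-objective. Values
# and costs are invented.

controls = [("firewall", 10, 4), ("training", 7, 3), ("backup", 5, 2)]
budget = 5

# dp[c] = (best risk reduction within budget c, chosen control names)
dp = [(0, [])] * (budget + 1)
for name, value, cost in controls:
    new = list(dp)
    for c in range(cost, budget + 1):
        cand_val = dp[c - cost][0] + value
        if cand_val > new[c][0]:
            new[c] = (cand_val, dp[c - cost][1] + [name])
    dp = new

best_value, chosen = dp[budget]
print(best_value, chosen)
```

    Here training plus backup (value 12, cost 5) beats the single firewall (value 10, cost 4), a small reminder that greedy by value alone is not optimal.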

  7. Efficient dynamic optimization of logic programs

    Science.gov (United States)

    Laird, Phil

    1992-01-01

    A summary is given of the dynamic optimization approach to speed up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.

  8. Optimal Portfolio Selection Under Concave Price Impact

    International Nuclear Information System (INIS)

    Ma Jin; Song Qingshuo; Xu Jing; Zhang Jianfeng

    2013-01-01

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a “piecewise constant” form, reflecting a more practical perspective.

  9. Optimal Portfolio Selection Under Concave Price Impact

    Energy Technology Data Exchange (ETDEWEB)

    Ma Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States); Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk [City University of Hong Kong, Department of Mathematics (Hong Kong); Xu Jing, E-mail: xujing8023@yahoo.com.cn [Chongqing University, School of Economics and Business Administration (China); Zhang Jianfeng, E-mail: jianfenz@usc.edu [University of Southern California, Department of Mathematics (United States)

    2013-06-15

    In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.

  10. Doctoral Program Selection Using Pairwise Comparisons.

    Science.gov (United States)

    Tadisina, Suresh K.; Bhasin, Vijay

    1989-01-01

    The application of a pairwise comparison methodology (Saaty's Analytic Hierarchy Process) to the doctoral program selection process is illustrated. A hierarchy for structuring and facilitating the doctoral program selection decision is described. (Author/MLW)

  11. Quantum dot laser optimization: selectively doped layers

    Science.gov (United States)

    Korenev, Vladimir V.; Konoplev, Sergey S.; Savelyev, Artem V.; Shernyakov, Yurii M.; Maximov, Mikhail V.; Zhukov, Alexey E.

    2016-08-01

    Edge emitting quantum dot (QD) lasers are discussed. It has been recently proposed to use modulation p-doping of the layers that are adjacent to QD layers in order to control the QD's charge state. Experimentally it has been proven useful to enhance ground state lasing and suppress the onset of excited state lasing at high injection. These results have been also confirmed with numerical calculations involving solution of drift-diffusion equations. However, deep understanding of the physical reasons for such behavior and laser optimization requires analytical approaches to the problem. In this paper, under a set of assumptions we provide an analytical model that explains major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain underlying physics.

  12. Quantum dot laser optimization: selectively doped layers

    International Nuclear Information System (INIS)

    Korenev, Vladimir V; Konoplev, Sergey S; Savelyev, Artem V; Shernyakov, Yurii M; Maximov, Mikhail V; Zhukov, Alexey E

    2016-01-01

    Edge emitting quantum dot (QD) lasers are discussed. It has been recently proposed to use modulation p-doping of the layers that are adjacent to QD layers in order to control the QD's charge state. Experimentally it has been proven useful to enhance ground state lasing and suppress the onset of excited state lasing at high injection. These results have been also confirmed with numerical calculations involving solution of drift-diffusion equations. However, deep understanding of the physical reasons for such behavior and laser optimization requires analytical approaches to the problem. In this paper, under a set of assumptions we provide an analytical model that explains major effects of selective p-doping. Capture rates of electrons and holes can be calculated by solving Poisson equations for electrons and holes around the charged QD layer. The charge itself is governed by the capture rates and the selective doping concentration. We analyzed this self-consistent set of equations and showed that it can be used to optimize QD laser performance and to explain underlying physics. (paper)

  13. Ant colony optimization and constraint programming

    CERN Document Server

    Solnon, Christine

    2013-01-01

    Ant colony optimization is a metaheuristic which has been successfully applied to a wide range of combinatorial optimization problems. The author describes this metaheuristic and studies its efficiency for solving some hard combinatorial problems, with a specific focus on constraint programming. The text is organized into three parts. The first part introduces constraint programming, which provides high level features to declaratively model problems by means of constraints. It describes the main existing approaches for solving constraint satisfaction problems, including complete tree search

  14. Dynamic programming for QFD in PES optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sorrentino, R. [Mediterranean Univ. of Reggio Calabria, Reggio Calabria (Italy). Dept. of Computer Science and Electrical Technology

    2008-07-01

    Quality function deployment (QFD) is a method for linking the needs of the customer with design, development, engineering, manufacturing, and service functions. In the electric power industry, QFD is used to help designers concentrate on the most important technical attributes to develop better electrical services. Most optimization approaches used in QFD analysis have been based on integer or linear programming. These approaches perform well in certain circumstances, but there are problems that hinder their practical use. This paper proposed an approach to optimize Power and Energy Systems (PES). A dynamic programming approach was used along with an extended House of Quality to gather information. Dynamic programming was used to allocate the limited resources to the technical attributes. The approach integrated dynamic programming into the electrical service design process. The dynamic programming approach did not require the full relationship curve between technical attributes and customer satisfaction, or the relationship between technical attributes and cost. It only used a group of discrete points containing information about customer satisfaction, technical attributes, and the cost to find the optimal product design. Therefore, it required less time and resources than other approaches. At the end of the optimization process, the value of each technical attribute, the related cost, and the overall customer satisfaction were obtained at the same time. It was concluded that compared with other optimization methods, the dynamic programming method requires less information and the optimal results are more relevant. 21 refs., 2 tabs., 2 figs.
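The allocation step described above — distributing a limited budget across technical attributes using only discrete (cost, customer-satisfaction) points from the House of Quality — is a multiple-choice knapsack solvable by dynamic programming. The attribute levels and budget below are invented for illustration.

```python
# DP over (attributes processed, budget spent): exactly one level is chosen
# per technical attribute; the (0, 0.0) level means "no investment".
def allocate(budget, attributes):
    """attributes: list of lists of (cost, satisfaction) options."""
    NEG = float("-inf")
    best = [NEG] * (budget + 1)   # best[s] = max satisfaction at spend s
    best[0] = 0.0
    for options in attributes:
        new = [NEG] * (budget + 1)
        for spent, value in enumerate(best):
            if value == NEG:
                continue
            for cost, sat in options:
                if spent + cost <= budget:
                    new[spent + cost] = max(new[spent + cost], value + sat)
        best = new
    return max(best)

attrs = [
    [(0, 0.0), (2, 3.0), (4, 5.0)],   # attribute A: discrete levels
    [(0, 0.0), (3, 4.0), (5, 6.0)],   # attribute B
    [(0, 0.0), (1, 1.5), (2, 2.5)],   # attribute C
]
result = allocate(6, attrs)           # -> 8.5 (levels costing 2 + 3 + 1)
```

This matches the abstract's point: only a group of discrete points is needed, not full satisfaction-versus-cost curves.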

  15. Graphic Interface for LCP2 Optimization Program

    DEFF Research Database (Denmark)

    Nicolae, Taropa Laurentiu; Gaunholt, Hans

    1998-01-01

    This report provides information about the software interface programmed for the optimization program LCP2. The first part gives a general description of the program, followed by a guide to using the interface. The last chapters discuss problems and future extensions of the project. The program is written in Visual C++ 5.0 on a Windows NT 4.0 operating system.

  16. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S.; Milby, M.; Baker, J.

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for fifteen Chicagoland single family housing archetypes, called housing groups. In the present study, 800 IHP homes are first matched to one of these fifteen housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  17. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Milby, M. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Baker, J. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States)

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for 15 Chicagoland single family housing archetypes. In the present study, 800 IHP homes are first matched to one of these 15 housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  18. Optimal portfolio selection between different kinds of Renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Zakerinia, MohammadSaleh; Piltan, Mehdi; Ghaderi, Farid

    2010-09-15

    In this paper, selection of the optimal energy supply system in an industrial unit is taken into consideration. This study takes environmental, economic and social parameters into consideration in the modeling, along with technical factors. Several alternatives, including renewable energy sources, micro-CHP systems and conventional systems, have been compared by means of an integrated model of linear programming and three multi-criteria approaches (AHP, TOPSIS and ELECTRE III). New parameters like availability of sources and fuels' price volatility, besides traditional factors, are considered in different scenarios. Results show that, with environmental preferences, renewable sources and micro-CHP are good alternatives to conventional systems.
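Of the three multi-criteria approaches mentioned, TOPSIS is the most compact to sketch: alternatives are ranked by their relative closeness to an ideal solution. The decision matrix, alternative names, and weights below are invented, and all criteria are treated as benefit-type (higher is better).

```python
# Minimal TOPSIS: vector-normalize each criterion column, weight it, then
# score each alternative by closeness to the ideal (column maxima) versus
# the anti-ideal (column minima).
import math

def topsis(matrix, weights):
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    scaled = [[w * x / n for x, n, w in zip(row, norms, weights)]
              for row in matrix]
    ideal = [max(col) for col in zip(*scaled)]
    nadir = [min(col) for col in zip(*scaled)]
    scores = []
    for row in scaled:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, nadir)   # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# rows: micro-CHP, solar PV, conventional (hypothetical alternatives);
# columns: efficiency, CO2 score, cost score (hypothetical criteria)
alternatives = [[0.8, 0.6, 0.7], [0.6, 0.9, 0.5], [0.9, 0.3, 0.8]]
scores = topsis(alternatives, weights=[0.4, 0.4, 0.2])
```

A real study would also handle cost-type criteria (lower is better), typically by flipping the ideal/anti-ideal per column.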

  19. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    Science.gov (United States)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.
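Structurally this is a selection problem with two resource budgets: maximize an availability measure subject to cargo weight and volume limits. A brute-force sketch over a handful of hypothetical ORUs (all names and numbers invented):

```python
# Two-constraint 0/1 selection: pick spares maximizing a benefit score
# under weight and volume budgets. Exhaustive search is fine at this size;
# the paper's model would use a mathematical-programming solver instead.
from itertools import combinations

def select_spares(items, max_weight, max_volume):
    """items: dict name -> (benefit, weight, volume)."""
    names = list(items)
    best_set, best_benefit = (), 0.0
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            w = sum(items[n][1] for n in subset)
            v = sum(items[n][2] for n in subset)
            if w <= max_weight and v <= max_volume:
                b = sum(items[n][0] for n in subset)
                if b > best_benefit:
                    best_set, best_benefit = subset, b
    return best_set, best_benefit

spares = {                      # hypothetical ORUs: (benefit, kg, m^3)
    "pump":    (0.30, 40, 2.0),
    "gyro":    (0.25, 25, 1.5),
    "battery": (0.20, 50, 1.0),
    "valve":   (0.10, 10, 0.5),
}
chosen, benefit = select_spares(spares, max_weight=80, max_volume=3.5)
```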

  20. Optimization of biotechnological systems through geometric programming

    Directory of Open Access Journals (Sweden)

    Torres Nestor V

    2007-09-01

    Full Text Available Abstract Background In the past, tasks of model based yield optimization in metabolic engineering were either approached with stoichiometric models or with structured nonlinear models such as S-systems or linear-logarithmic representations. These models stand out among most others, because they allow the optimization task to be converted into a linear program, for which efficient solution methods are widely available. For pathway models not in one of these formats, an Indirect Optimization Method (IOM was developed where the original model is sequentially represented as an S-system model, optimized in this format with linear programming methods, reinterpreted in the initial model form, and further optimized as necessary. Results A new method is proposed for this task. We show here that the model format of a Generalized Mass Action (GMA system may be optimized very efficiently with techniques of geometric programming. We briefly review the basics of GMA systems and of geometric programming, demonstrate how the latter may be applied to the former, and illustrate the combined method with a didactic problem and two examples based on models of real systems. The first is a relatively small yet representative model of the anaerobic fermentation pathway in S. cerevisiae, while the second describes the dynamics of the tryptophan operon in E. coli. Both models have previously been used for benchmarking purposes, thus facilitating comparisons with the proposed new method. In these comparisons, the geometric programming method was found to be equal or better than the earlier methods in terms of successful identification of optima and efficiency. Conclusion GMA systems are of importance, because they contain stoichiometric, mass action and S-systems as special cases, along with many other models. 
Furthermore, it was previously shown that algebraic equivalence transformations of variables are sufficient to convert virtually any types of dynamical models into
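The reason geometric programming applies so naturally to GMA systems is that every GMA flux is a power law (a monomial), which becomes linear after a logarithmic change of variables. A quick numerical check of that identity, with invented coefficients:

```python
# A monomial flux v = k * x1**a * x2**b satisfies
#   log v = log k + a*log x1 + b*log x2,
# i.e. it is linear in y_i = log x_i, which is what lets yield
# optimization over power-law models be treated with linear/geometric
# programming machinery. Coefficients here are illustrative.
import math

k, a, b = 2.0, 0.5, -1.3

def flux(x1, x2):
    return k * x1**a * x2**b

def log_flux_linear(y1, y2):
    return math.log(k) + a * y1 + b * y2

x1, x2 = 3.7, 0.9
assert math.isclose(math.log(flux(x1, x2)),
                    log_flux_linear(math.log(x1), math.log(x2)))
```

Full geometric programming also handles sums of such monomials (posynomials); this sketch shows only the single-term case that makes the log transform exact.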

  1. Portfolio optimization using fuzzy linear programming

    Science.gov (United States)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
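For two assets the quadratic program has a closed-form minimum-variance solution, which makes the QP view of portfolio optimization easy to sanity-check. The volatilities and correlation below are invented:

```python
# Two-asset mean-variance sketch: portfolio variance is a quadratic in the
# weight w on asset 1, so setting its derivative to zero gives the
# minimum-variance weight in closed form (shorting allowed).
def min_variance_weight(s1, s2, rho):
    cov = rho * s1 * s2
    return (s2 * s2 - cov) / (s1 * s1 + s2 * s2 - 2 * cov)

def portfolio_variance(w, s1, s2, rho):
    return w * w * s1 * s1 + (1 - w) * (1 - w) * s2 * s2 \
        + 2 * w * (1 - w) * rho * s1 * s2

s1, s2, rho = 0.20, 0.10, 0.3   # illustrative volatilities / correlation
w = min_variance_weight(s1, s2, rho)
```

The fuzzy variant in the paper replaces crisp parameters like `s1`, `s2` with fuzzy numbers; the crisp quadratic above is the backbone that gets linearized.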

  2. Optimal selection of biochars for remediating metals ...

    Science.gov (United States)

    Approximately 500,000 abandoned mines across the U.S. pose a considerable, pervasive risk to human health and the environment due to possible exposure to the residuals of heavy metal extraction. Historically, a variety of chemical and biological methods have been used to reduce the bioavailability of the metals at mine sites. Biochar with its potential to complex and immobilize heavy metals, is an emerging alternative for reducing bioavailability. Furthermore, biochar has been reported to improve soil conditions for plant growth and can be used for promoting the establishment of a soil-stabilizing native plant community to reduce offsite movement of metal-laden waste materials. Because biochar properties depend upon feedstock selection, pyrolysis production conditions, and activation procedures used, they can be designed to meet specific remediation needs. As a result biochar with specific properties can be produced to correspond to specific soil remediation situations. However, techniques are needed to optimally match biochar characteristics with metals contaminated soils to effectively reduce metal bioavailability. Here we present experimental results used to develop a generalized method for evaluating the ability of biochar to reduce metals in mine spoil soil from an abandoned Cu and Zn mine. Thirty-eight biochars were produced from approximately 20 different feedstocks and produced via slow pyrolysis or gasification, and were allowed to react with a f

  3. Diet selection of African elephant over time shows changing optimization currency

    NARCIS (Netherlands)

    Pretorius, Y.; Stigter, J.D.; Boer, de W.F.; Wieren, van S.E.; Jong, de C.B.; Knegt, de H.J.; Grant, R.C.; Heitkonig, I.M.A.; Knox, N.; Kohi, E.; Mwakiwa, E.; Peel, M.J.S.; Skidmore, A.K.; Slotow, R.; Waal, van der C.; Langevelde, van F.; Prins, H.H.T.

    2012-01-01

    Multiple factors determine diet selection of herbivores. However, in many diet studies selection of single nutrients is studied or optimization models are developed using only one currency. In this paper, we use linear programming to explain diet selection by African elephant based on plant
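A linear-programming diet model of the kind used in such studies can be sketched with two food classes and a few linear constraints; for two variables the optimum lies at a vertex of the feasible polygon, so enumerating constraint intersections suffices. All coefficients below are invented, not the study's data.

```python
# Maximize energy intake over grass (g) and browse (w) intake, kg/day,
# subject to linear constraints written as a*g + b*w <= c.
from itertools import combinations

constraints = [
    (1.0, 1.0, 40.0),      # total intake capacity
    (0.5, 1.5, 45.0),      # digestion-time budget
    (-0.06, -0.10, -3.0),  # protein requirement: 0.06g + 0.10w >= 3.0
    (-1.0, 0.0, 0.0),      # g >= 0
    (0.0, -1.0, 0.0),      # w >= 0
]
energy = (6.0, 8.0)        # MJ per kg of grass / browse (hypothetical)

def solve(constraints, objective):
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                     # parallel constraints: no vertex
        g = (c1 * b2 - c2 * b1) / det    # Cramer's rule for the 2x2 system
        w = (a1 * c2 - a2 * c1) / det
        if all(a * g + b * w <= c + 1e-9 for a, b, c in constraints):
            value = objective[0] * g + objective[1] * w
            if best is None or value > best[0]:
                best = (value, g, w)
    return best

value, grass, browse = solve(constraints, energy)
```

Swapping the objective vector is how one tests different optimization currencies (energy versus protein, say), which is the point the abstract makes.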

  4. Optimal installation program for reprocessing plants

    International Nuclear Information System (INIS)

    Kubokawa, Toshihiko; Kiyose, Ryohei

    1976-01-01

    Optimization of the program of installation of reprocessing plants is mathematically formulated as problem of mixed integer programming, which is numerically solved by the branch-and-bound method. A new concept of quasi-penalty is used to obviate the difficulties associated with dual degeneracy. The finiteness of the useful life of the plant is also taken into consideration. It is shown that an analogous formulation is possible for the cases in which the demand forecasts and expected plant lives cannot be predicted with certainty. The scale of the problem is found to have kN binary variables, (k+2)N continuous variables, and (k+3)N constraint conditions, where k is the number of intervals used in the piece-wise linear approximation of a nonlinear objective function, and N the overall duration of the period covered by the installation program. Calculations are made for N=24 yr and k=3, with the assumption that the plant life is 15 yr, the plant scale factor 0.5, and the maximum plant capacity 900 (t/yr). The results are calculated and discussed for four different demand forecasts. The difference of net profit between optimal and non-optimal installation programs is found to be in the range of 50 -- 100 M$. The pay-off matrix is calculated, and the optimal choice of action when the demand cannot be forecast with certainty is determined by applying Bayes' theory. The optimal installation program under such conditions of uncertainty is obtained also with a stochastic mixed integer programming model. (auth.)
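The final step of the abstract — applying Bayes' theory to a pay-off matrix when demand cannot be forecast with certainty — reduces to choosing the action with the highest expected pay-off. The actions, pay-offs (net profit, M$) and demand probabilities below are invented for illustration:

```python
# Bayes decision over a pay-off matrix: expected pay-off of each action
# under the demand-scenario probabilities, then pick the argmax.
payoff = {                       # action -> pay-off under (low, med, high) demand
    "build_small": (80.0, 60.0, 30.0),
    "build_large": (20.0, 70.0, 120.0),
    "defer":       (50.0, 50.0, 50.0),
}
prob = (0.3, 0.5, 0.2)           # P(low), P(med), P(high)

def bayes_choice(payoff, prob):
    expected = {a: sum(p * v for p, v in zip(prob, row))
                for a, row in payoff.items()}
    return max(expected, key=expected.get), expected

action, expected = bayes_choice(payoff, prob)
```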

  5. Propositional Optimal Trajectory Programming for Improving Stability ...

    African Journals Online (AJOL)

    Propositional Optimal Trajectory Programming for Improving Stability of Hermite Definite Control System. ... Knowledge of systems operation subjected to heat diffusion constraints is required of systems analysts. In an instance that ...

  6. Optimization of Product Instantiation using Integer Programming

    NARCIS (Netherlands)

    van den Broek, P.M.; Botterweck, Goetz; Jarzabek, Stan; Kishi, Tomoji

    2010-01-01

    We show that Integer Programming (IP) can be used as an optimization technique for the instantiation of products of feature models. This is done by showing that the constraints of feature models can be written in linear form. As particular IP technique, we use Gomory cutting planes. We have applied
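The paper's key observation — that feature-model constraints can be written in linear 0-1 form — can be illustrated directly: with x_f ∈ {0,1}, "A requires B" becomes x_A ≤ x_B and an xor-group becomes a sum constraint. The toy feature model and costs below are invented, and brute-force enumeration stands in for a real IP solver with cutting planes.

```python
# Feature-model instantiation as 0-1 optimization: each constraint below
# is linear in the binary feature variables, so an IP solver could handle
# it; we enumerate the 2^4 assignments instead for this toy example.
from itertools import product

features = ["root", "gui", "cli", "logging"]
cost = {"root": 0, "gui": 5, "cli": 2, "logging": 1}   # hypothetical costs

def feasible(x):
    return (x["root"] == 1                    # root feature is mandatory
            and x["gui"] + x["cli"] == 1      # xor-group: exactly one front end
            and x["gui"] <= x["logging"])     # "gui requires logging"

def cheapest_product():
    best = None
    for bits in product((0, 1), repeat=len(features)):
        x = dict(zip(features, bits))
        if feasible(x):
            c = sum(cost[f] * x[f] for f in features)
            if best is None or c < best[0]:
                best = (c, x)
    return best

c, config = cheapest_product()   # cheapest valid product: root + cli
```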

  7. Grid-Optimization Program for Photovoltaic Cells

    Science.gov (United States)

    Daniel, R. E.; Lee, T. S.

    1986-01-01

    CELLOPT program developed to assist in designing grid pattern of current-conducting material on photovoltaic cell. Analyzes parasitic resistance losses and shadow loss associated with metallized grid pattern on both round and rectangular solar cells. Though performs sensitivity studies, used primarily to optimize grid design in terms of bus bar and grid lines by minimizing power loss. CELLOPT written in APL.

  8. Programmed Evolution for Optimization of Orthogonal Metabolic Output in Bacteria

    Science.gov (United States)

    Eckdahl, Todd T.; Campbell, A. Malcolm; Heyer, Laurie J.; Poet, Jeffrey L.; Blauch, David N.; Snyder, Nicole L.; Atchley, Dustin T.; Baker, Erich J.; Brown, Micah; Brunner, Elizabeth C.; Callen, Sean A.; Campbell, Jesse S.; Carr, Caleb J.; Carr, David R.; Chadinha, Spencer A.; Chester, Grace I.; Chester, Josh; Clarkson, Ben R.; Cochran, Kelly E.; Doherty, Shannon E.; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M.; Evans, Rebecca A.; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L.; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L.; Keffeler, Erica C.; Lantz, Andrew J.; Lim, Jonathan N.; McGuire, Erin P.; Moore, Alexander K.; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A.; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E.; Polpityaarachchige, Sachith; Quaney, Michael J.; Slattery, Abagael; Smith, Kathryn E.; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J.; Whitesides, E. Tucker

    2015-01-01

    Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields – evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy

  9. Programmed evolution for optimization of orthogonal metabolic output in bacteria.

    Directory of Open Access Journals (Sweden)

    Todd T Eckdahl

    Full Text Available Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields - evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. 
Programmed Evolution could be used for applications in

  10. Optimizing antibiotic selection in treating COPD exacerbations

    Directory of Open Access Journals (Sweden)

    Attiya Siddiqi

    2008-03-01

    Full Text Available Attiya Siddiqi, Sanjay Sethi; Division of Pulmonary, Critical Care and Sleep Medicine, Department of Medicine, Veterans Affairs Western New York Health Care System and University of Buffalo, State University of New York, Buffalo, New York, USA. Abstract: Our understanding of the etiology, pathogenesis and consequences of acute exacerbations of chronic obstructive pulmonary disease (COPD has increased substantially in the last decade. Several new lines of evidence demonstrate that bacterial isolation from sputum during acute exacerbation in many instances reflects a cause-effect relationship. Placebo-controlled antibiotic trials in exacerbations of COPD demonstrate significant clinical benefits of antibiotic treatment in moderate and severe episodes. However, in the multitude of antibiotic comparison trials, the choice of antibiotics does not appear to affect the clinical outcome, which can be explained by several methodological limitations of these trials. Recently, comparison trials with nontraditional end-points have shown differences among antibiotics in the treatment of exacerbations of COPD. Observational studies that have examined clinical outcome of exacerbations have repeatedly demonstrated certain clinical characteristics to be associated with treatment failure or early relapse. Optimal antibiotic selection for exacerbations has therefore incorporated quantifying the risk for a poor outcome of the exacerbation and choosing antibiotics differently for low risk and high risk patients, reserving the broader spectrum drugs for the high risk patients. Though improved outcomes in exacerbations with antibiotic choice based on such risk stratification has not yet been demonstrated in prospective controlled trials, this approach takes into account concerns of disease heterogeneity, antibiotic resistance and judicious antibiotic use in exacerbations. Keywords: COPD, exacerbation, bronchitis, antibiotics

  11. optimal selection of hydraulic turbines for small hydro electric power

    African Journals Online (AJOL)

    eobe

    Keywords: optimal selection, SHP turbine, flow duration curve, energy efficiency, annual capacity factor.

  12. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing RFID transponder read range. This is done by optimization of the magnetic field intensity at the transponder location and by optimization of the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e., systems working in the near electromagnetic field.

  13. Defense Acquisitions: Assessments of Selected Weapon Programs

    Science.gov (United States)

    2017-03-01

    Figure 17: Examples of Knowledge Scorecards (GAO-17-333SP, Assessments of Selected Weapon Programs). Pursuant to a ... had direct access to the USD AT&L and other senior acquisition officials, and some approval authorities were delegated to lower levels. For example ...

  14. Pareto optimization in algebraic dynamic programming.

    Science.gov (United States)

    Saule, Cédric; Giegerich, Robert

    2015-01-01

    Pareto optimization combines independent objectives by computing the Pareto front of its search space, defined as the set of all solutions for which no other candidate solution scores better under all objectives. This gives, in a precise sense, better information than an artificial amalgamation of different scores into a single objective, but is more costly to compute. Pareto optimization naturally occurs with genetic algorithms, albeit in a heuristic fashion. Non-heuristic Pareto optimization so far has been used only with a few applications in bioinformatics. We study exact Pareto optimization for two objectives in a dynamic programming framework. We define a binary Pareto product operator [Formula: see text] on arbitrary scoring schemes. Independent of a particular algorithm, we prove that for two scoring schemes A and B used in dynamic programming, the scoring scheme [Formula: see text] correctly performs Pareto optimization over the same search space. We study different implementations of the Pareto operator with respect to their asymptotic and empirical efficiency. Without artificial amalgamation of objectives, and with no heuristics involved, Pareto optimization is faster than computing the same number of answers separately for each objective. For RNA structure prediction under the minimum free energy versus the maximum expected accuracy model, we show that the empirical size of the Pareto front remains within reasonable bounds. Pareto optimization lends itself to the comparative investigation of the behavior of two alternative scoring schemes for the same purpose. For the above scoring schemes, we observe that the Pareto front can be seen as a composition of a few macrostates, each consisting of several microstates that differ in the same limited way. We also study the relationship between abstract shape analysis and the Pareto front, and find that they extract information of a different nature from the folding space and can be meaningfully combined.
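The basic operation behind the Pareto product operator — keeping exactly the candidates for which no other candidate scores at least as well under both objectives and strictly better under one — can be sketched in a few lines. The points below are invented stand-ins for, say, (free-energy score, expected-accuracy score) pairs, with both objectives maximized:

```python
# Pareto front of a finite candidate set under two maximized objectives:
# a point p is dominated if some other point q is >= p in both coordinates
# (and differs from p); the front is the set of non-dominated points.
def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(1, 9), (3, 7), (5, 5), (4, 6), (2, 2)]
front = pareto_front(candidates)   # (2, 2) is dominated by (3, 7)
```

This quadratic filter is the naive version; the paper's point is precisely that embedding the operator inside dynamic programming beats computing and filtering candidates separately per objective.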

  15. Counselor Education Abroad: Selected Programs

    Science.gov (United States)

    Donn, Patsy A.; Hollis, Joseph W.

    1972-01-01

    This article discusses the current status of counselor education programs being operated for the benefit of military personnel and military dependents abroad. A major issue examined is the apparent inaccuracy of the stereotype of the professional military man as an individual unable to learn or present facilitative dimensions. (Author)

  16. Optimization of the annual construction program solutions

    Directory of Open Access Journals (Sweden)

    Oleinik Pavel

    2017-01-01

    Full Text Available The article considers possible optimization solutions in scheduling while forming the annual production programs of construction-complex organizations. The optimization instrument is represented as a two-component system. As a fundamentally new approach in the first block of the annual program solutions, the authors propose to use a scientifically grounded methodology for determining the scope of work that can be transferred to a subcontractor without the risk of the general contractor losing management control over the construction site. For this purpose, a special indicator characterizing the activity of the general construction organization is introduced: the coefficient of construction production management. In the second block, principal methods for forming calendar plans for the fulfillment of the critical work effort by the leading stream are proposed, depending on the intensity characteristic.

  17. Markdown Optimization via Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Coşgun

    2013-02-01

    Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having a significant cross-price elasticity among each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights on how each product's price affects the markdown policy.
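For a single product the markdown problem is a small exact DP over (week, inventory, current price), with prices restricted to move down a ladder; it is the joint, cross-elastic product groups that blow this state space up and motivate approximate DP. All prices and demand numbers below are invented:

```python
# Exact single-product markdown DP (illustrative). State: (week, stock,
# index into the price ladder); the price may only move down the ladder.
from functools import lru_cache

PRICES = (100, 80, 60)             # allowed price ladder
DEMAND = {100: 2, 80: 4, 60: 7}    # deterministic units sold per week at price
WEEKS = 4
SALVAGE = 10                       # value per unit left at the horizon

@lru_cache(maxsize=None)
def best_revenue(week, stock, price_idx):
    if stock == 0:
        return 0.0
    if week == WEEKS:
        return SALVAGE * stock
    best = float("-inf")
    for idx in range(price_idx, len(PRICES)):   # markdowns only
        price = PRICES[idx]
        sold = min(stock, DEMAND[price])
        best = max(best,
                   price * sold + best_revenue(week + 1, stock - sold, idx))
    return best

total = best_revenue(0, 12, 0)   # optimal plan: two weeks at 100, then 80
```

With several substitutable products the state would have to carry every product's stock and price, which is the combinatorial explosion the paper sidesteps with approximation.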

  18. Industrial cogeneration optimization program. Final report, September 1979

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Jerry; McWhinney, Jr., Robert T.

    1980-01-01

    This study program is part of the DOE Integrated Industry Cogeneration Program to optimize, evaluate, and demonstrate cogeneration systems, with direct participation of the industries most affected. One objective is to characterize five major energy-intensive industries with respect to their energy-use profiles. The industries are: petroleum refining and related industries, textile mill products, paper and allied products, chemicals and allied products, and food and kindred products. Another objective is to select optimum cogeneration systems for site-specific reference case plants in terms of maximum energy savings subject to given return on investment hurdle rates. Analyses were made that define the range of optimal cogeneration systems for each reference-case plant considering technology applicability, economic factors, and energy savings by type of fuel. This study also provides guidance to other parts of the program through information developed with regard to component development requirements, institutional and regulatory barriers, as well as fuel use and environmental considerations. (MCW)

  19. Software for selection of optimal layouts of fast reactors

    International Nuclear Information System (INIS)

    Geraskin, N.I.; Kuz'min, A.M.; Morin, D.V.

    1983-01-01

    A program package for the calculation and optimization of a two-dimensional cylindrical fast reactor consisting of two axial layers, with up to 10 zones of different composition in each layer, is described. The search for optimal parameters is performed by the successive-linearization method, based on small-perturbation theory and linear programming. The package is written for the BESM-6 computer in FORTRAN.

  20. Probabilistic methods for maintenance program optimization

    International Nuclear Information System (INIS)

    Liming, J.K.; Smith, M.J.; Gekler, W.C.

    1989-01-01

    In today's regulatory and economic environments, it is more important than ever that managers, engineers, and plant staff join together in developing and implementing effective management plans for safety and economic risk. This need applies to both power generating stations and other process facilities. One of the most critical parts of these management plans is the development and continuous enhancement of a maintenance program that optimizes plant or facility safety and profitability. The ultimate objective is to maximize the potential for station or facility success, usually measured in terms of projected financial profitability, while meeting or exceeding meaningful and reasonable safety goals, usually measured in terms of projected damage or consequence frequencies. This paper describes the use of the latest concepts in developing and evaluating maintenance programs to achieve maintenance program optimization (MPO). These concepts are based on significant field experience gained through the integration and application of fundamentals developed for industry and Electric Power Research Institute (EPRI)-sponsored projects on preventive maintenance (PM) program development and reliability-centered maintenance (RCM)

  1. Selective Placement Program Coordinator (SPPC) Directory

    Data.gov (United States)

    Office of Personnel Management — List of the Selective Placement Program Coordinators (SPPC) in Federal agencies, updated as needed. Users can filter the list by choosing a state and/or agency name.

  2. Hydro-economic optimization model for selecting least cost programs of measures at the river basin scale. Application to the implementation of the EU Water Framework Directive on the Orb river basin (France).

    Science.gov (United States)

    Girard, C.; Rinaudo, J. D.; Caballero, Y.; Pulido-Velazquez, M.

    2012-04-01

    This article presents a case study which illustrates how an integrated hydro-economic model can be applied to optimize a program of measures (PoM) at the river basin level. By integrating hydrological, environmental and economic aspects at a local scale, the model is useful for assisting water policy decision-making. The model identifies the least-cost PoM to satisfy the predicted 2030 urban and agricultural water demands while meeting the in-stream flow constraints. The PoM mainly consists of water saving and conservation measures at the different demands. It also includes measures mobilizing additional water resources from groundwater, inter-basin transfers and improved reservoir operating rules. The flow constraints are defined to ensure a good status of the surface water bodies, as defined by the EU Water Framework Directive (WFD). The case study is conducted in the Orb river basin, a coastal basin in Southern France. It faces significant population growth, changes in agricultural patterns and limited water resources, and is classified at risk of not meeting the good status by 2015. Urban demand is calculated by type of water user at municipality level in 2006 and projected to 2030 with user-specific scenarios. Agricultural water demand is estimated at irrigation district (canton) level in 2000 and projected to 2030 under three agricultural development scenarios. The total annual cost of each measure has been calculated taking into account operation and maintenance costs as well as investment costs. A first optimization model was developed using GAMS, the General Algebraic Modeling System, applying Mixed Integer Linear Programming. The optimization is run to select the set of measures that minimizes the objective function, defined as the total cost of the applied measures, while meeting the demands and environmental constraints (minimum in-stream flows) for the 2030 time horizon. The first result is an optimized PoM.
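The least-cost selection logic the abstract describes can be sketched as a tiny 0/1 optimization. The measures, costs and savings below are invented placeholders, not figures from the Orb basin study, and exhaustive search stands in for the MILP solver:

```python
from itertools import product

# Hypothetical measures: (annual cost, water saved in hm3/yr).
# All numbers are illustrative, not taken from the Orb basin study.
measures = {
    "leak_repair":     (2.0, 1.5),
    "drip_irrigation": (3.5, 2.8),
    "reservoir_rules": (1.0, 0.9),
    "transfer":        (6.0, 4.0),
}
required_saving = 4.0  # hm3/yr needed to meet 2030 demands + in-stream flows

best = None
names = list(measures)
for choice in product([0, 1], repeat=len(names)):  # exhaustive 0/1 search
    cost = sum(measures[n][0] for n, x in zip(names, choice) if x)
    saving = sum(measures[n][1] for n, x in zip(names, choice) if x)
    if saving >= required_saving and (best is None or cost < best[0]):
        best = (cost, [n for n, x in zip(names, choice) if x])

print(best)
```

A real PoM model would add demand nodes, monthly flow constraints and continuous variables, which is where a MILP solver replaces brute force.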

  3. Epidemiologic research program: Selected bibliography

    International Nuclear Information System (INIS)

    1993-05-01

    This bibliography is a current listing of scientific reports from epidemiologic and related activities sponsored by the Department of Energy. The Office of Epidemiology and Health Surveillance now is the departmental focal point for these activities and any others relating to the study of human health effects. The Office's mission is evolving to encompass the new role of the Department in environmental restoration, weapons dismantlement and nuclear material storage, and development of new energy technologies. Publications in these areas will be included in future editions of the bibliography. The present edition brings the listing up to date, and should facilitate access to specific reports. The program has been divided into several general areas of activity: the Radiation Effects Research Foundation, which supports studies of survivors of the atomic weapons in Hiroshima and Nagasaki; mortality and morbidity studies of DOE workers; studies on internally deposited alpha emitters; medical/histologic studies; studies on the genetic aspects of radiation damage; community health surveillance studies; and the development of computational techniques and of databases to make the results as widely useful as possible

  4. Exploration of automatic optimization for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez; Khan, Ayaz ul Hassan

    2012-01-01

    Graphic processing units (GPUs) are gaining ground in high-performance computing. CUDA (an extension to C) is the most widely used parallel programming framework for general-purpose GPU computation. However, the task of writing an optimized CUDA program is complex even for experts. We present a method for restructuring loops into optimized CUDA kernels based on a 3-step algorithm: loop tiling, coalesced memory access, and resource optimization. We also establish the relationships between the influencing parameters and propose a method for finding possible tiling solutions with coalesced memory access that best meet the identified constraints. We also present a simplified algorithm for restructuring loops and rewriting them as an efficient CUDA kernel. The execution model of the synthesized kernel uniformly distributes the kernel threads to keep all cores busy while exploiting data locality through a coalesced access pattern to amortize the long latency of secondary memory. In the evaluation, we implement some simple applications using the proposed restructuring strategy and evaluate the performance in terms of execution time and GPU throughput. © 2012 IEEE.

  6. Project STOP (Spectral Thermal Optimization Program)

    Science.gov (United States)

    Goldhammer, L. J.; Opjorden, R. W.; Goodelle, G. S.; Powe, J. S.

    1977-01-01

    The spectral thermal optimization of solar cell configurations for various solar panel applications is considered. The method of optimization depends upon varying the solar cell configuration's optical characteristics to minimize panel temperatures, maximize power output and decrease the power delta from beginning of life to end of life. Four areas of primary investigation are: (1) testing and evaluation of ultraviolet resistant coverslide adhesives, primarily FEP as an adhesive; (2) examination of solar cell absolute spectral response and corresponding cell manufacturing processes that affect it; (3) experimental work with solar cell manufacturing processes that vary cell reflectance (solar absorptance); and (4) experimental and theoretical studies with various coverslide filter designs, mainly a red rejection filter. The Hughes' solar array prediction program has been modified to aid in evaluating the effect of each of the above four areas on the output of a solar panel in orbit.

  7. OPTIMIZING ANTIMICROBIAL PHARMACODYNAMICS: A GUIDE FOR YOUR STEWARDSHIP PROGRAM

    Directory of Open Access Journals (Sweden)

    Joseph L. Kuti, PharmD

    2016-09-01

    Full Text Available Pharmacodynamic concepts should be applied to optimize antibiotic dosing regimens, particularly in the face of multidrug-resistant bacterial infections. Although the pharmacodynamics of most antibiotic classes used in the hospital setting are well described, guidance on how to select regimens and implement them in an antimicrobial stewardship program at one's institution is more limited. The role of the antibiotic MIC is paramount in understanding which regimens might benefit from implementation as a protocol or use in individual patients. This review article outlines the pharmacodynamics of aminoglycosides, beta-lactams, fluoroquinolones, tigecycline, vancomycin, and polymyxins, with the goal of providing a basis for selecting an optimized antibiotic regimen in your hospital setting.

  8. Selection on Optimal Haploid Value Increases Genetic Gain and Preserves More Genetic Diversity Relative to Genomic Selection.

    Science.gov (United States)

    Daetwyler, Hans D; Hayden, Matthew J; Spangenberg, German C; Hayes, Ben J

    2015-08-01

    Doubled haploids are routinely created and phenotypically selected in plant breeding programs to accelerate the breeding cycle. Genomic selection, which makes use of both phenotypes and genotypes, has been shown to further improve genetic gain through prediction of performance before or without phenotypic characterization of novel germplasm. Additional opportunities exist to combine genomic prediction methods with the creation of doubled haploids. Here we propose an extension to genomic selection, optimal haploid value (OHV) selection, which predicts the best doubled haploid that can be produced from a segregating plant. This method focuses selection on the haplotype and optimizes the breeding program toward its end goal of generating an elite fixed line. We rigorously tested OHV selection breeding programs, using computer simulation, and show that it results in up to 0.6 standard deviations more genetic gain than genomic selection. At the same time, OHV selection preserved a substantially greater amount of genetic diversity in the population than genomic selection, which is important to achieve long-term genetic gain in breeding populations. Copyright © 2015 by the Genetics Society of America.
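The core of OHV selection, picking the better haplotype within each segment and doubling it, can be illustrated in a few lines. All marker effects, haplotypes and segment boundaries below are invented for illustration:

```python
# Sketch of optimal haploid value (OHV) vs. genomic estimated breeding
# value (GEBV) for one plant; marker effects and genotypes are made up.
effects = [0.4, -0.1, 0.3, 0.2, -0.2, 0.5]   # additive marker effects
hap1    = [1, 0, 1, 0, 1, 1]                  # phased haplotype alleles
hap2    = [0, 1, 1, 1, 0, 0]
segments = [(0, 3), (3, 6)]                   # haplotype blocks (start, stop)

def seg_value(hap, start, stop):
    return sum(e * a for e, a in zip(effects[start:stop], hap[start:stop]))

# GEBV: predicted value of the plant itself (sum over both haplotypes)
gebv = seg_value(hap1, 0, 6) + seg_value(hap2, 0, 6)

# OHV: the best doubled haploid derivable from this plant -- take the
# better haplotype in every segment, then double it (DH lines are homozygous)
ohv = 2 * sum(max(seg_value(hap1, s, t), seg_value(hap2, s, t))
              for s, t in segments)

print(gebv, ohv)
```

Selecting parents on `ohv` rather than `gebv` is what focuses the breeding program on the best extractable fixed line rather than on the segregating plant's own performance.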

  9. Optimal materials selection for bimaterial piezoelectric microactuators

    OpenAIRE

    Srinivasan, P.; Spearing, S.M.

    2008-01-01

    Piezoelectric actuation is one of the commonly employed actuation schemes in microsystems. This paper focuses on identifying and ranking promising active material/substrate combinations for bimaterial piezoelectric (BPE) microactuators based on their performance. The mechanics of BPE structures following simple beam theory assumptions available in the literature are applied to evolve critical performance metrics which govern the materials selection process. Contours of equal performance are p...

  10. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  11. Investigating the Optimal Management Strategy for a Healthcare Facility Maintenance Program

    National Research Council Canada - National Science Library

    Gaillard, Daria

    2004-01-01

    ...: strategic partnering with an equipment management firm. The objective of this study is to create a decision-model for selecting the optimal management strategy for a healthcare organization's facility maintenance program...

  12. Waste system optimization - can diameter selection

    International Nuclear Information System (INIS)

    Ashline, R.C.

    1983-08-01

    The purpose of the waste system optimization study is to define in terms of cost incentives the preferred waste package for HLW which has been converted to glass at a commercial reprocessing plant. The Waste Management Economic Model (WMEM) was employed to analyze the effect of varying important design parameters on the overall net present cost of waste handling. The parameters found to have the greatest effect on the calculated overall net present cost were can diameter, repository type (salt, basalt/bentonite, or welded tuff), allowable areal heat loading, and the repository availability date. The overall net present cost of a waste handling option is calculated over a 20-year operating period. It includes the total capital and operating costs associated with high-level and intermediate-level liquid waste storage, liquid waste solidification, hulls storage and compaction, and general process trash handling. It also includes the cask leasing and transportation costs associated with each waste type and the waste repository disposal costs. The waste repository disposal costs used in WMEM for this analysis were obtained from Battelle Pacific Northwest Laboratories and their RECON model. 2 figures, 2 tables

  13. Computer program for optimal BWR control rod programming

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Carmody, J.M.

    1995-01-01

    A fully automated computer program has been developed for designing optimal control rod (CR) patterns for boiling water reactors (BWRs). The new program, called OCTOPUS-3, is based on the OCTOPUS code and employs SIMULATE-3 (Ref. 2) for the analysis. There are three aspects of OCTOPUS-3 that make it successful for use at PECO Energy: it incorporates a new feasibility algorithm that makes the CR design meet all constraints, it has been coupled to a Bourne shell program to allow the user to run the code interactively without the need for a manual, and it develops a low axial peak to extend the cycle. For PECO Energy Co.'s Limerick units it increased the energy output by 1 to 2% over the traditional PECO Energy design. The objective of the optimization in OCTOPUS-3 is to approximate a very low axially peaked target power distribution while maintaining criticality, keeping the nodal and assembly peaks below the allowed maximum, and meeting the other constraints. The user-specified input for each exposure point includes: CR groups allowed to move, target k-eff, and amount of core flow. The OCTOPUS-3 code uses the CR pattern from the previous step as the initial guess unless indicated otherwise

  14. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that helps people carry out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimal investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
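As a rough illustration of the kind of LP such a GAMS model expresses, here is a two-variable generation-investment toy solved in pure Python by enumerating constraint-intersection vertices; all costs, demands and limits are invented:

```python
from itertools import combinations

# Toy generation-investment LP (illustrative numbers only):
#   minimise 50*coal + 30*wind        (investment cost)
#   s.t.     coal + wind >= 100       (capacity must cover peak demand)
#            wind <= 60               (siting limit on wind)
#            coal >= 0, wind >= 0
# Each constraint is stored as a*x + b*y <= c:
cons = [(-1.0, -1.0, -100.0),  # coal + wind >= 100
        (0.0, 1.0, 60.0),      # wind <= 60
        (-1.0, 0.0, 0.0),      # coal >= 0
        (0.0, -1.0, 0.0)]      # wind >= 0
cost = lambda x, y: 50 * x + 30 * y

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# An LP optimum lies at a vertex: enumerate pairwise boundary
# intersections, keep the feasible ones, take the cheapest.
vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None
            and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]
best = min(vertices, key=lambda p: cost(*p))
print(best, cost(*best))
```

A GAMS model would state the same objective and constraints declaratively and hand them to a solver; vertex enumeration only works for tiny instances like this one.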

  15. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, the decision becomes multiobjective in nature. In this paper, a vendor selection problem has been formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and orders allocated to them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing on-time deliveries, subject to suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set has the ability to handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software TORA. The advantage of IFO is that it gives better results than fuzzy/crisp optimization. The proposed approach is explained by a numerical example.

  16. Optimal Diet Planning for Eczema Patient Using Integer Programming

    Science.gov (United States)

    Zhen Sheng, Low; Sufahani, Suliadi

    2018-04-01

    Human diet planning is conducted by choosing appropriate food items that fulfill the nutritional requirements of the diet formulation. This paper discusses the application of integer programming to build a mathematical model of diet planning for eczema patients. The model developed is used to solve the diet problem of eczema patients from the young age group. Integer programming is a scientific approach to selecting suitable food items which seeks to minimize cost under conditions of meeting desired nutrient quantities, avoiding food allergens and getting certain foods into the diet that bring relief to the eczema condition. This paper illustrates that the integer programming approach is able to produce an optimal and feasible solution to the diet problem of eczema patients.
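A minimal sketch of such a diet model follows, with invented foods, costs, nutrient targets and allergen flags; exhaustive search over integer serving counts stands in for a proper integer-programming solver:

```python
from itertools import product

# Illustrative food table (per serving): cost, protein g, vitamin C mg,
# and whether it is a common eczema allergen. All values are made up.
foods = {
    "oats":    (0.5, 5, 0, False),
    "egg":     (0.3, 6, 0, True),    # excluded below: allergen
    "spinach": (0.7, 3, 28, False),
    "salmon":  (2.0, 20, 0, False),
}
need_protein, need_vitc = 25, 50
allowed = [f for f, (_, _, _, allergen) in foods.items() if not allergen]

best = None
# Integer decision variables: 0-3 servings of each allowed food.
for servings in product(range(4), repeat=len(allowed)):
    cost = prot = vitc = 0.0
    for f, s in zip(allowed, servings):
        c, p, v, _ = foods[f]
        cost += c * s; prot += p * s; vitc += v * s
    if prot >= need_protein and vitc >= need_vitc:
        if best is None or cost < best[0]:
            best = (cost, dict(zip(allowed, servings)))

print(best)
```

A real diet model has dozens of foods and nutrients, which is exactly when a branch-and-bound integer-programming solver replaces this brute-force loop.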

  17. A program package for solving linear optimization problems

    International Nuclear Information System (INIS)

    Horikami, Kunihiko; Fujimura, Toichiro; Nakahara, Yasuaki

    1980-09-01

    Seven computer programs for the solution of linear, integer and quadratic programming (four programs for linear programming, one for integer programming and two for quadratic programming) have been prepared and tested on FACOM M200 computer, and auxiliary programs have been written to make it easy to use the optimization program package. The characteristics of each program are explained and the detailed input/output descriptions are given in order to let users know how to use them. (author)

  18. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    Science.gov (United States)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection with two objective functions, maximizing profit and minimizing cost, under limitations on resource availability and time, so that resources must be allocated among the projects. These resources are human resources, machine resources, and raw material resources. A further consideration is that the predetermined budget must not be exceeded. The problem can thus be formulated mathematically as a multi-objective function with constraints that must be fulfilled. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for the selection of the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method to simplify the project selection process in a large scope.
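A small genetic algorithm for 0/1 project selection in this spirit might look as follows; the profit, cost and labour figures are invented, and collapsing the objectives into one aggregated fitness is a simplification of a proper multi-objective treatment:

```python
import random

random.seed(42)

# Toy project-selection GA: pick a 0/1 portfolio maximising profit - cost
# while staying inside a labour budget. All figures are illustrative.
profit = [12, 10, 8, 11, 6]
cost   = [ 5,  4, 3,  6, 2]
labour = [ 3,  2, 2,  4, 1]
LABOUR_CAP = 8

def fitness(bits):
    if sum(l * b for l, b in zip(labour, bits)) > LABOUR_CAP:
        return -1  # infeasible portfolios are penalised out
    return sum((p - c) * b for p, c, b in zip(profit, cost, bits))

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in profit] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                   # elitism keeps the best found so far
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(10)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

A multi-objective variant (e.g. NSGA-II style) would keep a Pareto front of (profit, cost) pairs instead of the single penalised score used here.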

  19. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    With more and more importance of correctly selecting partners in supply chain of agricultural enterprises, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chain. Secondly, a heuristic met...

  20. Optimal infrastructure selection to boost regional sustainable economy

    OpenAIRE

    Martín Utrillas, Manuel Guzmán; Juan-Garcia, F.; Cantó Perelló, Julián; Curiel Esparza, Jorge

    2015-01-01

    The role of infrastructures in boosting the economic growth of regions is widely recognized. In many cases, however, an infrastructure is selected for subjective reasons. Selection of the optimal infrastructure for the sustainable economic development of a region should be based on objective and well-founded reasons, not only economic but also environmental and social. This paper develops such a selection through a hybrid method based on Delphi, the analytic hierarchy process (AHP), and VIKOR (from Se...

  1. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore that equality-of-scale and location tests can be transformed into comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal-scale bandwidth selection (the “rule of thumb”) and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density-estimation bandwidth selection methods in estimating density functionals.
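For flavor, the classic normal-scale ("rule of thumb") bandwidth for ordinary KDE, which a normal-scale selector of this kind generalizes, is h = 1.06 * sigma * n^(-1/5); the KDFE-specific constant in the paper would differ:

```python
import math

def silverman_bandwidth(xs):
    """Normal-scale ('rule of thumb') KDE bandwidth: 1.06 * sigma * n**(-1/5).

    This is the classic KDE rule; a functional estimator like the KDFE
    would use a different constant derived from its own MSE expansion.
    """
    n = len(xs)
    mean = sum(xs) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sigma * n ** (-1 / 5)

xs = [1.0, 2.0, 2.5, 3.0, 4.0, 5.5, 6.0, 7.5]
h = silverman_bandwidth(xs)
print(round(h, 4))
```

Both rule-of-thumb and plug-in selectors have this shape: a constant times a spread estimate times a power of the sample size, with the constant and power set by minimizing an asymptotic MSE.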

  2. Optimizing the hydraulic program of cementing casing strings

    Energy Technology Data Exchange (ETDEWEB)

    Novakovic, M

    1984-01-01

    A technique is described for calculating the optimal parameters of the flow of plugging mud which takes into consideration the geometry of the annular space and the rheological characteristics of the muds. The optimization algorithm was illustrated by a block diagram. Examples are given for practical application of the optimization programs in production conditions. It is stressed that optimizing the hydraulic cementing program is effective if other technical-technological problems in cementing casing strings have been resolved.

  3. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose a fixed number of genes. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. Discrete biogeography based optimization, which we call DBBO, is then constructed by integrating the discrete migration and mutation models. Finally, the DBBO method is used for feature selection, with three classifiers used for classification under 10-fold cross-validation. In order to show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. In comparison with genetic algorithm, particle swarm optimization, differential evolution and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
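The filter stage of such a pipeline can be sketched with the plain Fisher score, used here as a simpler stand-in for the paper's Fisher-Markov selector; the expression values and gene names are invented:

```python
# First stage of a filter-then-search gene-selection pipeline, sketched
# with the plain Fisher score (between-class separation over within-class
# spread) as a simpler stand-in for the Fisher-Markov selector.
def fisher_score(values, labels):
    classes = sorted(set(labels))
    overall = sum(values) / len(values)
    num = den = 0.0
    for c in classes:
        vals = [v for v, l in zip(values, labels) if l == c]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        num += len(vals) * (mean - overall) ** 2
        den += len(vals) * var
    return num / den if den else float("inf")

labels = [0, 0, 0, 1, 1, 1]          # two sample classes
genes = {                            # invented expression profiles
    "gene_a": [1.0, 1.2, 0.9, 3.0, 3.1, 2.8],   # separates classes well
    "gene_b": [2.0, 0.5, 3.0, 2.1, 0.4, 2.9],   # mostly noise
    "gene_c": [0.2, 0.1, 0.3, 0.9, 1.1, 1.0],
}
ranked = sorted(genes, key=lambda g: fisher_score(genes[g], labels),
                reverse=True)
print(ranked)
```

In the full method, the top-ranked genes from a filter like this become the search space over which the discrete migration and mutation operators evolve candidate subsets.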

  4. Program Characteristics Influencing Allopathic Students' Residency Selection.

    Science.gov (United States)

    Stillman, Michael D; Miller, Karen Hughes; Ziegler, Craig H; Upadhyay, Ashish; Mitchell, Charlene K

    2016-04-01

    Medical students must consider many overt variables when entering the National Resident Matching Program. However, changes with the single graduate medical education accreditation system have caused a gap in knowledge about more subtle considerations, including what, if any, influence the presence of osteopathic physician (ie, DO) and international medical graduate (IMG) house officers has on allopathic students' residency program preferences. Program directors and selection committee members may assume students' implicit bias without substantiating evidence. To reexamine which program characteristics affect US-trained allopathic medical students' residency selection, and to determine whether the presence of DO and IMG house officers affects the program choices of allopathic medical students. Fourth-year medical students from 4 allopathic medical schools completed an online survey. The Pearson χ² statistic was used to compare demographic and program-specific traits that influence ranking decisions and to determine whether school type (private vs public), valuing a residency program's prestige, or interest in a competitive specialty dictated results. Qualitative data were analyzed using the Pandit variation of the Glaser and Strauss constant comparison. Surveys were completed by 323 of 577 students (56%). Students from private vs public institutions were more likely to value a program's prestige (160 [93%] vs 99 [72%]; P<.001) and research opportunities (114 [66%] vs 57 [42%]; P<.001), and they were less likely to consider their prospects of being accepted (98 [57%] vs 111 [81%]; P<.001). A total of 33 (10%) and 52 (16%) students reported that the presence of DO or IMG trainees, respectively, would influence their final residency selection, and these percentages were largely unchanged among students interested in programs' prestige or in entering a competitive specialty. Open-ended comments were generally optimistic about diversification of the physician

  5. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-AUniversity (Korea); Wang, Yong Peel [Canterbury Univeristy (New Zealand)

    1999-06-01

    In this paper, a method is suggested for selecting the optimal parameters of a power system stabilizer (PSS) robust to low-frequency oscillation in a power system, using a real-variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for power system stabilizers with one lead compensator and with two lead compensators. The frequency response characteristics of the PSS, the system eigenvalue criterion and the dynamic characteristics were also considered under normal load and heavy load, which proved the usefulness of RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  6. SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression

    OpenAIRE

    Flores, Salvador

    2015-01-01

    This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a ...

  7. Dermatology Residency Selection Criteria with an Emphasis on Program Characteristics: A National Program Director Survey

    Directory of Open Access Journals (Sweden)

    Farzam Gorouhi

    2014-01-01

    Full Text Available Background. Dermatology residency programs are relatively diverse in their resident selection process. The authors investigated the importance of 25 dermatology residency selection criteria focusing on differences in program directors’ (PDs’ perception based on specific program demographics. Methods. This cross-sectional nationwide observational survey utilized a 41-item questionnaire that was developed by literature search, brainstorming sessions, and online expert reviews. The data were analyzed utilizing the reliability test, two-step clustering, and K-means methods as well as other methods. The main purpose of this study was to investigate the differences in PDs’ perception regarding the importance of the selection criteria based on program demographics. Results. Ninety-five out of 114 PDs (83.3% responded to the survey. The top five criteria for dermatology residency selection were interview, letters of recommendation, United States Medical Licensing Examination Step I scores, medical school transcripts, and clinical rotations. The following criteria were preferentially ranked based on different program characteristics: “advanced degrees,” “interest in academics,” “reputation of undergraduate and medical school,” “prior unsuccessful attempts to match,” and “number of publications.” Conclusions. Our survey provides up-to-date factual data on dermatology PDs’ perception in this regard. Dermatology residency programs may find the reported data useful in further optimizing their residency selection process.

  8. Programs To Optimize Spacecraft And Aircraft Trajectories

    Science.gov (United States)

    Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.

    1994-01-01

    POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).

  9. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, the most phenotypic variation captured by a sampling method in the TRS is desirable. The wheat dataset showed mild population structure, and CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
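The training-set optimization criteria compared above can be illustrated with a deliberately simplified stand-in: a greedy selection that maximizes the mean genomic relationship between the training set and the test set, one of the properties the abstract reports CDmean tends to maximize. The kinship matrix and individuals below are toy values, not data from the study.

```python
def greedy_training_set(kinship, candidates, test_set, size):
    """Greedily pick `size` candidates whose mean kinship to the test
    set is highest -- a crude stand-in for CDmean-style criteria."""
    def mean_rel(ind):
        return sum(kinship[ind][t] for t in test_set) / len(test_set)
    return sorted(candidates, key=mean_rel, reverse=True)[:size]

# Toy kinship matrix for five individuals (symmetric, 1.0 on the diagonal);
# individuals 0-1 and 2-4 form two sub-populations.
K = [
    [1.0, 0.8, 0.1, 0.2, 0.1],
    [0.8, 1.0, 0.2, 0.1, 0.1],
    [0.1, 0.2, 1.0, 0.7, 0.6],
    [0.2, 0.1, 0.7, 1.0, 0.5],
    [0.1, 0.1, 0.6, 0.5, 1.0],
]
test = [4]                         # individual 4 is the prediction set
trs = greedy_training_set(K, [0, 1, 2, 3], test, 2)
print(trs)  # the two candidates most related to the test set
```

The greedy criterion here picks the candidates from the same sub-population as the test individual, mirroring the abstract's observation that maximizing the relationship between TRS and test set drives accuracy under population structure.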

  10. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  11. Optimal portfolio selection for general provisioning and terminal wealth problems

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2010-01-01

    In Dhaene et al. (2005), multiperiod portfolio selection problems are discussed, using an analytical approach to find optimal constant mix investment strategies in a provisioning or a savings context. In this paper we extend some of these results, investigating some specific, real-life situations.

  12. Optimal portfolio selection for general provisioning and terminal wealth problems

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2009-01-01

    In Dhaene et al. (2005), multiperiod portfolio selection problems are discussed, using an analytical approach to find optimal constant mix investment strategies in a provisioning or savings context. In this paper we extend some of these results, investigating some specific, real-life situations. The

  13. Selection and optimization of extracellular lipase production using ...

    African Journals Online (AJOL)

    The aim of this study was to isolate and select lipase-producing microorganisms originated from different substrates, as well as to optimize the production of microbial lipase by submerged fermentation under different nutrient conditions. Of the 40 microorganisms isolated, 39 showed a halo around the colonies and 4 were ...

  14. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact using an underlying interaction network structure and reach consensus in their opinions, while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high dimensional datasets show that the proposed algorithm outperforms the others.
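The mechanism described, encoding a feature subset as an agent's opinion and letting agents drift toward better-performing peers, can be sketched as follows. The fitness function, interaction rule, and parameters are illustrative stand-ins (a real application would score subsets with a classifier, as the paper does):

```python
import random

random.seed(0)

def fitness(subset, target):
    """Toy objective: agreement with a hidden 'informative' feature mask,
    a stand-in for a classifier's accuracy on the selected features."""
    return sum(s == t for s, t in zip(subset, target))

def opinion_search(n_features, target, n_agents=20, steps=200):
    """Agents hold binary opinions (feature subsets); in each interaction
    the lower-fitness agent drifts toward the better one's opinion, with
    occasional bit-flip mutation to escape local minima."""
    agents = [[random.randint(0, 1) for _ in range(n_features)]
              for _ in range(n_agents)]
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)      # interacting pair
        lo, hi = sorted((i, j), key=lambda a: fitness(agents[a], target))
        for k in range(n_features):
            if random.random() < 0.5:                 # partial adoption
                agents[lo][k] = agents[hi][k]
        if random.random() < 0.1:                     # mutation step
            agents[lo][random.randrange(n_features)] ^= 1
    return max(agents, key=lambda a: fitness(a, target))

target = [1, 0, 1, 1, 0, 0, 1, 0]   # hidden informative-feature mask
best = opinion_search(8, target)
```

A fully connected interaction network is assumed here for brevity; the paper uses an explicit network structure.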

  15. Comparison of Optimal Portfolios Selected by Multicriterial Model Using Absolute and Relative Criteria Values

    Directory of Open Access Journals (Sweden)

    Branka Marasović

    2009-03-01

    Full Text Available In this paper we select an optimal portfolio on the Croatian capital market by using multicriterial programming. In accordance with the modern portfolio theory, maximisation of returns at minimal risk should be the investment goal of any successful investor. However, contrary to the expectations of the modern portfolio theory, the tests carried out on a number of financial markets reveal the existence of other indicators important in portfolio selection. Considering the importance of variables other than return and risk, selection of the optimal portfolio becomes a multicriterial problem which should be solved by using the appropriate techniques. In order to select an optimal portfolio, absolute values of criteria, like return, risk, price to earning value ratio (P/E), price to book value ratio (P/B) and price to sale value ratio (P/S), are included in our multicriterial model. However, a problem might occur because the mean values of some criteria differ significantly across sectors, and financial managers emphasize that comparing the same criteria across different sectors could lead to wrong conclusions. In the second part of the paper, relative values of the previously stated criteria (in relation to the mean value of the sector) are included in the model for selecting the optimal portfolio. Furthermore, the paper shows that if relative values of criteria are included in the multicriterial model for selecting the optimal portfolio, the return in the subsequent period is considerably higher than if absolute values of the same criteria were used.
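The second approach described, expressing each criterion relative to its sector mean before combining criteria, can be sketched roughly as below. The tickers, criteria, and weights are illustrative, not taken from the paper:

```python
def sector_means(stocks, key):
    """Mean of one criterion per sector."""
    sums, counts = {}, {}
    for s in stocks.values():
        sec = s["sector"]
        sums[sec] = sums.get(sec, 0.0) + s[key]
        counts[sec] = counts.get(sec, 0) + 1
    return {sec: sums[sec] / counts[sec] for sec in sums}

def rank_stocks(stocks, weights):
    """Score each stock on criteria expressed relative to its sector mean
    (higher return good; lower risk and P/E good), then sort descending.
    A weighted sum stands in for the paper's multicriterial model."""
    means = {k: sector_means(stocks, k) for k in ("ret", "risk", "pe")}
    def score(name):
        s = stocks[name]
        rel = {k: s[k] / means[k][s["sector"]] for k in ("ret", "risk", "pe")}
        return (weights["ret"] * rel["ret"]
                - weights["risk"] * rel["risk"]
                - weights["pe"] * rel["pe"])
    return sorted(stocks, key=score, reverse=True)

stocks = {
    "AAA": {"sector": "bank", "ret": 0.12, "risk": 0.20, "pe": 10.0},
    "BBB": {"sector": "bank", "ret": 0.08, "risk": 0.25, "pe": 14.0},
    "CCC": {"sector": "tech", "ret": 0.15, "risk": 0.30, "pe": 30.0},
    "DDD": {"sector": "tech", "ret": 0.10, "risk": 0.28, "pe": 26.0},
}
order = rank_stocks(stocks, {"ret": 1.0, "risk": 0.5, "pe": 0.5})
```

Dividing by the sector mean keeps a tech stock's P/E of 30 from being penalized against a bank's P/E of 10, which is exactly the cross-sector distortion the paper warns about.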

  16. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights: • Self-adaptive Jaya algorithm is proposed for optimal design of thermal devices. • Optimization of heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented. • Results of the proposed algorithm are better than the other optimization techniques. • The proposed algorithm may be conveniently used for the optimization of other devices. - Abstract: The present study explores the use of an improved Jaya algorithm called self-adaptive Jaya algorithm for optimal design of selected thermal devices viz. heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. The researchers had attempted the same design problems in the past using niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved by using the self-adaptive Jaya algorithm are compared with those achieved by using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proves superior to the other optimization methods in terms of the results, computational effort and function evaluations.

  17. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    Science.gov (United States)

    2016-09-01

    Managing the Public Sector Research & Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization, by Jason A. Schwartz. The report describes how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost

  18. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
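The core idea of RGS, using the observed variable values of sampled signal events themselves as candidate cut points and scoring each candidate, can be sketched as follows. The two-variable rectangular cut and the s/sqrt(s+b) figure of merit are simplifying assumptions for illustration, not the enhanced cut types the abstract mentions:

```python
import math
import random

random.seed(1)

def rgs(signal, background, n_points=100):
    """Random grid search: each sampled signal event defines a candidate
    rectangular cut (keep events with x > cut_x and y > cut_y); score each
    candidate by the significance s / sqrt(s + b) and keep the best."""
    candidates = random.sample(signal, min(n_points, len(signal)))
    best_cut, best_sig = None, -1.0
    for cx, cy in candidates:
        s = sum(1 for x, y in signal if x > cx and y > cy)
        b = sum(1 for x, y in background if x > cx and y > cy)
        if s + b == 0:
            continue
        sig = s / math.sqrt(s + b)
        if sig > best_sig:
            best_cut, best_sig = (cx, cy), sig
    return best_cut, best_sig

# Toy data: signal clustered at high (x, y), background at low (x, y).
signal = [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(500)]
background = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(500)]
cut, significance = rgs(signal, background)
```

Sampling cut points from the signal events themselves is what makes the method a *random* grid: it concentrates candidate cuts where signal actually lives instead of scanning a uniform grid.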

  19. Research on numerical method for multiple pollution source discharge and optimal reduction program

    Science.gov (United States)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal pollution-reduction program is derived with a nonlinear optimization algorithm, namely the genetic algorithm. The four main rivers in Jiangsu province, China are selected for reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and standards in the nearshore district are used to set the reductions in the discharge of the multiple river pollution sources. The results of the reduction program form the basis for marine environmental management.

  20. Optimal tariff design under consumer self-selection

    Energy Technology Data Exchange (ETDEWEB)

    Raesaenen, M.; Ruusunen, J.; Haemaelaeinen, R.

    1995-12-31

    This report considers the design of electricity tariffs which guide an individual consumer to select the tariff designed for his consumption pattern. In the model the utility maximizes the weighted sum of individual consumers' benefits of electricity consumption subject to the utility's revenue requirement constraints. The consumers' free choice of tariffs is ensured with the so-called self-selection constraints. The relationship between the consumers' optimal choice of tariffs and the weights in the aggregated consumers' benefit function is analyzed. If such weights exist, they will guarantee both the consumers' optimal choice of tariffs and the efficient consumption patterns. Also the welfare effects are analyzed by using demand parameters estimated from a Finnish dynamic pricing experiment. The results indicate that it is possible to design an efficient tariff menu with the welfare losses caused by the self-selection constraints being small compared with the costs created when some consumers choose tariffs other than assigned for them. (author)

  1. Optimal control of bond selectivity in unimolecular reactions

    International Nuclear Information System (INIS)

    Shi Shenghua; Rabitz, H.

    1991-01-01

    The optimal control theory approach to designing optimal fields for bond-selective unimolecular reactions is presented. A set of equations for determining the optimal fields, which will lead to the achievement of the objective of bond-selective dissociation, is developed. The numerical procedure given for solving these equations requires the repeated calculation of the time propagator for the system with the time-dependent Hamiltonian. The splitting approximation combined with the fast Fourier transform algorithm is used for computing the short time propagator. As an illustrative example, a model linear triatomic molecule is treated. The model system consists of two Morse oscillators coupled via kinetic coupling. The magnitudes of the dipoles of the two Morse oscillators are the same, the fundamental frequencies are almost the same, but the dissociation energies are different. The rather demanding objective under these conditions is to break the stronger bond while leaving the weaker one intact. It is encouraging that the present computational method efficiently gives rise to the optimal field, which leads to the excellent achievement of the objective of bond selective dissociation. (orig.)
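The short-time propagator construction mentioned is commonly the second-order (Strang) splitting, with the kinetic factor applied in momentum space via the FFT and the potential factor, including the field coupling, applied in coordinate space; the paper's exact splitting may differ in detail:

```latex
e^{-\frac{i}{\hbar}\hat{H}(t)\,\Delta t}
\;\approx\;
e^{-\frac{i}{\hbar}\hat{V}(t)\,\Delta t/2}\,
e^{-\frac{i}{\hbar}\hat{T}\,\Delta t}\,
e^{-\frac{i}{\hbar}\hat{V}(t)\,\Delta t/2}
\;+\; O(\Delta t^{3})
```

Because the kinetic operator is diagonal in momentum space and the potential (with the dipole coupling to the field) is diagonal in coordinate space, each exponential factor becomes a simple pointwise multiplication after the appropriate FFT.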

  2. Opportunistic relaying in multipath and slow fading channel: Relay selection and optimal relay selection period

    KAUST Repository

    Sungjoon Park,

    2011-11-01

    In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula for opportunistic relaying in the channel, and validate the results by comparing it with the exact outage probability. Also, we suggest a new relay selection algorithm that incorporates shadowing. We consider a protocol of broadcasting the channel gain of the previously selected relay. This saves resources in slow fading channel by reducing collisions in relay selection. We further investigate the optimal relay selection period to maximize the throughput while avoiding selection overhead. © 2011 IEEE.

  3. Optimization of control poison management by dynamic programming

    International Nuclear Information System (INIS)

    Ponzoni Filho, P.

    1974-01-01

    A dynamic programming approach was used to optimize the poison distribution in the core of a nuclear power plant between reloadings. This method was applied to a 500 MWe PWR subject to two different fuel management policies. The beginning of a stage is marked by a fuel management decision. The state vector of the system is defined by the burnups in the three fuel zones of the core. The change of the state vector is computed in several time steps. A criticality conserving poison management pattern is chosen at the beginning of each step. The burnups at the end of a step are obtained by means of depletion calculations, assuming constant neutron distribution during the step. The violation of burnup and power peaking constraints during the step eliminates the corresponding end states. In the case of identical end states, all except that which produced the largest amount of energy, are eliminated. Among the several end states one is selected for the subsequent stage, when it is subjected to a fuel management decision. This selection is based on an optimality criterion previously chosen, such as: discharged fuel burnup maximization, energy generation cost minimization, etc. (author)
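The stage/state scheme described, with decisions at reload boundaries, burnup states evolved in steps, infeasible states eliminated, and the best end state retained, follows the standard backward Bellman recursion. A deliberately tiny two-state sketch (real states would be burnup triples):

```python
def dp_max_energy(stages, states, transitions):
    """Backward (Bellman) recursion: value[s] is the maximum energy
    obtainable from state s over the remaining stages.  Options that
    violate constraints (e.g. burnup or power-peaking limits) would
    simply be absent from the transition lists."""
    value = {s: 0.0 for s in states}
    for _ in range(stages):
        value = {
            s: max(e + value[ns] for ns, e in transitions[s])
            for s in states
        }
    return value

# Toy two-state core model: each entry maps a state to the reachable
# (next_state, energy_produced) options for one stage.
transitions = {
    "low":  [("high", 1.0), ("low", 0.5)],
    "high": [("low", 2.0), ("high", 1.5)],
}
best = dp_max_energy(2, ["low", "high"], transitions)
print(best)  # maximum energy achievable from each starting state
```

Keeping only the best value per state at each stage is exactly the elimination rule in the abstract: among identical end states, all but the one with the largest energy are discarded.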

  4. Overview of ONWI'S Salt site selection program

    International Nuclear Information System (INIS)

    Madia, W.J.

    1983-01-01

    In the past year, activities in the salt site selection program of the Office of Nuclear Waste Isolation (ONWI) have focused on narrowing the number and size of areas under consideration as candidate repository sites. The progressive focusing is illustrated. Bedded salt, in the Permian Basin of West Texas and the Paradox Basin of Utah, and salt domes in the Gulf Coast Salt Dome Region (including parts of East Texas, Louisiana, and Mississippi) have been the subjects of geologic, environmental, and socioeconomic characterization in progressively greater detail as the screening process has proceeded. Detailed, field-oriented research and testing have superseded broad-based studies relying heavily on literature and other existing data. Coinciding with the increased field activities has been the publication of results and recommendations resulting from earlier program efforts

  5. Lean and Efficient Software: Whole-Program Optimization of Executables

    Science.gov (United States)

    2015-09-30

    Lean and Efficient Software: Whole-Program Optimization of Executables. Project Summary Report #5 (Report Period: 7/1/2015 to 9/30/2015).

  6. SMART Optimization of a Parenting Program for Active Duty Families

    Science.gov (United States)

    2017-10-01

    SMART Optimization of a Parenting Program for Active Duty Families. Award number: W81XWH-16-1-0407. Principal investigator: Abigail. The project tracks child and caregiver outcomes over time, based on a sample of 200 military personnel and their co-parents who have recently or will soon separate from

  7. Optimizing the allocation of resources for genomic selection in one breeding cycle.

    Science.gov (United States)

    Riedelsheimer, Christian; Melchinger, Albrecht E

    2013-11-01

    We developed a universally applicable planning tool for optimizing the allocation of resources for one cycle of genomic selection in a biparental population. The framework combines selection theory with constrained numerical optimization and considers genotype × environment interactions. Genomic selection (GS) is increasingly implemented in plant breeding programs to increase selection gain, but little is known about how to optimally allocate the resources under a given budget. We investigated this problem with model calculations by combining quantitative genetic selection theory with constrained numerical optimization. We assumed one selection cycle where both the training and prediction sets comprised doubled haploid (DH) lines from the same biparental population. Grain yield for testcrosses of maize DH lines was used as a model trait, but all parameters can be adjusted in a freely available software implementation. An extension of the expected selection accuracy given by Daetwyler et al. (2008) was developed to correctly balance between the number of environments for phenotyping the training set and its population size in the presence of genotype × environment interactions. Under a small budget, genotyping costs mainly determine whether GS is superior to phenotypic selection. With an increasing budget, flexibility in resource allocation increases greatly, but selection gain levels off quickly, requiring a balance between the number of populations and the budget spent on each population. The use of an index combining phenotypic and GS-predicted values in the training set was especially beneficial under limited resources and large genotype × environment interactions. Once a sufficiently high selection accuracy is achieved in the prediction set, further selection gain can be achieved most efficiently by massively expanding its size. Thus, with increasing budget, reducing the costs for producing a DH line becomes increasingly crucial for successfully exploiting the
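The expected-accuracy formula of Daetwyler et al. (2008) that the framework extends can be used directly to scan budget allocations. The cost figures, budget, and the repeated-measures form of line-mean heritability below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def expected_accuracy(n_train, h2, m_e):
    """Daetwyler et al. (2008): r = sqrt(N h^2 / (N h^2 + Me)) for N
    training lines, heritability h^2, and Me independent segments."""
    return math.sqrt(n_train * h2 / (n_train * h2 + m_e))

def allocate(budget, cost_line, cost_env, h2_plot, m_e):
    """Scan how many lines vs. test environments a fixed budget buys and
    return the allocation with the highest expected accuracy.  The
    heritability of a line mean over n_env environments uses the
    repeated-measures form h2 / (h2 + (1 - h2) / n_env)."""
    best = None
    for n_env in range(1, 11):
        n_lines = budget // (cost_line + n_env * cost_env)
        if n_lines < 2:
            continue
        h2_mean = h2_plot / (h2_plot + (1 - h2_plot) / n_env)
        acc = expected_accuracy(n_lines, h2_mean, m_e)
        if best is None or acc > best[0]:
            best = (acc, n_lines, n_env)
    return best

# Hypothetical costs: 40 units per DH line, 20 units per extra environment.
acc, n_lines, n_env = allocate(budget=20000, cost_line=40, cost_env=20,
                               h2_plot=0.4, m_e=100)
```

The scan makes the trade-off in the abstract concrete: extra environments raise the heritability of line means but shrink the training set the same budget can afford.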

  8. A metaheuristic optimization framework for informative gene selection

    Directory of Open Access Journals (Sweden)

    Kaberi Das

    Full Text Available This paper presents a metaheuristic framework using Harmony Search (HS) with a Genetic Algorithm (GA) for gene selection. The internal architecture of the proposed model broadly works in two phases. In the first phase, the model hybridizes HS with GA to compute and evaluate the fitness of randomly selected binary-string solutions, and HS then ranks the solutions in descending order of fitness. In the second phase, offspring are generated using the crossover and mutation operations of GA, and offspring are selected for the next generation only if their fitness value exceeds that of their parents, as evaluated by an SVM classifier. The accuracy of the final gene subsets obtained from this model has been evaluated using SVM classifiers. The merit of this approach is analyzed through experimental results on five benchmark datasets, and the results showed an impressive accuracy over existing feature selection approaches. The occurrence of gene subsets selected by this model has also been computed, and the most often selected gene subsets, with a probability of [0.1–0.9], have been chosen as optimal sets of informative genes. Finally, the performance of those selected informative gene subsets has been measured and established through probabilistic measures. Keywords: Gene Selection, Metaheuristic, Harmony Search Algorithm, Genetic Algorithm, SVM
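The two-phase scheme described can be sketched as follows, with a toy fitness function standing in for the SVM classifier the paper uses to score gene subsets:

```python
import random

random.seed(3)

def hybrid_hs_ga(n_genes, fitness, mem_size=10, generations=50):
    """Two-phase sketch: (1) a harmony memory of random binary strings is
    ranked by fitness; (2) GA crossover/mutation produces offspring that
    replace their weaker parent only when fitter."""
    memory = [[random.randint(0, 1) for _ in range(n_genes)]
              for _ in range(mem_size)]
    memory.sort(key=fitness, reverse=True)          # phase 1: rank
    for _ in range(generations):                    # phase 2: GA
        p1, p2 = random.sample(memory, 2)
        cut = random.randrange(1, n_genes)
        child = p1[:cut] + p2[cut:]                 # one-point crossover
        child[random.randrange(n_genes)] ^= 1       # bit-flip mutation
        worst = min((p1, p2), key=fitness)
        if fitness(child) > fitness(worst):
            memory[memory.index(worst)] = child
            memory.sort(key=fitness, reverse=True)
    return memory[0]

# Toy fitness: reward keeping the three "informative" genes, penalize size
# (a stand-in for SVM accuracy on the selected genes).
informative = {0, 1, 2}
def fit(s):
    return 2 * sum(s[i] for i in informative) - sum(s)

best = hybrid_hs_ga(10, fit)
```

Replacing a parent only when the offspring is strictly fitter is what makes the memory's best solution monotonically non-decreasing across generations.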

  9. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...
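For background, the classic contingency (optimal diet) model underlying such fitness-maximizing diet choices ranks prey by profitability e/h and includes prey types while their profitability exceeds the intake rate the diet already achieves. This textbook form is offered as a sketch; the paper embeds such optimality choices in ecosystem-model preference functions rather than using this exact formulation:

```python
def optimal_diet(prey):
    """Classic contingency model: rank prey types by profitability e/h
    (energy per unit handling time) and add them to the diet while their
    profitability exceeds the intake rate of the current diet.
    `prey` is a list of (encounter_rate, energy, handling_time) tuples."""
    ranked = sorted(prey, key=lambda p: p[1] / p[2], reverse=True)
    diet, sum_le, sum_lh = [], 0.0, 0.0
    for lam, e, h in ranked:
        rate = sum_le / (1.0 + sum_lh)   # intake rate on the current diet
        if e / h > rate:
            diet.append((lam, e, h))
            sum_le += lam * e
            sum_lh += lam * h
        else:
            break  # zero-one rule: all remaining prey are less profitable
    return diet

# Three prey types: highly profitable, moderately profitable, poor.
prey = [(0.5, 10.0, 1.0), (0.3, 4.0, 1.0), (0.2, 1.0, 2.0)]
diet = optimal_diet(prey)
```

A notable property of this model is that whether a prey type is included depends only on the abundance of *more* profitable types, never on its own abundance, which is one way optimal foraging departs from the smooth switching functions the abstract criticizes.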

  10. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that trade off more than one predefined objective simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  11. Iterative Selection of Unknown Weights in Direct Weight Optimization Identification

    Directory of Open Access Journals (Sweden)

    Xiao Xuan

    2014-01-01

    Full Text Available For direct weight optimization identification of a nonlinear system, we add linear terms in the input sequences to the original linear affine function so as to better approximate the nonlinear behavior. To choose the two classes of unknown weights introduced by these additional linear terms, this paper derives the detailed selection process from theoretical analysis and from engineering practice, respectively, and establishes the respective roles of the unknown weights. The theoretical analysis shows the auxiliary role the added unknown weights play throughout the approximation of the nonlinear system. The practical analysis shows how to transform the complex optimization problem into its corresponding ordinary quadratic programming problem, which can then be solved by a basic interior point method. Finally, the efficiency and feasibility of the proposed strategies are confirmed by simulation results.

  12. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    Directory of Open Access Journals (Sweden)

    Rupert Faltermeier

    2015-01-01

    Full Text Available Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real time application of this method at an ICU it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we will introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  13. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    Science.gov (United States)

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of the cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real time application of this method at an ICU it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we will introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses.

  14. Conversion Rate Optimization : Visual Neuro Programming Principles

    OpenAIRE

    Berezhnaya, Anastasia

    2016-01-01

    The influence of the world wide web has already spread into every business. Consequently, it has become crucial to develop a strong online presence and offer a high-quality user experience for website visitors. Website optimization has undeniably proved its importance in the recent decade. This research was conducted in order to study the practical application and structure of the stages of the CRO (Conversion Rate Optimization) framework that focuses on the most representative website metric – c...

  15. Log-Optimal Portfolio Selection Using the Blackwell Approachability Theorem

    OpenAIRE

    V'yugin, Vladimir

    2014-01-01

    We present a method for constructing the log-optimal portfolio using the well-calibrated forecasts of market values. Dawid's notion of calibration and the Blackwell approachability theorem are used for computing well-calibrated forecasts. We select a portfolio using this "artificial" probability distribution of market values. Our portfolio performs asymptotically at least as well as any stationary portfolio that redistributes the investment at each round using a continuous function of side in...

  16. Optimal Licensing Contracts with Adverse Selection and Informational Rents

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-06-01

    Full Text Available In the paper we analyse a model for determining the optimal licensing contract in both situations of symmetric and asymmetric information between the license’s owner and the potential buyer. Next we present another way of solving the corresponding adverse selection model, using the informational rents as variables. This approach is different from that of Macho-Stadler and Perez-Castrillo.

  17. Optimality Conditions for Fuzzy Number Quadratic Programming with Fuzzy Coefficients

    Directory of Open Access Journals (Sweden)

    Xue-Gang Zhou

    2014-01-01

    Full Text Available The purpose of the present paper is to investigate optimality conditions and duality theory in fuzzy number quadratic programming (FNQP in which the objective function is fuzzy quadratic function with fuzzy number coefficients and the constraint set is fuzzy linear functions with fuzzy number coefficients. Firstly, the equivalent quadratic programming of FNQP is presented by utilizing a linear ranking function and the dual of fuzzy number quadratic programming primal problems is introduced. Secondly, we present optimality conditions for fuzzy number quadratic programming. We then prove several duality results for fuzzy number quadratic programming problems with fuzzy coefficients.
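The first step described, applying a linear ranking function to turn fuzzy-number coefficients into crisp ones, can be sketched as below. The particular ranking function R(a, b, c) = (a + 2b + c)/4 for triangular fuzzy numbers is one common choice assumed here for concreteness; the paper only requires some linear ranking function:

```python
def rank(tfn):
    """A linear ranking function for a triangular fuzzy number (a, b, c):
    R = (a + 2b + c) / 4.  This is one common choice, not necessarily the
    one used in the paper."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

def defuzzify_qp(Q_fuzzy, c_fuzzy):
    """Map the fuzzy-number coefficients of 0.5 x'Qx + c'x to crisp ones,
    yielding an ordinary quadratic program that standard solvers handle."""
    Q = [[rank(q) for q in row] for row in Q_fuzzy]
    c = [rank(ci) for ci in c_fuzzy]
    return Q, c

# A two-variable example with triangular fuzzy coefficients (a, b, c).
Q_fuzzy = [[(1.0, 2.0, 3.0), (0.0, 0.0, 0.0)],
           [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
c_fuzzy = [(-3.0, -2.0, -1.0), (0.5, 1.0, 1.5)]
Q, c = defuzzify_qp(Q_fuzzy, c_fuzzy)
```

Because the ranking function is linear, ordering and optimality results established for the crisp QP transfer back to the fuzzy problem, which is what makes the equivalence in the abstract work.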

  18. Status of selected air pollution control programs, February 1992

    International Nuclear Information System (INIS)

    1992-02-01

    The collection of status reports has been prepared in order to provide a timely summary of selected EPA air pollution control activities to those individuals who are involved with the implementation of these programs. The report contains ozone/carbon monoxide (CO) programs; mobile sources programs; particulate matter nominally 10 microns and less (PM-10), sulfur dioxide (SO2) and lead programs; New Source Review (NSR); economics programs; emission standards programs; Indian activity programs; air toxics programs; acid rain programs; permits programs; chlorofluorocarbons programs; enforcement programs; and other programs

  19. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program was developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  20. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
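
    In Hyperopt itself, a minimization is typically a single call such as fmin(fn, space=hp.uniform('x', -5, 5), algo=tpe.suggest, max_evals=100). As a library-free illustration of the sequential model-based idea, the toy loop below fits a cheap quadratic surrogate to past evaluations and lets it propose the next point; the surrogate choice and the jitter scale are assumptions for the demo, not Hyperopt's TPE:

```python
import random

# Library-free sketch of sequential model-based optimization (the idea
# behind Hyperopt's fmin/TPE, not its implementation): fit a cheap
# surrogate to past evaluations and let it propose the next point.

def objective(x):
    return (x - 3.0) ** 2        # stands in for an expensive black box

def fit_quadratic(pts):
    # Exact quadratic a*x^2 + b*x + c through three (x, y) points;
    # only a and b are needed to locate the vertex.
    (x1, y1), (x2, y2), (x3, y3) = pts
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return a, b

random.seed(0)
history = [(x, objective(x)) for x in (-4.0, 0.0, 5.0)]
for _ in range(20):
    a, b = fit_quadratic(sorted(history, key=lambda p: p[1])[:3])
    if a > 0:
        cand = -b / (2 * a)              # exploit the surrogate minimum
    else:
        cand = random.uniform(-5, 5)     # surrogate useless: explore
    cand += random.gauss(0, 0.1)         # jitter keeps support points distinct
    history.append((cand, objective(cand)))

best_x, best_y = min(history, key=lambda p: p[1])
print(round(best_x, 2))                  # converges near 3.0
```

    The point of the exercise is the per-function-evaluation efficiency claimed in the abstract: each new evaluation is placed where the model of past results looks most promising.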

  1. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered in reducing transport cost. The cost of transporting CPO from one area to a CPO factory located at the port of destination may affect the level of CPO prices and the number of demands. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy is to select the kind of chartered ship, either a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship by minimizing transport costs. Because the selection of a ship was done randomly, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
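
    A scenario-based sketch of such a two-stage choice: the first stage fixes the charter (barge or chemical tanker), and the second stage pays per-trip costs once demand is revealed. All figures below (charter costs, capacities, demands, probabilities) are invented for illustration, not taken from the study:

```python
import math

# Two-stage stochastic sketch: choose the ship type before demand is
# known, then pay recourse (per-trip) costs in each demand scenario.
scenarios = [  # (probability, CPO demand in tonnes)
    (0.3, 8_000), (0.5, 12_000), (0.2, 16_000),
]
ships = {  # first-stage choice -> (charter cost, capacity/trip, cost/trip)
    "barge":           (40_000, 5_000, 15_000),
    "chemical_tanker": (70_000, 10_000, 12_000),
}

def expected_cost(ship):
    charter, cap, trip_cost = ships[ship]
    recourse = sum(p * math.ceil(demand / cap) * trip_cost
                   for p, demand in scenarios)
    return charter + recourse

best = min(ships, key=expected_cost)
print(best, round(expected_cost(best)))
```

    A real two-stage model would optimize both stages jointly with a stochastic programming solver; the enumeration above only works because the first-stage choice is binary.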

  2. Optimal processing pathway selection for microalgae-based biorefinery under uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Zaman, Muhammad; Lee, Jay H.

    2015-01-01

    We propose a systematic framework for the selection of optimal processing pathways for a microalgaebased biorefinery under techno-economic uncertainty. The proposed framework promotes robust decision making by taking into account the uncertainties that arise due to inconsistencies among...... and shortage in the available technical information. A stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery configurations based on a superstructure model where parameter uncertainties are modeled and included as sampled scenarios. The solution...... the accounting of uncertainty are compared with respect to different objectives. (C) 2015 Elsevier Ltd. All rights reserved....

  3. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have almost not been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  4. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  5. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  6. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    Directory of Open Access Journals (Sweden)

    Huanqing Cui

    2017-03-01

    Full Text Available Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.

  7. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    Science.gov (United States)

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
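
    A minimal sketch of the winning variant's core update, particle swarm optimization with Clerc's constriction coefficient (chi ≈ 0.7298 for phi = c1 + c2 = 4.1), follows. A global-best topology is used here for brevity, whereas the paper's best-performing variant used a ring topology; the test function and parameters are illustrative assumptions:

```python
import random

# Sketch of PSO with Clerc's constriction coefficient (chi ~ 0.7298 for
# phi = c1 + c2 = 4.1).  Global-best topology for brevity; the paper's
# best variant used a ring topology.  Parameters are illustrative.

def sphere(x):
    return sum(v * v for v in x)

def pso(dim=3, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    random.seed(seed)
    chi, c1, c2 = 0.7298, 2.05, 2.05
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if sphere(xs[i]) < sphere(pbest[i]):
                pbest[i] = xs[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(sphere(best))   # should be very close to 0
```

    In the localization setting, sphere() would be replaced by the ranging-error objective built from distance measurements to anchor nodes.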

  8. Pipe degradation investigations for optimization of flow-accelerated corrosion inspection location selection

    International Nuclear Information System (INIS)

    Chandra, S.; Habicht, P.; Chexal, B.; Mahini, R.; McBrine, W.; Esselman, T.; Horowitz, J.

    1995-01-01

    A large amount of piping in a typical nuclear power plant is susceptible to Flow-Accelerated Corrosion (FAC) wall thinning to varying degrees. A typical FAC monitoring program includes the wall thickness measurement of a select number of components in order to judge the structural integrity of entire systems. In order to appropriately allocate resources and maintain an adequate FAC program, it is necessary to optimize the selection of components for inspection by focusing on those components which provide the best indication of system susceptibility to FAC. A better understanding of system FAC predictability and the types of FAC damage encountered can provide some of the insight needed to better focus and optimize the inspection plan for an upcoming refueling outage. Laboratory examination of FAC damaged components removed from service at Northeast Utilities' (NU) nuclear power plants provides a better understanding of the damage mechanisms involved and contributing causes. Selected results of this ongoing study are presented with specific conclusions which will help NU to better focus inspections and thus optimize the ongoing FAC inspection program

  9. Optimal selection of major equipment in dual purpose plants

    International Nuclear Information System (INIS)

    Gabbrielli, E.

    1981-01-01

    Simulation of different operational conditions with the aid of a computer program is one of the best ways of assisting decision-makers in the selection of the most economic mix of equipment for a dual purpose plant. Using this approach this paper deals with the economic comparison of plants consisting of MSF desalinators and combustion gas or back pressure steam turbines coupled to low capacity electric power generators. The comparison is performed on the basis of the data made available by the OPTDIS computer program and the results are given in terms of yearly cost of production as the sum of capital, manpower, maintenance, fuel and chemical costs. (orig.)

  10. Hybrid collaborative optimization based on selection strategy of initial point and adaptive relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Aimin; Yin, Xu; Yuan, Minghai [Hohai University, Changzhou (China)

    2015-09-15

    There are two problems in Collaborative optimization (CO): (1) the local optima arising from the selection of an inappropriate initial point; (2) the low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first develop the Latin hypercube design (LHD) to determine an initial point of optimization, and then use the non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified by three benchmark functions with some dimensions and different complexities. Then we propose the Adaptive relaxation collaborative optimization (ARCO) algorithm to solve the inconsistency between the system level and the disciplines level, and in this method, the relaxation factors are determined according to the three separated stages of CO respectively. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm with a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a Hybrid collaborative optimization (HCO) approach, which integrates the selection strategy of initial point with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without the initial value and it also has advantages in convergence, accuracy and robustness. Therefore, the proposed HCO approach can solve the CO problems with applications in the spindle and the speed reducer.

  11. Hybrid collaborative optimization based on selection strategy of initial point and adaptive relaxation

    International Nuclear Information System (INIS)

    Ji, Aimin; Yin, Xu; Yuan, Minghai

    2015-01-01

    There are two problems in Collaborative optimization (CO): (1) the local optima arising from the selection of an inappropriate initial point; (2) the low efficiency and accuracy rooted in inappropriate relaxation factors. To solve these problems, we first develop the Latin hypercube design (LHD) to determine an initial point of optimization, and then use the non-linear programming by quadratic Lagrangian (NLPQL) to search for the global solution. The effectiveness of the initial point selection strategy is verified by three benchmark functions with some dimensions and different complexities. Then we propose the Adaptive relaxation collaborative optimization (ARCO) algorithm to solve the inconsistency between the system level and the disciplines level, and in this method, the relaxation factors are determined according to the three separated stages of CO respectively. The performance of the ARCO algorithm is compared with the standard collaborative algorithm and the constant relaxation collaborative algorithm with a typical numerical example, which indicates that the ARCO algorithm is more efficient and accurate. Finally, we propose a Hybrid collaborative optimization (HCO) approach, which integrates the selection strategy of initial point with the ARCO algorithm. The results show that HCO can achieve the global optimal solution without the initial value and it also has advantages in convergence, accuracy and robustness. Therefore, the proposed HCO approach can solve the CO problems with applications in the spindle and the speed reducer

  12. Technical specification optimization program - engineered safety features

    International Nuclear Information System (INIS)

    Andre, G.R.; Jansen, R.L.

    1986-01-01

    The Westinghouse Technical Specification Optimization Program (TOP) was designed to evaluate on a quantitative basis revisions to Nuclear Power Plant Technical Specifications. The revisions are directed at simplifying plant operation, and reducing unnecessary transients, shutdowns, and manpower requirements. In conjunction with the Westinghouse Owners Group, Westinghouse initiated a program to develop a methodology to justify Technical Specification revisions; particularly revisions related to testing and maintenance requirements on plant operation for instrumentation systems. The methodology was originally developed and applied to the reactor trip features of the reactor protection system (RPS). The current study further refined the methodology and applied it to the engineered safety features of the RPS

  13. Introducing artificial intelligence into structural optimization programs

    International Nuclear Information System (INIS)

    Jozwiak, S.F.

    1987-01-01

    Artificial Intelligence (AI) is defined as the branch of computer science concerned with the study of the ideas that enable computers to be intelligent. The main purpose of applying AI in engineering is to develop computer programs which function better as tools for engineers and designers. Many computer programs today have properties which make them inconvenient to their final users, and the research carried out within the field of AI provides tools and techniques so that these restrictions can be removed. The continuous progress in computer technology has led to the development of efficient computer systems which can be applied to more than simply solving sets of equations. (orig.)

  14. Polyhedral and semidefinite programming methods in combinatorial optimization

    CERN Document Server

    Tunçel, Levent

    2010-01-01

    Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r

  15. Does programmed CTL proliferation optimize virus control?

    DEFF Research Database (Denmark)

    Wodarz, Dominik; Thomsen, Allan Randrup

    2005-01-01

    CD8 T-cell or cytotoxic T-lymphocyte responses develop through an antigen-independent proliferation and differentiation program. This is in contrast to the previous thinking, which was that continuous antigenic stimulation was required. This Opinion discusses why nature has chosen the proliferati...

  16. HOPI: on-line injection optimization program

    International Nuclear Information System (INIS)

    LeMaire, J.L.

    1977-01-01

    A method of matching the beam from the 200 MeV linac to the AGS without the necessity of making emittance measurements is presented. An on-line computer program written for the PDP10 computer performs the matching by independently modifying the horizontal and vertical emittance. Experimental results show success with this method, which can be applied to any matching section

  17. Optimal Operation of Radial Distribution Systems Using Extended Dynamic Programming

    DEFF Research Database (Denmark)

    Lopez, Juan Camilo; Vergara, Pedro P.; Lyra, Christiano

    2018-01-01

    An extended dynamic programming (EDP) approach is developed to optimize the ac steady-state operation of radial electrical distribution systems (EDS). Based on the optimality principle of the recursive Hamilton-Jacobi-Bellman equations, the proposed EDP approach determines the optimal operation o...... approach is illustrated using real-scale systems and comparisons with commercial programming solvers. Finally, generalizations to consider other EDS operation problems are also discussed.......An extended dynamic programming (EDP) approach is developed to optimize the ac steady-state operation of radial electrical distribution systems (EDS). Based on the optimality principle of the recursive Hamilton-Jacobi-Bellman equations, the proposed EDP approach determines the optimal operation...... of the EDS by setting the values of the controllable variables at each time period. A suitable definition for the stages of the problem makes it possible to represent the optimal ac power flow of radial EDS as a dynamic programming problem, wherein the 'curse of dimensionality' is a minor concern, since...

  18. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because the analysis of the system dynamics might be required. This selection is usually made from engineering experiences, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB system. Fast system dynamics are identified with the modal analysis and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.
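
    The modal reasoning can be sketched numerically: linearizing the SMIB swing equation gives an undamped natural frequency omega_n = sqrt(Ks * omega_s / (2H)), and the subinterval is then taken as a fraction of that mode's period. All machine parameters below are illustrative assumptions, not the GSO 37-bus data:

```python
import math

# Sketch: choose an integration subinterval from the fastest SMIB mode.
H = 3.5                      # inertia constant, s (assumed)
f0 = 60.0                    # nominal frequency, Hz
Pmax = 2.0                   # p.u. power transfer limit (assumed)
delta0 = math.radians(30)    # operating angle (assumed)

Ks = Pmax * math.cos(delta0)                 # synchronizing coefficient
omega_s = 2 * math.pi * f0                   # synchronous speed, rad/s
omega_n = math.sqrt(Ks * omega_s / (2 * H))  # undamped swing mode, rad/s
period = 2 * math.pi / omega_n
dt = period / 20                             # ~20 steps per oscillation
print(round(omega_n, 2), round(dt, 4))
```

    The "20 steps per oscillation" fraction is the kind of engineering choice the abstract says is usually made by trial and error; the modal analysis replaces the guess with a computed bound.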

  19. Fire-tube immersion heater optimization program and field heater audit program

    Energy Technology Data Exchange (ETDEWEB)

    Croteau, P. [Petro-Canada, Calgary, AB (Canada)

    2007-07-01

    This presentation provided an overview of the top 5 priorities for emission reduction and eco-efficiency by the Petroleum Technology Alliance of Canada (PTAC). These included venting of methane emissions; fuel consumption in reciprocating engines; fuel consumption in fired heaters; flaring and incineration; and fugitive emissions. It described energy consumption associated with immersion heaters as a common concern for many upstream operating companies. PTAC fire-tube heater and line heater studies were presented. Combustion efficiency was discussed in terms of excess air, fire-tube selection, heat flux rate, and reliability guidelines. Other topics included heat transfer and fire-tube design; burner selection; burner duty cycle; heater tune up inspection procedure; and insulation. Two other programs were also discussed, notably a Petro-Canada fire-tube immersion heater optimization program and the field audit program run by Natural Resources Canada. It was concluded that improved efficiency involves training; managing excess air in combustion; managing the burner duty cycle; striving for 82 per cent combustion efficiency; and providing adequate insulation to reduce energy demand. tabs., figs.

  20. Non-linear programming method in optimization of fast reactors

    International Nuclear Information System (INIS)

    Pavelesku, M.; Dumitresku, Kh.; Adam, S.

    1975-01-01

    Application of non-linear programming methods to the optimization of nuclear materials distribution in a fast reactor is discussed. The programming problem is formulated on the basis of reactor calculations that depend on the fuel distribution strategy. As an illustration of this method's application, the solution of a simple example is given. The non-linear program is solved using the numerical method SUMT. (I.T.)
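
    SUMT solves a constrained problem through a sequence of unconstrained ones with a growing penalty (or shrinking barrier) parameter. A minimal exterior-penalty sketch on an assumed toy problem, not the reactor formulation: minimize x1^2 + x2^2 subject to x1 + x2 >= 1, whose exact optimum is (0.5, 0.5):

```python
# SUMT-style exterior-penalty sketch (illustrative toy, not the cited
# code): minimize x1^2 + x2^2 subject to x1 + x2 >= 1 by solving a
# sequence of unconstrained problems with a growing penalty parameter r.

def penalized_grad(x, r):
    viol = max(0.0, 1.0 - x[0] - x[1])        # constraint violation
    return [2 * x[0] - 2 * r * viol,
            2 * x[1] - 2 * r * viol]

x = [0.0, 0.0]
r = 1.0
for _ in range(8):                  # SUMT outer loop: drive r upward
    step = 0.9 / (2 + 4 * r)        # stays below 2 / (max curvature)
    for _ in range(2000):           # inner unconstrained minimization
        g = penalized_grad(x, r)
        x = [x[0] - step * g[0], x[1] - step * g[1]]
    r *= 2.0

print([round(v, 3) for v in x])     # approaches [0.5, 0.5]
```

    Each inner minimizer of the penalized objective sits at r/(1 + 2r), so the iterates approach the constrained optimum from the infeasible side as r grows.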

  1. Sequential Quadratic Programming Algorithms for Optimization

    Science.gov (United States)

    1989-08-01

    Sequential quadratic programming (SQP) seems to be regarded as the best choice for the solution of small, dense problems... Nevertheless, many of these problems are considered hard to solve. Moreover, for some of these problems the assumptions made in Chapter 2 to establish the

  2. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
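
    The sampling-and-shrinkage loop can be sketched compactly. A stand-in scoring function replaces the paper's cross-validated calibration error, and the "informative" variables are fixed by assumption purely for the demo; the WBMS mechanics (weighted sampling, keeping the better half, refreshing weights from frequencies) follow the description above:

```python
import random

# Sketch of weighted binary matrix sampling (WBMS): sample submodels from
# per-variable inclusion weights, keep the better half, and reset each
# weight to the variable's frequency among the kept submodels.  A stand-in
# score replaces the paper's cross-validated error: variables 0 and 3 are
# "informative" by assumption, purely for the demo.

random.seed(42)
P = 8                        # candidate variables
informative = {0, 3}

def score(subset):           # lower is better, like a prediction RMSE
    missing = len(informative - subset)   # informative variables left out
    noise = len(subset - informative)     # uninformative ones included
    return 2.0 * missing + 0.1 * noise

weights = [0.5] * P
for _ in range(30):
    rows = []
    for _ in range(100):     # rows of the weighted binary sampling matrix
        s = {j for j in range(P) if random.random() < weights[j]}
        rows.append((score(s), s))
    rows.sort(key=lambda t: t[0])
    best_half = [s for _, s in rows[:50]]
    weights = [sum(j in s for s in best_half) / 50 for j in range(P)]
    # a variable whose weight hits 0 is never sampled again, so the
    # variable space shrinks monotonically, as in VISSA

selected = {j for j in range(P) if weights[j] > 0.5}
print(sorted(selected))
```

    The two rules from the abstract are visible here: the space only shrinks (rule one), and each new weight vector is derived from the better-scoring half of the previous space (rule two).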

  3. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Directory of Open Access Journals (Sweden)

    Huan Chen

    Full Text Available This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme.

  4. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    Science.gov (United States)

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. According to extensive simulations, it is found that our method can save up to 55.76% communication cost compared with the state-of-the-art switch-based scheme.
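
    The flavor of the selection subproblem can be shown with a brute-force toy: with routes held fixed, pick the cheapest subset of switches such that every flow traverses at least one polled switch. The topology, routes and costs below are invented, and the paper's ILP additionally optimizes the routes themselves:

```python
import itertools

# Brute-force toy of the polling-switch selection subproblem: with flow
# routes fixed, pick the cheapest set of switches such that every flow
# traverses at least one polled switch.

flows = {            # flow -> switches on its (fixed) route
    "f1": {"s1", "s2"},
    "f2": {"s2", "s3"},
    "f3": {"s3", "s4"},
}
poll_cost = {"s1": 4, "s2": 3, "s3": 3, "s4": 5}

best = None          # (cost, chosen switches)
for r in range(1, len(poll_cost) + 1):
    for subset in itertools.combinations(poll_cost, r):
        chosen = set(subset)
        if all(route & chosen for route in flows.values()):
            cost = sum(poll_cost[s] for s in chosen)
            if best is None or cost < best[0]:
                best = (cost, chosen)

print(best)          # cheapest cover: s2 and s3 at cost 6
```

    Enumeration is exponential in the number of switches, which is exactly why the paper resorts to an ILP formulation plus a heuristic for general topologies.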

  5. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James

    2009-11-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  6. Optimizing Energy and Modulation Selection in Multi-Resolution Modulation For Wireless Video Broadcast/Multicast

    KAUST Repository

    She, James; Ho, Pin-Han; Shihada, Basem

    2009-01-01

    Emerging technologies in Broadband Wireless Access (BWA) networks and video coding have enabled high-quality wireless video broadcast/multicast services in metropolitan areas. Joint source-channel coded wireless transmission, especially using hierarchical/superposition coded modulation at the channel, is recognized as an effective and scalable approach to increase the system scalability while tackling the multi-user channel diversity problem. The power allocation and modulation selection problem, however, is subject to a high computational complexity due to the nonlinear formulation and huge solution space. This paper introduces a dynamic programming framework with conditioned parsing, which significantly reduces the search space. The optimized result is further verified with experiments using real video content. The proposed approach effectively serves as a generalized and practical optimization framework that can gauge and optimize a scalable wireless video broadcast/multicast based on multi-resolution modulation in any BWA network.

  7. Portfolio optimization for seed selection in diverse weather scenarios.

    Science.gov (United States)

    Marko, Oskar; Brdar, Sanja; Panić, Marko; Šašić, Isidora; Despotović, Danica; Knežević, Milivoje; Crnojević, Vladimir

    2017-01-01

    The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.

  8. Portfolio optimization for seed selection in diverse weather scenarios.

    Directory of Open Access Journals (Sweden)

    Oskar Marko

    Full Text Available The aim of this work was to develop a method for selection of optimal soybean varieties for the American Midwest using data analytics. We extracted the knowledge about 174 varieties from the dataset, which contained information about weather, soil, yield and regional statistical parameters. Next, we predicted the yield of each variety in each of 6,490 observed subregions of the Midwest. Furthermore, yield was predicted for all the possible weather scenarios approximated by 15 historical weather instances contained in the dataset. Using predicted yields and covariance between varieties through different weather scenarios, we performed portfolio optimisation. In this way, for each subregion, we obtained a selection of varieties that proved superior to others in terms of the amount and stability of yield. According to the rules of Syngenta Crop Challenge, for which this research was conducted, we aggregated the results across all subregions and selected up to five soybean varieties that should be distributed across the network of seed retailers. The work presented in this paper was the winning solution for Syngenta Crop Challenge 2017.

  9. A Study of Joint Cost Inclusion in Linear Programming Optimization

    Directory of Open Access Journals (Sweden)

    P. Armaos

    2013-08-01

    Full Text Available The concept of Structural Optimization has been a topic of research over the past century. Linear Programming Optimization has proved to be the most reliable method of structural optimization. Global advances in linear programming optimization have recently been driven by University of Sheffield researchers to include joint cost, self-weight and buckling considerations. A joint cost inclusion aims to reduce the number of joints in an optimized structural solution, transforming it into a practically viable solution. The topic of the current paper is to investigate the effects of joint cost inclusion as it is currently implemented in the optimization code. An extended literature review on this subject was conducted prior to familiarization with small-scale optimization software. Using IntelliFORM software, a structured series of problems was set and analyzed. The joint cost tests examined benchmark problems and the consequent changes in member topology as the design domain was expanded. The findings of the analyses were remarkable and are commented on further. The distinct topologies of solutions created by optimization processes are also recognized. Finally, an alternative strategy of penalizing joints is presented.

  10. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management using a grey relationship model (GRM) as well as a multi-objective decision making process. The proposed model of this paper first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model of this paper is capable of handling different criteria for supplier selection.
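
    The GRM ranking step can be illustrated with a minimal grey relational analysis: each criterion is normalized, deviations from an ideal reference series are converted to grey relational coefficients, and suppliers are ranked by their average coefficient (the grey relational grade). Supplier names and scores below are hypothetical, not the benchmark data.

```python
# Hypothetical supplier scores on three benefit criteria (higher is better).
suppliers = {
    "S1": [0.85, 0.70, 0.90],
    "S2": [0.90, 0.60, 0.80],
    "S3": [0.75, 0.95, 0.85],
}

def grey_relational_grades(data, rho=0.5):
    """Rank alternatives by grey relational grade against the ideal series."""
    names = list(data)
    n_crit = len(next(iter(data.values())))
    # Normalize each benefit criterion to [0, 1].
    norm = {}
    for j in range(n_crit):
        col = [data[n][j] for n in names]
        lo, hi = min(col), max(col)
        for n in names:
            norm.setdefault(n, []).append((data[n][j] - lo) / (hi - lo))
    # Deviations from the ideal (all-ones) reference series.
    deltas = {n: [1.0 - v for v in norm[n]] for n in names}
    d_all = [d for ds in deltas.values() for d in ds]
    d_min, d_max = min(d_all), max(d_all)
    # Grey relational coefficient per criterion, averaged into a grade.
    return {
        n: sum((d_min + rho * d_max) / (d + rho * d_max) for d in deltas[n]) / n_crit
        for n in names
    }

grades = grey_relational_grades(suppliers)
ranking = sorted(grades, key=grades.get, reverse=True)
```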

  11. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership function of each objective, transform the membership functions into equivalent linear ones by a first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  12. Generic Optimization Program User Manual Version 3.0.0

    International Nuclear Information System (INIS)

    Wetter, Michael

    2009-01-01

    GenOpt is an optimization program for the minimization of a cost function that is evaluated by an external simulation program. It has been developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both, continuous and discrete variables. Constraints on dependent variables can be implemented using penalty or barrier functions. GenOpt uses parallel computing to evaluate the simulations. GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, and algorithms for doing parametric runs. An algorithm interface allows adding new minimization algorithms without knowing the details of the program structure. GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems. GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for other problems, specially tailored software exists that is more efficient.
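
    The coupling pattern described above (text input in, text output out, derivative-free search in between) can be sketched as follows. The stub simulation, its input format, and the coordinate-search settings are illustrative assumptions, not GenOpt's actual code or API.

```python
# Minimal sketch of GenOpt-style coupling: a derivative-free optimizer that
# exchanges plain text with a simulation. The simulation here is a stand-in
# function; in GenOpt it would be an external executable reading/writing files.

def run_simulation(input_text):
    """Stub 'external program': parses 'x y' and returns the cost as text."""
    x, y = (float(v) for v in input_text.split())
    cost = (x - 1.0) ** 2 + (y + 2.0) ** 2
    return f"{cost:.10f}"

def evaluate(point):
    """Serialize the design point, 'run' the simulation, parse the cost back."""
    return float(run_simulation(f"{point[0]} {point[1]}"))

def coordinate_search(start, step=1.0, tol=1e-4):
    """Simple derivative-free pattern search (one of many GenOpt-like algorithms)."""
    point, cost = list(start), evaluate(start)
    while step > tol:
        improved = False
        for i in range(len(point)):
            for d in (+step, -step):
                trial = list(point)
                trial[i] += d
                c = evaluate(trial)
                if c < cost:
                    point, cost, improved = trial, c, True
        if not improved:
            step /= 2  # refine the mesh when no move helps
    return point, cost

best, best_cost = coordinate_search([0.0, 0.0])
```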

  13. Generic Optimization Program User Manual Version 3.0.0

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2009-05-11

    GenOpt is an optimization program for the minimization of a cost function that is evaluated by an external simulation program. It has been developed for optimization problems where the cost function is computationally expensive and its derivatives are not available or may not even exist. GenOpt can be coupled to any simulation program that reads its input from text files and writes its output to text files. The independent variables can be continuous variables (possibly with lower and upper bounds), discrete variables, or both, continuous and discrete variables. Constraints on dependent variables can be implemented using penalty or barrier functions. GenOpt uses parallel computing to evaluate the simulations. GenOpt has a library with local and global multi-dimensional and one-dimensional optimization algorithms, and algorithms for doing parametric runs. An algorithm interface allows adding new minimization algorithms without knowing the details of the program structure. GenOpt is written in Java so that it is platform independent. The platform independence and the general interface make GenOpt applicable to a wide range of optimization problems. GenOpt has not been designed for linear programming problems, quadratic programming problems, and problems where the gradient of the cost function is available. For such problems, as well as for other problems, specially tailored software exists that is more efficient.

  14. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    Science.gov (United States)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach the investment goal, one has to select a combination of securities from different portfolios containing a large number of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectation and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-Mean Semi Absolute Deviation (λ-MSAD) model is preferred as it uses the absolute deviation of the rate of returns of a portfolio instead of the variance as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from BSE is used for illustration.
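
    The risk measure underlying the model can be made concrete: the semi-absolute deviation averages only the shortfalls below the mean return, which keeps the objective piecewise linear and hence reducible to an LPP. The return series below is hypothetical.

```python
# Sketch of the semi-absolute deviation risk measure: average only the
# downside deviations of returns below their mean. Returns are illustrative.

def semi_absolute_deviation(returns):
    """Average shortfall of returns below their mean."""
    mean = sum(returns) / len(returns)
    return sum(max(0.0, mean - r) for r in returns) / len(returns)

portfolio_returns = [0.04, -0.01, 0.03, 0.00, 0.02]
risk = semi_absolute_deviation(portfolio_returns)
```

    Because each term is a maximum of linear expressions, the measure can be written with auxiliary variables and linear constraints, which is what makes the portfolio model solvable as a linear program.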

  15. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network for accelerating Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the Hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
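
    The objective being maximized can be sketched as a clause-counting function plus a simple greedy bit-flip search (a stand-in for the Hopfield-network dynamics in the paper, not the paper's method). The 3-SAT instance is hypothetical.

```python
import random

# Hypothetical MAX-3SAT instance: each clause has three literals; a positive
# integer i means variable i, a negative integer means its negation.
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (1, 2, 3), (-1, -2, -3)]
n_vars = 3

def satisfied(assignment, clauses):
    """Count clauses with at least one true literal."""
    def lit(l):
        return assignment[abs(l)] if l > 0 else not assignment[abs(l)]
    return sum(any(lit(l) for l in c) for c in clauses)

def local_search(clauses, n_vars, iters=200, seed=0):
    """Greedy bit-flip search: accept any flip that does not lower the count."""
    rng = random.Random(seed)
    best = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best_score = satisfied(best, clauses)
    for _ in range(iters):
        v = rng.randint(1, n_vars)
        trial = dict(best)
        trial[v] = not trial[v]
        score = satisfied(trial, clauses)
        if score >= best_score:
            best, best_score = trial, score
    return best, best_score

assignment, best_score = local_search(clauses, n_vars)
```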

  16. Optimization of a pump-pipe system by dynamic programming

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui; Ferreira, Jose S.

    1984-01-01

    In this paper the problem of minimizing the total cost of a pump-pipe system in series is considered. The route of the pipeline and the number of pumping stations are known. The optimization then consists in determining the control variables: the diameter and thickness of the pipe and the size of the pumps. A general mathematical model is formulated and Dynamic Programming is used to find an optimal solution.
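
    The stage-wise structure of such a problem can be sketched with a small dynamic program: each segment chooses a pipe option (capital cost, head loss), subject to a total head-loss budget that the pumps must overcome. All figures below are illustrative, not from the paper.

```python
# Hypothetical data: for each pipeline segment, candidate pipe choices given
# as (capital cost, head loss). DP state: remaining head-loss budget.
segments = [
    [(100, 4), (140, 2), (200, 1)],   # segment 1
    [(120, 5), (170, 3), (240, 1)],   # segment 2
    [(90, 3), (130, 2), (180, 1)],    # segment 3
]
BUDGET = 7  # maximum total head loss the pumping stations can overcome

def min_cost(segments, budget):
    """Backward DP: cheapest pipe choices meeting the head-loss budget."""
    INF = float("inf")
    # best[b] = min cost of the remaining segments with b units of budget left
    best = [0] * (budget + 1)
    for seg in reversed(segments):
        nxt = best
        best = [INF] * (budget + 1)
        for b in range(budget + 1):
            for cost, loss in seg:
                if loss <= b and nxt[b - loss] + cost < best[b]:
                    best[b] = nxt[b - loss] + cost
    return best[budget]

total = min_cost(segments, BUDGET)
```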

  17. Penempatan Optimal Phasor Measurement Unit (PMU) Dengan Integer Programming

    OpenAIRE

    Amrulloh, Yunan Helmy

    2013-01-01

    The Phasor Measurement Unit (PMU) is a device capable of providing real-time measurements of voltage and current phasors. PMUs can be used for monitoring, protection and control of electric power systems. This final project discusses the optimal placement of PMUs based on network topology so that the power system is observable. The optimal PMU placement is formulated as a Binary Integer Programming (BIP) problem, which yields variables with a choice of values (0,1) that …
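
    The observability constraint behind the BIP formulation can be illustrated on a toy network: each bus must host a PMU or neighbor one, and the smallest such placement is sought (here by brute force; the BIP solver does this at scale). The 5-bus topology is hypothetical.

```python
from itertools import combinations

# Hypothetical 5-bus network as an adjacency list.
adjacency = {
    1: [2],
    2: [1, 3, 4],
    3: [2, 4],
    4: [2, 3, 5],
    5: [4],
}

def observable(placement, adjacency):
    """Every bus must carry a PMU or be adjacent to one."""
    covered = set()
    for bus in placement:
        covered.add(bus)
        covered.update(adjacency[bus])
    return covered == set(adjacency)

def minimal_pmu_placement(adjacency):
    """Brute force over increasing placement sizes (the BIP objective)."""
    buses = sorted(adjacency)
    for k in range(1, len(buses) + 1):
        for placement in combinations(buses, k):
            if observable(placement, adjacency):
                return placement
    return tuple(buses)

placement = minimal_pmu_placement(adjacency)
```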

  18. About the use of vector optimization for company's contractors selection

    Science.gov (United States)

    Medvedeva, M. A.; Medvedev, M. A.

    2017-07-01

    For the effective functioning of an enterprise it is necessary to make the right choice of partners: suppliers of raw material, buyers of finished products, and others with which the company interacts in the course of its business. However, the presence of a large number of enterprises on the market makes the choice of the most appropriate among them very difficult and requires the ability to objectively assess the possible partners, based on a multilateral analysis of their activities. This analysis can be carried out based on the solution of a multiobjective problem of mathematical programming by using the methods of vector optimization. The present work addresses the theoretical foundations of this approach and also describes an algorithm realizing the proposed method on a practical example.

  19. Criteria for Selecting Optimal Nitrogen Fertilizer Rates for Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Francesco Basso

    2011-02-01

    Full Text Available Yield rates vary spatially, and maps produced by yield monitor systems are evidence of the degree of within-field variability. The magnitude of this variability is a good indication of the suitability of implementing a spatially variable management plan. Crop simulation models have the potential to integrate the effects of temporal and multiple stress interactions on crop growth under different environmental and management conditions. The strength of these models is their ability to account for stress by simulating the temporal interaction of stress on plant growth each day during the season. The objective of this paper is to present a procedure that allows for the selection of optimal nitrogen fertilizer rates to be applied spatially on previously identified management zones through crop simulation modelling. The integration of yield maps, remote sensing imagery, ground truth measurements and electrical resistivity imaging allowed for the identification of three distinct management zones based on their ability to produce yield and their stability over time (Basso et al., 2009). After validating the model, we simulated 7 N rates from 0 to 180 kg N/ha with a 30 kg N/ha increment. The model results illustrate the different N responses for each of the zones. The analysis allowed us to identify the optimal N rate for each zone based on the agronomic, economic and environmental sustainability of N management.
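
    The rate-selection step can be sketched as follows: for each management zone, a simulated yield response is evaluated at the 7 candidate N rates and the most profitable rate is kept. The quadratic response coefficients, prices, and zone names are illustrative assumptions, not output of the crop model used in the paper.

```python
# Hedged sketch: pick the most profitable of the 7 simulated N rates per zone.
RATES = range(0, 181, 30)   # kg N/ha: 0, 30, ..., 180 (7 rates)
GRAIN_PRICE = 0.20          # currency per kg grain (illustrative)
N_COST = 0.9                # currency per kg N (illustrative)

# Illustrative quadratic yield response y = a + b*N + c*N^2 per zone (kg/ha).
zones = {
    "high": (4000, 25.0, -0.05),
    "medium": (3000, 20.0, -0.05),
    "low": (2000, 12.0, -0.04),
}

def best_rate(coeffs):
    """Rate maximizing grain revenue minus fertilizer cost for one zone."""
    a, b, c = coeffs
    def profit(n):
        return (a + b * n + c * n * n) * GRAIN_PRICE - N_COST * n
    return max(RATES, key=profit)

optimal = {zone: best_rate(c) for zone, c in zones.items()}
```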

  20. Making the Optimal Decision in Selecting Protective Clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2008-01-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single-use protective clothing (SUPC), and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed, including representatives from budgets and cost, safety, radwaste, health physics, and key user groups, to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'

  1. Criteria for Selecting Optimal Nitrogen Fertilizer Rates for Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Bruno Basso

    Full Text Available Yield rates vary spatially, and maps produced by yield monitor systems are evidence of the degree of within-field variability. The magnitude of this variability is a good indication of the suitability of implementing a spatially variable management plan. Crop simulation models have the potential to integrate the effects of temporal and multiple stress interactions on crop growth under different environmental and management conditions. The strength of these models is their ability to account for stress by simulating the temporal interaction of stress on plant growth each day during the season. The objective of this paper is to present a procedure that allows for the selection of optimal nitrogen fertilizer rates to be applied spatially on previously identified management zones through crop simulation modelling. The integration of yield maps, remote sensing imagery, ground truth measurements and electrical resistivity imaging allowed for the identification of three distinct management zones based on their ability to produce yield and their stability over time (Basso et al., 2009). After validating the model, we simulated 7 N rates from 0 to 180 kg N/ha with a 30 kg N/ha increment. The model results illustrate the different N responses for each of the zones. The analysis allowed us to identify the optimal N rate for each zone based on the agronomic, economic and environmental sustainability of N management.

  2. Criteria for Selecting Optimal Nitrogen Fertilizer Rates for Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Bruno Basso

    2009-12-01

    Full Text Available Yield rates vary spatially, and maps produced by yield monitor systems are evidence of the degree of within-field variability. The magnitude of this variability is a good indication of the suitability of implementing a spatially variable management plan. Crop simulation models have the potential to integrate the effects of temporal and multiple stress interactions on crop growth under different environmental and management conditions. The strength of these models is their ability to account for stress by simulating the temporal interaction of stress on plant growth each day during the season. The objective of this paper is to present a procedure that allows for the selection of optimal nitrogen fertilizer rates to be applied spatially on previously identified management zones through crop simulation modelling. The integration of yield maps, remote sensing imagery, ground truth measurements and electrical resistivity imaging allowed for the identification of three distinct management zones based on their ability to produce yield and their stability over time (Basso et al., 2009). After validating the model, we simulated 7 N rates from 0 to 180 kg N/ha with a 30 kg N/ha increment. The model results illustrate the different N responses for each of the zones. The analysis allowed us to identify the optimal N rate for each zone based on the agronomic, economic and environmental sustainability of N management.

  3. GPAW optimized for Blue Gene/P using hybrid programming

    DEFF Research Database (Denmark)

    Kristensen, Mads Ruben Burgdorff; Happe, Hans Henrik; Vinter, Brian

    2009-01-01

    In this work we present optimizations of a Grid-based projector-augmented wave method software, GPAW, for the Blue Gene/P architecture. The improvements are achieved by exploring the advantage of shared and distributed memory programming, also known as hybrid programming. The work focuses on optimizing a very time consuming operation in GPAW, the finite-difference stencil operation, and different hybrid programming approaches are evaluated. The work succeeds in demonstrating a hybrid programming model which is clearly beneficial compared to the original flat programming model. In total, an improvement of 1.94 compared to the original implementation is obtained. The results we demonstrate here are reasonably general and may be applied to other finite difference codes.

  4. Lean and Efficient Software: Whole Program Optimization of Executables

    Science.gov (United States)

    2016-12-31

    Final Technical Report (Phase I - Base Period), 30-06-2014 to 31-12-2016: "Lean and Efficient Software: Whole-Program Optimization of Executables." Evan Driscoll and Tom Johnson, GrammaTech, Inc., 531 Esty Street, Ithaca, NY 14850.

  5. Optimized bioregenerative space diet selection with crew choice

    Science.gov (United States)

    Vicens, Carrie; Wang, Carolyn; Olabi, Ammar; Jackson, Peter; Hunter, Jean

    2003-01-01

    Previous studies on optimization of crew diets have not accounted for choice. A diet selection model with crew choice was developed. Scenario analyses were conducted to assess the feasibility and cost of certain crew preferences, such as preferences for numerous-desserts, high-salt, and high-acceptability foods. For comparison purposes, a no-choice and a random-choice scenario were considered. The model was found to be feasible in terms of food variety and overall costs. The numerous-desserts, high-acceptability, and random-choice scenarios all resulted in feasible solutions costing between 13.2 and 17.3 kg ESM/person-day. Only the high-sodium scenario yielded an infeasible solution. This occurred when the foods highest in salt content were selected for the crew-choice portion of the diet. This infeasibility can be avoided by limiting the total sodium content in the crew-choice portion of the diet. Cost savings were found by reducing food variety in scenarios where the preference bias strongly affected nutritional content.

  6. Simulation for Nurse Anesthesia Program Selection: Redesigned

    Science.gov (United States)

    Roebuck, John Arthur

    2017-01-01

    Purpose: This project is meant to answer the research question: What applicant character traits do Nurse Anesthesia Program Directors and Faculty identify as favorable predictors for successful completion of a nurse anesthesia program, and what evaluation methods are best to evaluate these traits in prospective students? Methods: A prospective…

  7. Evaluating and Selecting Sport Management Undergraduate Programs.

    Science.gov (United States)

    Cuneen, Jacquelyn; Sidwell, M. Joy

    1998-01-01

    States that the accelerated growth of sport management undergraduate programs that began in the 1980s has continued into the current decade. There are currently 180 sport management major programs in American colleges and universities. Describes the sports management approval process and suggests useful strategies to evaluate sport management…

  8. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    Science.gov (United States)

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…

  9. TRU Waste Management Program. Cost/schedule optimization analysis

    International Nuclear Information System (INIS)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.; Hastings, G.A.

    1985-10-01

    This Current Year Work Plan presents in detail a description of the activities to be performed by the Joint Integration Office Rockwell International (JIO/RI) during FY86. It breaks down the activities into two major work areas: Program Management and Program Analysis. Program Management is performed by the JIO/RI by providing technical planning and guidance for the development of advanced TRU waste management capabilities. This includes equipment/facility design, engineering, construction, and operations. These functions are integrated to allow transition from interim storage to final disposition. JIO/RI tasks include program requirements identification, long-range technical planning, budget development, program planning document preparation, task guidance development, task monitoring, task progress information gathering and reporting to DOE, interfacing with other agencies and DOE lead programs, integrating public involvement with program efforts, and preparation of reports for DOE detailing program status. Program Analysis is performed by the JIO/RI to support identification and assessment of alternatives, and development of long-term TRU waste program capabilities. These analyses include short-term analyses in response to DOE information requests, along with performing an RH Cost/Schedule Optimization report. Systems models will be developed, updated, and upgraded as needed to enhance JIO/RI's capability to evaluate the adequacy of program efforts in various fields. A TRU program data base will be maintained and updated to provide DOE with timely responses to inventory-related questions.

  10. Optimal traffic control in highway transportation networks using linear programming

    KAUST Repository

    Li, Yanning; Canepa, Edward S.; Claudel, Christian G.

    2014-01-01

    of the Hamilton-Jacobi PDE, the problem of controlling the state of the system on a network link in a finite horizon can be posed as a Linear Program. Assuming all intersections in the network are controllable, we show that the optimization approach can

  11. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  12. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks at a large scale, scalability and computational efficiency are considered desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  13. Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization

    Science.gov (United States)

    Gelman, Andrew; Lee, Daniel; Guo, Jiqiang

    2015-01-01

    Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models. It can be called from the command line, R, Python, Matlab, or Julia, and it holds great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…

  14. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng; Yuan, Ganzhao; Ghanem, Bernard

    2013-01-01

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks at a large scale, scalability and computational efficiency are considered desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.

  15. ROTAX: a nonlinear optimization program by axes rotation method

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1977-09-01

    A nonlinear optimization program employing the axes rotation method has been developed for solving nonlinear problems subject to nonlinear inequality constraints and its stability and convergence efficiency were examined. The axes rotation method is a direct search of the optimum point by rotating the orthogonal coordinate system in a direction giving the minimum objective. The searching direction is rotated freely in multi-dimensional space, so the method is effective for the problems represented with the contours having deep curved valleys. In application of the axes rotation method to the optimization problems subject to nonlinear inequality constraints, an improved version of R.R. Allran and S.E.J. Johnsen's method is used, which deals with a new objective function composed of the original objective and a penalty term to consider the inequality constraints. The program is incorporated in optimization code system SCOOP. (auth.)

  16. Discrete Analysis of Portfolio Selection with Optimal Stopping Time

    Directory of Open Access Journals (Sweden)

    Jianfeng Liang

    2009-01-01

    Full Text Available Most investments in practice are carried out without a fixed horizon; many factors can drive an investment to a stop. In this paper, we consider a portfolio selection policy with a market-related stopping time. In particular, we assume that the investor exits the market once his wealth reaches a given investment target or falls below a bankruptcy threshold. Our objective is to minimize the expected time until the investment target is reached, while guaranteeing that the probability of bankruptcy is no larger than a given level. We formulate the problem as a mixed-integer linear programming model and analyze the model using a numerical example.

  17. Using linear programming to analyze and optimize stochastic flow lines

    DEFF Research Database (Denmark)

    Helber, Stefan; Schimmelpfeng, Katja; Stolletz, Raik

    2011-01-01

    This paper presents a linear programming approach to analyze and optimize flow lines with limited buffer capacities and stochastic processing times. The basic idea is to solve a huge but simple linear program that models an entire simulation run of a multi-stage production process in discrete time...... programming and hence allows us to solve buffer allocation problems. We show under which conditions our method works well by comparing its results to exact values for two-machine models and approximate simulation results for longer lines....

  18. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    Science.gov (United States)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts, where the discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization problem, determining for each time period the optimal supplier and the optimal product volume to purchase from that supplier, so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. We give a numerical experiment to evaluate the proposed model. In the result, the optimal supplier was determined for each time period and the inventory level followed the given reference well.
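
    The multi-stage structure lends itself to a textbook backward dynamic program. The sketch below is a minimal illustration with discretized inventory levels and order quantities; the supplier tuples, the two-segment discount form, and the quadratic tracking penalty are assumptions for illustration, not the paper's exact model:

```python
from itertools import product

def plan(T, demand, ref, suppliers, q_max, inv_max):
    """Backward DP for joint supplier selection and inventory tracking.

    State: inventory level at the start of each period.
    Action: (supplier index, order quantity).
    Stage cost: purchase cost with a piecewise-linear discount
    (cheaper unit price beyond a threshold quantity) plus a quadratic
    penalty for deviating from the reference inventory level.
    """
    INF = float("inf")
    # value[t][inv] = minimal cost-to-go from period t with inventory inv
    value = [{i: (0.0 if t == T else INF) for i in range(inv_max + 1)}
             for t in range(T + 1)]
    policy = [dict() for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for inv in range(inv_max + 1):
            for s, q in product(range(len(suppliers)), range(q_max + 1)):
                nxt = inv + q - demand[t]
                if not 0 <= nxt <= inv_max:
                    continue                      # infeasible transition
                unit, disc_qty, disc_unit = suppliers[s]
                buy = unit * min(q, disc_qty) + disc_unit * max(0, q - disc_qty)
                track = (nxt - ref[t]) ** 2       # tracking penalty
                cost = buy + track + value[t + 1][nxt]
                if cost < value[t][inv]:
                    value[t][inv] = cost
                    policy[t][inv] = (s, q)
    return value, policy
```

    The returned policy gives, per period and inventory state, which supplier to use and how much to order.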

  19. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that would cause a collision in the future can be chosen as the final velocity if its pass-time is sufficiently long.

  20. Optimal heavy tail estimation – Part 1: Order selection

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2017-12-01

    Full Text Available The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
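
    The order-selection problem becomes concrete with the classical Hill estimator, whose estimate of α depends directly on how many of the largest values, k, are used. The sketch below computes Hill estimates across orders and picks the most stable neighbouring pair; this is a deliberately naive stand-in for the paper's simulation-based, data-adaptive selector, shown only to illustrate the problem:

```python
import math
import random

def hill(sample, k):
    """Hill estimator of the tail index alpha from the k largest values."""
    xs = sorted(sample, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return k / sum(logs)

def scan_orders(sample, k_min, k_max):
    """Naive order scan: return the (k, alpha) pair where consecutive
    Hill estimates change the least (a crude stability criterion)."""
    ests = [(k, hill(sample, k)) for k in range(k_min, k_max + 1)]
    best = min(range(1, len(ests)),
               key=lambda i: abs(ests[i][1] - ests[i - 1][1]))
    return ests[best]
```

    For exact Pareto data generated by inverse-transform sampling, Hill estimates at moderate orders recover the true exponent closely; for real heavy-tailed data, the estimate drifts with k, which is exactly why a principled order selector is needed.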

  1. Making the optimal decision in selecting protective clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2007-01-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including - Protection; - Personnel Contamination; - Cost; - Radwaste; - Comfort; - Convenience; - Logistics/Rad Material Considerations; - Reject Rate of Laundered Clothing; - Durability; - Security; - Personnel Safety including Heat Stress; - Disposition of Gloves and Booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  2. Selected charts: National Waste Terminal Storage Program

    International Nuclear Information System (INIS)

    1977-01-01

    Staff members of the Office of Waste Isolation on October 21, 1977 reviewed the status of the OWI Waste Management Program for Commissioner E.E. Varanini III, State of California Energy Resources Conservation and Development Commission, and members of his staff. Copies of the viewgraphs and 35-mm slides shown at the briefing are compiled

  3. Defense Acquisitions: Assessments of Selected Weapon Programs

    Science.gov (United States)

    2015-03-01

    is designed to detect, acquire, intercept, and destroy a range of airborne threats. Block II includes hardware and software upgrades intended to... moisture intrusion and degraded performance. Additionally, the program has made changes to the Global Positioning System (GPS) receivers, due to a... preliminary design review in fiscal year 2017. An Independent Review Team identified three critical technologies, advanced inlet particle separator

  4. The optimization of demand response programs in smart grids

    International Nuclear Information System (INIS)

    Derakhshan, Ghasem; Shayanfar, Heidar Ali; Kazemi, Ahad

    2016-01-01

    The potential to schedule a portion of the electricity demand in smart energy systems is clearly a significant opportunity to enhance the efficiency of the grids. Demand response is one of the new developments in the field of electricity, meant to engage consumers in improving the energy consumption pattern. We used the Teaching & Learning Based Optimization (TLBO) and Shuffled Frog Leaping (SFL) algorithms to propose an optimization model for consumption scheduling in a smart grid when payment costs of different periods are reduced. The study considers four types of residential consumers, using summer data for residential houses located in the centre of Tehran, Iran: the first with time-of-use pricing, the second with real-time pricing, the third with critical peak pricing, and the last with no pricing tariff. The results demonstrate that the adoption of demand response programs can reduce total payment costs and determine a more efficient use of optimization techniques. - Highlights: •An optimization model for the demand response program is made. •TLBO and SFL algorithms are applied to reduce payment costs in smart grid. •The optimal condition is provided for the maximization of the social welfare problem. •An application to some residential houses located in the centre of Tehran city in Iran is demonstrated.

  5. Pricing Strategy Selection Using Fuzzy Linear Programming

    OpenAIRE

    Elif Alaybeyoğlu; Y. Esra Albayrak

    2013-01-01

    Marketing establishes a communication network between producers and consumers. Nowadays, marketing approach is customer-focused and products are directly oriented to meet customer needs. Marketing, which is a long process, needs organization and management. Therefore strategic marketing planning becomes more and more important in today’s competitive conditions. Main focus of this paper is to evaluate pricing strategies and select the best pricing strategy solution while considering internal a...

  6. Defense Acquisitions: Assessments of Selected Weapon Programs

    Science.gov (United States)

    2017-03-01

    decision out to the third quarter of fiscal year 2018. In the fiscal year 2017 budget, the Marine Corps funded ACV Increment 1.1 to the level identified... option. The program noted the Air Force will need to make a preliminary decision on an Increment 2 by summer 2017 for budgeting purposes, though the... Armored Multi-Purpose Vehicle (AMPV) 67 Common Infrared Countermeasure (CIRCM) 69 Indirect Fire Protection Capability Increment 2-Intercept Block 1

  7. Medicaid program choice, inertia and adverse selection.

    Science.gov (United States)

    Marton, James; Yelowitz, Aaron; Talbert, Jeffery C

    2017-12-01

    In 2012, Kentucky implemented Medicaid managed care statewide, auto-assigned enrollees to three plans, and allowed switching. Using administrative data, we find that the state's auto-assignment algorithm most heavily weighted cost-minimization and plan balancing, and placed little weight on the quality of the enrollee-plan match. Immobility - apparently driven by health plan inertia - contributed to the success of the cost-minimization strategy, as more than half of enrollees auto-assigned to even the lowest quality plans did not opt-out. High-cost enrollees were more likely to opt-out of their auto-assigned plan, creating adverse selection. The plan with arguably the highest quality incurred the largest initial profit margin reduction due to adverse selection prior to risk adjustment, as it attracted a disproportionate share of high-cost enrollees. The presence of such selection, caused by differential degrees of mobility, raises concerns about the long run viability of the Medicaid managed care market without such risk adjustment. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Enhanced index tracking modeling in portfolio optimization with a mixed-integer programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of portfolio management in stock market investment. Enhanced index tracking aims to construct an optimal portfolio that generates excess return over the return achieved by the stock market index, without purchasing all of the stocks that make up the index. The objective of this paper is to construct an optimal portfolio using a mixed-integer programming model that adopts a regression approach in order to generate a higher portfolio mean return than the stock market index return. In this study, the data consists of 24 component stocks of the Malaysia market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, from January 2010 until December 2012. The results of this study show that the optimal portfolio of the mixed-integer programming model is able to generate a higher mean return than the FTSE Bursa Malaysia Kuala Lumpur Composite Index return while selecting only 30% of the stock market index components.

  9. Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    Chiang Mung

    2006-01-01

    Full Text Available The problem of choosing the optimal multipath components to be employed at a minimum mean square error (MMSE) selective Rake receiver is considered for an impulse radio ultra-wideband system. First, the optimal finger selection problem is formulated as an integer programming problem with a nonconvex objective function. Then, the objective function is approximated by a convex function and the integer programming problem is solved by means of constraint relaxation techniques. The proposed algorithms are suboptimal due to the approximate objective function and the constraint relaxation steps. However, they perform better than the conventional finger selection algorithm, which is suboptimal since it ignores the correlation between multipath components, and they can get quite close to the optimal scheme that cannot be implemented in practice due to its complexity. In addition to the convex relaxation techniques, a genetic-algorithm (GA) based approach is proposed, which does not need any approximations or integer relaxations. This iterative algorithm is based on the direct evaluation of the objective function, and can achieve near-optimal performance with a reasonable number of iterations. Simulation results are presented to compare the performance of the proposed finger selection algorithms with that of the conventional and the optimal schemes.

  10. Portfolio selection problem: a comparison of fuzzy goal programming and linear physical programming

    Directory of Open Access Journals (Sweden)

    Fusun Kucukbay

    2016-04-01

    Full Text Available Investors have a limited budget and try to maximize their return with minimum risk; this study therefore addresses the portfolio selection problem. Two criteria are considered: expected return and risk. In this respect, the linear physical programming (LPP) technique is applied to BIST 100 stocks to find the optimal portfolio. The analysis covers the period April 2009 - March 2015, divided into two sub-periods: April 2009 - March 2014, used as data for finding an optimal solution, and April 2014 - March 2015, used to test the real performance of the portfolios. The performance of the obtained portfolio is compared with that obtained from fuzzy goal programming (FGP). The performances of both methods, LPP and FGP, are then compared with the BIST 100 in terms of their Sharpe indexes. The findings reveal that LPP is a good alternative to FGP for the portfolio selection problem.
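
    The Sharpe index used for the comparison divides mean excess return by its standard deviation. A minimal sketch, with annualisation omitted and a zero risk-free rate assumed by default:

```python
import statistics

def sharpe_index(returns, risk_free=0.0):
    """Sharpe index of a return series: mean excess return per unit of
    (sample) standard deviation of the excess returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)
```

    A higher Sharpe index indicates more return earned per unit of risk taken, which is why it serves as a common yardstick when comparing portfolios built by different optimization methods.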

  11. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
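
    As an illustration of the feedstock-slate question above, the sketch below solves the simplest such LP: maximize total margin subject to per-feedstock availability bounds and one shared processing-capacity constraint. With this structure the LP optimum is the greedy fill by descending margin (the fractional-knapsack solution). The feedstock names and numbers are invented for illustration, not NREL/INL data:

```python
def optimal_slate(capacity, feedstocks):
    """Optimal feedstock slate for a single-capacity-constraint LP.

    feedstocks: list of (name, margin per ton, available tons).
    Greedily fills shared capacity in order of descending margin,
    which is the exact LP optimum for this constraint structure.
    """
    slate, profit = {}, 0.0
    for name, margin, avail in sorted(feedstocks, key=lambda f: -f[1]):
        take = min(avail, capacity)
        if take <= 0 or margin <= 0:
            continue                      # unprofitable or no capacity left
        slate[name] = take
        profit += margin * take
        capacity -= take
    return slate, profit
```

    A full biorefinery LP with multiple conversion pathways and unit constraints needs a proper solver, but the marginal-value logic, filling capacity with the highest-margin feedstock first, carries over and underlies breakeven-price analysis.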

  12. Ultra-fast fluence optimization for beam angle selection algorithms

    Science.gov (United States)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  13. Ultra-fast fluence optimization for beam angle selection algorithms

    International Nuclear Information System (INIS)

    Bangert, M; Ziegenhein, P; Oelfke, U

    2014-01-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ∼ 4 s. BAS runs including FOs for 1000 different beam ensembles take ∼ 15–70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  14. Nonlinear Time Series Prediction Using LS-SVM with Chaotic Mutation Evolutionary Programming for Parameter Optimization

    International Nuclear Information System (INIS)

    Xu Ruirui; Chen Tianlun; Gao Chengfeng

    2006-01-01

    Nonlinear time series prediction is studied by using an improved least squares support vector machine (LS-SVM) regression based on chaotic mutation evolutionary programming (CMEP) approach for parameter optimization. We analyze how the prediction error varies with different parameters (σ, γ) in LS-SVM. In order to select appropriate parameters for the prediction model, we employ CMEP algorithm. Finally, Nasdaq stock data are predicted by using this LS-SVM regression based on CMEP, and satisfactory results are obtained.

  15. Stress-constrained truss topology optimization problems that can be solved by linear programming

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Svanberg, Krister

    2004-01-01

    We consider the problem of simultaneously selecting the material and determining the area of each bar in a truss structure in such a way that the cost of the structure is minimized subject to stress constraints under a single load condition. We show that such problems can be solved by linear...... programming to give the global optimum, and that two different materials are always sufficient in an optimal structure....

  16. Multi-Objective Stochastic Optimization Programs for a Non-Life Insurance Company under Solvency Constraints

    Directory of Open Access Journals (Sweden)

    Massimiliano Kaucic

    2015-09-01

    Full Text Available In the paper, we introduce a multi-objective scenario-based optimization approach for chance-constrained portfolio selection problems. More specifically, a modified version of the normal constraint method is implemented with a global solver in order to generate a dotted approximation of the Pareto frontier for bi- and tri-objective programming problems. Numerical experiments are carried out on a set of portfolios to be optimized for an EU-based non-life insurance company. Both performance indicators and risk measures are managed as objectives. Results show that this procedure is effective and readily applicable to achieve suitable risk-reward tradeoff analysis.

  17. Optimal Placement of Phasor Measurement Units (PMU) with Integer Programming

    Directory of Open Access Journals (Sweden)

    Yunan Helmy Amrulloh

    2013-09-01

    Full Text Available A Phasor Measurement Unit (PMU) is a device capable of providing real-time measurements of voltage and current phasors. PMUs can be used for monitoring, protection, and control of electric power systems. This final project discusses the optimal placement of PMUs based on network topology so that the power system is observable. The optimal PMU placement is formulated as a Binary Integer Programming (BIP) problem whose variables take the values (0,1), indicating the locations where PMUs must be installed. In this final project, BIP is applied to solve the optimal PMU placement problem for the Java-Bali 500 kV power system, and is then extended with the concept of incomplete observability. Simulation results show that applying BIP to the system with incomplete observability requires fewer PMUs than the system without the incomplete observability concept.
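
    The observability constraint behind the BIP model is simple: every bus must either host a PMU or neighbour one. The brute-force sketch below enumerates all 0/1 placements on a small arbitrary bus graph (not the Java-Bali 500 kV system) to find a minimum-size observable placement; a real instance would use a BIP solver instead of enumeration:

```python
from itertools import product

def optimal_pmu_placement(adjacency):
    """Brute-force the binary-integer PMU placement model:
    minimize the number of PMUs such that every bus i is observed,
    i.e. a PMU sits at i or at some neighbour of i.
    Exponential in the number of buses; illustration only.
    """
    n = len(adjacency)
    cover = [set(adjacency[i]) | {i} for i in range(n)]
    best = None
    for x in product((0, 1), repeat=n):          # all 0/1 placements
        placed = {i for i, xi in enumerate(x) if xi}
        if all(placed & cover[i] for i in range(n)):   # observability
            if best is None or sum(x) < sum(best):
                best = x
    return best
```

    On a six-bus chain, two PMUs (at the second and fifth buses) suffice, matching the known domination number of a path graph.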

  18. Optimal traffic control in highway transportation networks using linear programming

    KAUST Repository

    Li, Yanning

    2014-06-01

    This article presents a framework for the optimal control of boundary flows on transportation networks. The state of the system is modeled by a first-order scalar conservation law (the Lighthill-Whitham-Richards PDE). Based on an equivalent formulation of the Hamilton-Jacobi PDE, the problem of controlling the state of the system on a network link over a finite horizon can be posed as a linear program. Assuming all intersections in the network are controllable, we show that the optimization approach can be extended to an arbitrary transportation network while preserving linear constraints. Unlike previously investigated transportation network control schemes, this framework leverages the intrinsic properties of the Hamilton-Jacobi equation and does not require any discretization or boolean variables on the link. Hence this framework is very computationally efficient and provides the globally optimal solution. The feasibility of this framework is illustrated by an on-ramp metering control example.

  19. Optimal fringe angle selection for digital fringe projection technique.

    Science.gov (United States)

    Wang, Yajun; Zhang, Song

    2013-10-10

    Existing digital fringe projection (DFP) systems mainly use either horizontal or vertical fringe patterns for three-dimensional shape measurement. This paper reveals that these two fringe directions are usually not optimal, in the sense that they do not give the largest phase change for a given depth variation. We propose a novel and efficient method to determine the optimal fringe angle by projecting a set of horizontal and vertical fringe patterns onto a step-height object and further analyzing the two resultant phase maps. Experiments demonstrate the existence of the optimal angle and the success of the proposed optimal angle determination method.
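
    One way to read the determination step: the phase changes measured across the step height with horizontal and vertical fringes define the direction of maximum phase sensitivity. The one-liner below is our assumed formalization of that idea, not necessarily the authors' exact formula:

```python
import math

def optimal_fringe_angle(dphi_h, dphi_v):
    """Assumed combination rule: given the phase change across the depth
    step measured with horizontal fringes (dphi_h) and vertical fringes
    (dphi_v), the direction of largest phase response lies along the
    vector (dphi_h, dphi_v). Returns the fringe angle in degrees."""
    return math.degrees(math.atan2(dphi_v, dphi_h))
```

    With equal responses in both directions the rule yields 45 degrees; if only the horizontal pattern responds, the optimal angle degenerates to the horizontal direction itself.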

  20. Adaptive dynamic programming with applications in optimal control

    CERN Document Server

    Liu, Derong; Wang, Ding; Yang, Xiong; Li, Hongliang

    2017-01-01

    This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP making sure that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete- and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration to demonstrate its convergence, optimality, and stability with complete and thorough theoretical analysis. A more realistic form of value iteration is studied where value function approximations are assumed to have finite errors. Adaptive Dynamic Programming also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete and thorough theoretical analysis in terms of convergence, optimality, stability, and error bounds. Among continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP app...

  1. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos

    2009-01-01

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling the nonconvexities and nonsmoothness of reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on Gaussian distribution and chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature
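
    A minimal PSO in the spirit of PSO-GC can be sketched as follows; the Gaussian velocity coefficients and the logistic-map (chaotic) inertia schedule are illustrative assumptions, since the paper's exact constants are not given in the abstract:

```python
import random

def pso_gc(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO sketch with Gaussian-distributed acceleration terms and
    a chaotic (logistic-map) sequence modulating the inertia weight."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    z = 0.48                                  # chaotic sequence seed
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)               # logistic map, chaotic at r=4
        w = 0.4 + 0.5 * z                     # chaotically varying inertia
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + abs(rng.gauss(0, 1)) * (pbest[i][d] - pos[i][d])
                             + abs(rng.gauss(0, 1)) * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval
```

    On a smooth test function such as the sphere, the swarm contracts toward the origin; the chaotic inertia keeps the exploration/exploitation balance varying instead of fixed.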

  2. An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Pontifical Catholic University of Parana, PUCPR, Imaculada Conceicao, 1155, 80215-901 Curitiba, Parana (Brazil)], E-mail: leandro.coelho@pucpr.br

    2009-04-15

    The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling the nonconvexities and nonsmoothness of reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO). PSO is a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on Gaussian distribution and chaotic sequence (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by the PSO-GC are better than the previously best-known solutions available in the recent literature.

  3. SEWER NETWORK DISCHARGE OPTIMIZATION USING THE DYNAMIC PROGRAMMING

    Directory of Open Access Journals (Sweden)

    Viorel MINZU

    2015-12-01

    Full Text Available It is necessary to adopt optimal control that allows efficient usage of existing sewer networks, in order to avoid building new retention facilities. The main objective of the control action is to minimize the overflow volume of a sewer network. This paper proposes a method to apply, within a realistic closed-loop system, a solution obtained by discrete dynamic programming.

  4. How to Use Linear Programming for Information System Performances Optimization

    Directory of Open Access Journals (Sweden)

    Hell Marko

    2014-09-01

Full Text Available Background: Organisations nowadays operate in a very dynamic environment, and therefore their ability to continuously adjust the strategic plan to new conditions is a must for achieving their strategic objectives. BSC is a well-known methodology for measuring performances, enabling organizations to learn how well they are doing. In this paper, “BSC for IS” is proposed in order to measure the IS impact on the achievement of organizations’ business goals. Objectives: The objective of this paper is to present the original procedure used to enhance the BSC methodology in planning the optimal targets of IS performance values in order to maximize the organization's effectiveness. Methods/Approach: The method used in this paper is a quantitative one: linear programming. In the case study, linear programming is used for optimizing the organization’s strategic performance. Results: Results are shown for the case study of a national park. An optimal performance value has been calculated for the strategic objective, as well as for each derived objective (DO). Results are calculated in Excel, using the Solver Add-in. Conclusions: The presentation of the methodology through the case study of a national park shows that, though it requires a high level of formalisation, it provides a very transparent performance calculation.

  5. Spectral Quantitative Analysis Model with Combining Wavelength Selection and Topology Structure Optimization

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2016-01-01

Full Text Available Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology structure optimization is proposed. In the proposed method, a backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, component prediction models are built with partial least squares on the full spectrum, partial least squares combined with a genetic algorithm, the uninformative variable elimination method, a backpropagation neural network on the full spectrum, a backpropagation neural network combined with a genetic algorithm, and the proposed method. Experimental results verify that the proposed method predicts more accurately and robustly and can serve as a practical spectral analysis tool.

  6. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

Enhanced index tracking is a popular form of passive fund management in the stock market. It aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems involving the two different goals in enhanced index tracking: a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, through a higher mean return and lower risk without purchasing all the stocks in the market index.
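
The trade-off described above can be made concrete with a tiny weighted goal programming sketch: each candidate portfolio has a (mean return, risk) pair, and the selection minimizes the weighted sum of the shortfall below a return goal and the excess above a risk goal. The candidate numbers and goal levels are illustrative assumptions, not data from the paper.

```python
def goal_program(candidates, ret_goal, risk_goal, w_ret=1.0, w_risk=1.0):
    """Weighted goal programming over a finite candidate set: minimize the
    weighted sum of unwanted deviations from the two goals."""
    def deviation(p):
        ret, risk = p
        under_ret = max(0.0, ret_goal - ret)    # shortfall below the return goal
        over_risk = max(0.0, risk - risk_goal)  # excess above the risk goal
        return w_ret * under_ret + w_risk * over_risk
    return min(candidates, key=deviation)

# (mean return, risk) of a few hypothetical candidate portfolios
portfolios = [(0.06, 0.04), (0.09, 0.10), (0.08, 0.05), (0.11, 0.20)]
chosen = goal_program(portfolios, ret_goal=0.08, risk_goal=0.06)
```

A full model would optimize continuous asset weights with an LP solver; enumerating candidates just makes the deviation objective visible.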

  7. Optimal Selective Harmonic Control for Power Harmonics Mitigation

    DEFF Research Database (Denmark)

    Zhou, Keliang; Yang, Yongheng; Blaabjerg, Frede

    2015-01-01

    of power harmonics. The proposed optimal SHC is of hybrid structure: all recursive SHC modules with weighted gains are connected in parallel. It bridges the real “nk+-m order RC” and the complex “parallel structure RC”. Compared to other IMP based control solutions, it offers an optimal trade-off among...

  8. A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2010-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  9. Software for industrial consumers electrical energy tariff optimal selection

    OpenAIRE

    Simona Ardelean; A. Ceclan; L. Czumbil; D. D. Micu; E. Simion

    2008-01-01

This paper briefly presents some electrical energy management techniques and proposes a software product dedicated to the automatic selection of the optimal tariff structure for industrial consumers. The optimal choice of the electrical energy invoicing model proves to be an efficient way to bring quality and economy to any company's administration. An advanced description of the proposed software is also presented.

  10. On the Selection of Optimal Index Configuration in OO Databases

    NARCIS (Netherlands)

    Choenni, R.S.; Bertino, E.; Blanken, Henk; Chang, S.C.

An operation in object-oriented databases gives rise to the processing of a path. Several database operations may result in the same path. The authors address the problem of optimal index configuration for a single path. As is shown, an optimal index configuration for a path can be achieved by

  11. A polynomial optimization approach to constant rebalanced portfolio selection

    NARCIS (Netherlands)

    Takano, Y.; Sotirov, R.

    2012-01-01

    We address the multi-period portfolio optimization problem with the constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) by using a mean-variance criterion. In order to solve the POPs of high degree, we develop a cutting-plane algorithm based on

  12. Optimal investment in a portfolio of HIV prevention programs.

    Science.gov (United States)

    Zaric, G S; Brandeau, M L

    2001-01-01

In this article, the authors determine the optimal allocation of HIV prevention funds and investigate the impact of different allocation methods on health outcomes. The authors present a resource allocation model that can be used to determine the allocation of HIV prevention funds that maximizes quality-adjusted life years (or life years) gained or HIV infections averted in a population over a specified time horizon. They apply the model to determine the allocation of a limited budget among 3 types of HIV prevention programs in a population of injection drug users and nonusers: needle exchange programs, methadone maintenance treatment, and condom availability programs. For each prevention program, the authors estimate a production function that relates the amount invested to the associated change in risky behavior. The authors determine the optimal allocation of funds for both objective functions for a high-prevalence population and a low-prevalence population. They also consider the allocation of funds under several common rules of thumb that are used to allocate HIV prevention resources. It is shown that simpler allocation methods (e.g., allocation based on HIV incidence or notions of equity among population groups) may lead to allocations that do not yield the maximum health benefit. The optimal allocation of HIV prevention funds in a population depends on HIV prevalence and incidence, the objective function, the production functions for the prevention programs, and other factors. Consideration of cost, equity, and social and political norms may be important when allocating HIV prevention funds. The model presented in this article can help decision makers determine the health consequences of different allocations of funds.

  13. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. SMOLER uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules; the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters; each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER; the selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
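
The selection step in the abstract above can be caricatured as scoring each Pareto-front point with a rule-based utility and keeping the maximizer. This sketch uses a plain weighted sum of sensitivity and specificity as a simplified stand-in for the full evidential reasoning aggregation; the front values and the equal weights are invented for illustration.

```python
def select_from_pareto(pareto, w_sens=0.5, w_spec=0.5):
    """Pick the Pareto solution with the highest utility. The weighted sum
    here is a simplified stand-in for evidential reasoning aggregation."""
    def utility(s):
        sens, spec = s
        return w_sens * sens + w_spec * spec
    return max(pareto, key=utility)

# (sensitivity, specificity) pairs on a hypothetical Pareto front
front = [(0.95, 0.60), (0.88, 0.75), (0.80, 0.82), (0.70, 0.90)]
best = select_from_pareto(front)
```

Changing the weights encodes a different selection rule, e.g. favoring sensitivity when missed failures are costlier than false alarms.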

  14. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. SMOLER uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules; the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters; each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER; the selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  15. Designing optimal food intake patterns to achieve nutritional goals for Japanese adults through the use of linear programming optimization models.

    Science.gov (United States)

    Okubo, Hitomi; Sasaki, Satoshi; Murakami, Kentaro; Yokoyama, Tetsuji; Hirota, Naoko; Notsu, Akiko; Fukui, Mitsuru; Date, Chigusa

    2015-06-06

Simultaneous dietary achievement of a full set of nutritional recommendations is difficult. A diet optimization model using linear programming is a useful mathematical means of translating nutrient-based recommendations into realistic nutritionally-optimal food combinations incorporating local and culture-specific foods. We used this approach to explore optimal food intake patterns that meet the nutrient recommendations of the Dietary Reference Intakes (DRIs) while incorporating typical Japanese food selections. As observed intake values, we used the food and nutrient intake data of 92 women aged 31-69 years and 82 men aged 32-69 years living in three regions of Japan. Dietary data were collected with a semi-weighed dietary record on four non-consecutive days in each season of the year (16 days total). The linear programming models were constructed to minimize the differences between observed and optimized food intake patterns while also meeting the DRIs for a set of 28 nutrients, setting energy equal to estimated requirements, and not exceeding typical quantities of each food consumed by each age (30-49 or 50-69 years) and gender group. We successfully developed mathematically optimized food intake patterns that met the DRIs for all 28 nutrients studied in each sex and age group. Achieving nutritional goals required minor modifications of existing diets in older groups, particularly women, while major modifications were required to increase intake of fruit and vegetables in younger groups of both sexes. Across all sex and age groups, optimized food intake patterns demanded greatly increased intake of whole grains and reduced-fat dairy products in place of intake of refined grains and full-fat dairy products. Salt intake goals were the most difficult to achieve, requiring marked reduction of salt-containing seasoning (65-80%) in all sex and age groups. Using a linear programming model, we identified optimal food intake patterns providing practical food choices and
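
The objective and constraints in the abstract can be made concrete with a two-food caricature of the model: minimize the total deviation from observed intakes subject to nutrient minima. A real model would hand 28 nutrient constraints to an LP solver; this sketch brute-forces a coarse grid over two hypothetical foods, and all the numbers (nutrient contents, minima, observed intakes) are invented.

```python
def optimize_diet(observed, nutrients, minima, step=1.0, max_qty=20.0):
    """Tiny stand-in for the diet LP: minimize L1 deviation from observed
    intakes of two foods subject to per-nutrient minimum constraints."""
    best, best_dev = None, float("inf")
    qty = [i * step for i in range(int(max_qty / step) + 1)]
    for x0 in qty:
        for x1 in qty:
            # check that every nutrient minimum is met by this food combination
            if all(n[0] * x0 + n[1] * x1 >= m for n, m in zip(nutrients, minima)):
                dev = abs(x0 - observed[0]) + abs(x1 - observed[1])
                if dev < best_dev:
                    best, best_dev = (x0, x1), dev
    return best, best_dev

# foods: (whole grains, vegetables); rows are per-unit nutrient contents
nutrients = [(2.0, 1.0),   # e.g. fiber per unit of each food
             (0.5, 3.0)]   # e.g. a vitamin per unit of each food
plan, dev = optimize_diet(observed=(3.0, 2.0), nutrients=nutrients,
                          minima=(12.0, 12.0))
```

With these numbers the observed diet (3, 2) is infeasible, and the nearest feasible plan raises both foods to 4 units each, mirroring the paper's "minor vs. major modification" finding in miniature.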

  16. Postdoctoral periodontal program directors' perspectives of resident selection.

    Science.gov (United States)

    Khan, Saba; Carmosino, Andrew J; Yuan, Judy Chia-Chun; Lucchiari, Newton; Kawar, Nadia; Sukotjo, Cortino

    2015-02-01

    Applications for postdoctoral periodontal programs have recently increased. The National Board Dental Examinations (NBDE) has adopted a pass/fail format. The purpose of this study is to examine the criteria used by accredited postdoctoral periodontal programs in the United States to evaluate potential applicants. A secondary purpose was to determine whether the absence of NBDE scores would change program directors' selection process. Basic demographic information of the program directors was also collected. A questionnaire was sent to all 54 program directors of accredited postdoctoral periodontal programs in the United States. The raw data were compiled, descriptive analyses were performed, and results were tabulated and ranked when applicable. Thirty-five of 54 program directors (64.8%) responded to the survey. The five most important factors in selecting residents were: 1) interview ratings; 2) dental school clinical grades; 3) dental school periodontics grades; 4) personal statement; and 5) letters of recommendation. The majority of the programs (94%; n = 33) require an interview, and many (86%; n = 30) have a committee that makes the final decision on candidate acceptance. More than half of the respondents (56%; n = 17) stated that the pass/fail format of the NBDE would affect the decision-making process. This study describes the criteria used by postdoctoral periodontal programs to help select applicants. Interview ratings, dental school grades, personal statements, and letters of recommendation were found to be the most important factors. Results from this study may be helpful for prospective postdoctoral periodontal program applicants in the United States.

  17. Fetal programming of schizophrenia: select mechanisms.

    Science.gov (United States)

    Debnath, Monojit; Venkatasubramanian, Ganesan; Berk, Michael

    2015-02-01

    Mounting evidence indicates that schizophrenia is associated with adverse intrauterine experiences. An adverse or suboptimal fetal environment can cause irreversible changes in brain that can subsequently exert long-lasting effects through resetting a diverse array of biological systems including endocrine, immune and nervous. It is evident from animal and imaging studies that subtle variations in the intrauterine environment can cause recognizable differences in brain structure and cognitive functions in the offspring. A wide variety of environmental factors may play a role in precipitating the emergent developmental dysregulation and the consequent evolution of psychiatric traits in early adulthood by inducing inflammatory, oxidative and nitrosative stress (IO&NS) pathways, mitochondrial dysfunction, apoptosis, and epigenetic dysregulation. However, the precise mechanisms behind such relationships and the specificity of the risk factors for schizophrenia remain exploratory. Considering the paucity of knowledge on fetal programming of schizophrenia, it is timely to consolidate the recent advances in the field and put forward an integrated overview of the mechanisms associated with fetal origin of schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Expected value based fuzzy programming approach to solve integrated supplier selection and inventory control problem with fuzzy demand

    Science.gov (United States)

    Sutrisno; Widowati; Sunarsih; Kartono

    2018-01-01

In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for the integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected value based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to purchase from each supplier in each time period and the optimal amount of each product to store in inventory in each time period were determined with minimum total cost, and the inventory level remained sufficiently close to the reference level.
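
The expected value based step can be sketched as follows: defuzzify a triangular fuzzy demand with the common expected-value operator (a + 2b + c)/4 and then solve the resulting crisp problem. The single-product, cheapest-supplier rule below is a drastic simplification of the paper's quadratic program, and all quantities are hypothetical.

```python
def expected_value(tri):
    """Expected value of a triangular fuzzy number (a, b, c): (a + 2b + c)/4."""
    a, b, c = tri
    return (a + 2.0 * b + c) / 4.0

def optimal_orders(fuzzy_demand, unit_costs, inventory):
    """Crisp equivalent of a toy one-product problem: meet expected demand,
    drawing down current inventory first and buying the rest from the
    cheapest supplier."""
    demand = expected_value(fuzzy_demand)
    to_buy = max(0.0, demand - inventory)
    cheapest = min(range(len(unit_costs)), key=lambda i: unit_costs[i])
    orders = [0.0] * len(unit_costs)
    orders[cheapest] = to_buy
    return orders

# triangular fuzzy demand (low, mode, high) and three suppliers' unit costs
orders = optimal_orders(fuzzy_demand=(80.0, 100.0, 140.0),
                        unit_costs=[5.0, 4.0, 6.0], inventory=30.0)
```

Here the expected demand is 105, inventory covers 30, and the remaining 75 units go to the cheapest supplier; the full model would also trade purchasing against holding costs across periods.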

  19. A man in the loop trajectory optimization program (MILTOP)

    Science.gov (United States)

    Reinfields, J.

    1974-01-01

An interactive trajectory optimization program is developed for use in the initial fixing of launch configurations. The program is called MILTOP, for Man-In-the-Loop Trajectory Optimization Program. It is designed to facilitate quick-look studies using man-machine decision combinations to reduce the time required to solve a given problem. MILTOP integrates the equations of motion of a point mass in three dimensions with drag as the only aerodynamic force present. Any point in time at which an integration step terminates may be used as a decision break point, with complete user control over all variables and routines at this point. Automatic phases are provided for different modes of control: vertical rise, pitch-over, gravity turn, chi-freeze, and control turn. Stage parameters are initialized from a separate routine, so the user may fly as many stages as the problem demands. The MILTOP system can be used either interactively on storage-scope consoles or in batch mode with numerical output on the line printer.

  20. Optimisation of selective breeding program for Nile tilapia (Oreochromis niloticus)

    NARCIS (Netherlands)

    Trong, T.Q.

    2013-01-01

    The aim of this thesis was to optimise the selective breeding program for Nile tilapia in the Mekong Delta region of Vietnam. Two breeding schemes, the “classic” BLUP scheme following the GIFT method (with pair mating) and a rotational mating scheme with own performance selection and

  1. Programming in the Zone: Repertoire Selection for the Large Ensemble

    Science.gov (United States)

    Hopkins, Michael

    2013-01-01

    One of the great challenges ensemble directors face is selecting high-quality repertoire that matches the musical and technical levels of their ensembles. Thoughtful repertoire selection can lead to increased student motivation as well as greater enthusiasm for the music program from parents, administrators, teachers, and community members. Common…

  2. A hybrid agent-based computational economics and optimization approach for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Zahra Pourabdollahi

    2017-12-01

    Full Text Available Supplier evaluation and selection problem is among the most important of logistics decisions that have been addressed extensively in supply chain management. The same logistics decision is also important in freight transportation since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criteria and the state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach for supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both behavioral and economical aspects of the supplier selection process. The model uses a system of ordered response models to determine importance weights of the different criteria in supplier evaluation from a buyers’ point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and rank them. The calculated utilities are then entered into a mathematical programming model in which best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs while balancing the capacity of potential suppliers to ensure market clearing mechanisms. The proposed model, herein, was implemented under an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.

  3. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    Science.gov (United States)

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.
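
The genetic algorithm used to solve the bi-level model can be sketched at the upper level as a subset-selection GA: each bit switches one candidate bus line on or off, the budget constraint is enforced by a penalty, and the lower-level passenger assignment is collapsed into a fixed per-line benefit (total travel time saved). The costs, benefits, and GA parameters below are invented for illustration.

```python
import random

def ga_select_lines(costs, benefits, budget, pop=30, gens=60, seed=7):
    """Toy GA: choose a subset of candidate bus lines maximizing total
    benefit subject to a budget (infeasible subsets get a heavy penalty)."""
    rng = random.Random(seed)
    n = len(costs)

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        if cost > budget:
            return -1e9  # penalty for exceeding the budget
        return sum(v for v, b in zip(benefits, bits) if b)

    popl = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=fitness, reverse=True)
        elite = popl[: pop // 2]                 # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(n)
            child[i] = 1 - child[i]              # bit-flip mutation
            children.append(child)
        popl = elite + children
    best = max(popl, key=fitness)
    return best, fitness(best)

# four hypothetical candidate lines: (construction cost, travel-time benefit)
best, value = ga_select_lines(costs=[4, 3, 5, 2], benefits=[7, 4, 8, 3], budget=9)
```

In the real bi-level model, each fitness evaluation would rerun the capacity-constrained assignment of passengers to the selected lines instead of summing fixed benefits.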

  4. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    Directory of Open Access Journals (Sweden)

    Qun Chen

    Full Text Available The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  5. Optimal timing of joint replacement using mathematical programming and stochastic programming models.

    Science.gov (United States)

    Keren, Baruch; Pliskin, Joseph S

    2011-12-01

The optimal timing for performing radical medical procedures such as joint (e.g., hip) replacement must be seriously considered. In this paper we show that under deterministic assumptions the optimal timing for joint replacement is the solution of a mathematical programming problem, and under stochastic assumptions the optimal timing can be formulated as a stochastic programming problem. We formulate deterministic and stochastic models that can serve as decision support tools. The results show that the benefit from joint replacement surgery is heavily dependent on timing. Moreover, for a special case where the patient's remaining life is normally distributed along with a normally distributed survival of the new joint, the expected benefit function from surgery is completely solved. This enables practitioners to draw the expected benefit graph, to find the optimal timing, to evaluate the benefit for each patient, to set priorities among patients, and to decide if joint replacement should be performed and when.

  6. A Simulation Modeling Framework to Optimize Programs Using Financial Incentives to Motivate Health Behavior Change.

    Science.gov (United States)

    Basu, Sanjay; Kiernan, Michaela

    2016-01-01

    While increasingly popular among mid- to large-size employers, using financial incentives to induce health behavior change among employees has been controversial, in part due to poor quality and generalizability of studies to date. Thus, fundamental questions have been left unanswered: To generate positive economic returns on investment, what level of incentive should be offered for any given type of incentive program and among which employees? We constructed a novel modeling framework that systematically identifies how to optimize marginal return on investment from programs incentivizing behavior change by integrating commonly collected data on health behaviors and associated costs. We integrated "demand curves" capturing individual differences in response to any given incentive with employee demographic and risk factor data. We also estimated the degree of self-selection that could be tolerated: that is, the maximum percentage of already-healthy employees who could enroll in a wellness program while still maintaining positive absolute return on investment. In a demonstration analysis, the modeling framework was applied to data from 3000 worksite physical activity programs across the nation. For physical activity programs, the incentive levels that would optimize marginal return on investment ($367/employee/year) were higher than average incentive levels currently offered ($143/employee/year). Yet a high degree of self-selection could undermine the economic benefits of the program; if more than 17% of participants came from the top 10% of the physical activity distribution, the cost of the program would be expected to always be greater than its benefits. Our generalizable framework integrates individual differences in behavior and risk to systematically estimate the incentive level that optimizes marginal return on investment. © The Author(s) 2015.
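
The optimization described above can be sketched by scanning candidate incentive levels against an assumed "demand curve" (the fraction of employees who respond at each level) and picking the level with the highest net return. The saturating curve, the per-responder savings, and the workforce size below are all hypothetical, not numbers from the study.

```python
def optimal_incentive(levels, participation, benefit_per_participant, n_employees):
    """Scan incentive levels; net return = savings from participants minus
    the incentives paid to them (a one-period simplification)."""
    def net(level):
        p = participation(level)  # assumed demand curve: fraction enrolling
        return n_employees * p * (benefit_per_participant - level)
    best_level = max(levels, key=net)
    return best_level, net(best_level)

# illustrative saturating demand curve: participation rises linearly, caps at 1
curve = lambda level: min(1.0, level / 500.0)
best_level, best_net = optimal_incentive(
    levels=range(0, 501, 50), participation=curve,
    benefit_per_participant=300.0, n_employees=1000)
```

With these assumptions the net return is a parabola in the incentive level, peaking at $150 per employee: raising the incentive recruits more participants but pays more to each, which is exactly the marginal trade-off the framework formalizes (before adding the self-selection correction).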

  7. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    Science.gov (United States)

    Stults, Ian C.

The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model

  8. Project Selection for NASA's R&D Programs

    Science.gov (United States)

    Jones, Harry

    2005-01-01

The purpose of NASA's Research and Development (R&D) programs is to provide advanced human support technologies for the Exploration Systems Mission Directorate (ESMD). The new technologies must be sufficiently attractive and proven to be selectable for future missions. This requires identifying promising candidate technologies and advancing their technology readiness until they are likely options for flight. The R&D programs must select an array of technology development projects, manage them, and either terminate or continue them, so as to maximize the delivered number of potentially usable advanced human support technologies. This paper proposes an effective project selection methodology to help manage NASA R&D project portfolios.

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    We study and answer questions related to the complexity of various important problems such as: multi-frontal solvers of the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is modeling classes of algorithms that may solve a problem using a discrete model of computation, then defining cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (element partition tree) that represents a class of algorithms for multi-frontal solvers along with cost functions reflecting various complexity measures such as: time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes such as: meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced, and shows how to use a decision table to model such a problem. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees whose depth

  11. Exploiting variability for energy optimization of parallel programs

    Energy Technology Data Exchange (ETDEWEB)

    Lavrijsen, Wim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); de Jong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Xin [Georgia Inst. of Technology, Atlanta, GA (United States); Schwan, Karsten [Georgia Inst. of Technology, Atlanta, GA (United States)

    2016-04-18

    In this paper we present optimizations that use DVFS mechanisms to reduce the total energy usage in scientific applications. Our main insight is that noise is intrinsic to large scale parallel executions and it appears whenever shared resources are contended. The presence of noise allows us to identify and manipulate any program regions amenable to DVFS. When compared to previous energy optimizations that make per core decisions using predictions of the running time, our scheme uses a qualitative approach to recognize the signature of executions amenable to DVFS. By recognizing the "shape of variability" we can optimize codes with highly dynamic behavior, which pose challenges to all existing DVFS techniques. We validate our approach using offline and online analyses for one-sided and two-sided communication paradigms. We have applied our methods to NWChem, and we show best case improvements in energy use of 12% at no loss in performance when using online optimizations running on 720 Haswell cores with one-sided communication. With NWChem on MPI two-sided and offline analysis, capturing the initialization, we find energy savings of up to 20%, with less than 1% performance cost.

  12. Age-Related Differences in Goals: Testing Predictions from Selection, Optimization, and Compensation Theory and Socioemotional Selectivity Theory

    Science.gov (United States)

    Penningroth, Suzanna L.; Scott, Walter D.

    2012-01-01

    Two prominent theories of lifespan development, socioemotional selectivity theory and selection, optimization, and compensation theory, make similar predictions for differences in the goal representations of younger and older adults. Our purpose was to test whether the goals of younger and older adults differed in ways predicted by these two…

  13. Local beam angle optimization with linear programming and gradient search

    International Nuclear Information System (INIS)

    Craft, David

    2007-01-01

    The optimization of beam angles in IMRT planning is still an open problem, with literature focusing on heuristic strategies and exhaustive searches on discrete angle grids. We show how a beam angle set can be locally refined in a continuous manner using gradient-based optimization in the beam angle space. The gradient is derived using linear programming duality theory. Applying this local search to 100 random initial angle sets of a phantom pancreatic case demonstrates the method, and highlights the many-local-minima aspect of the beam angle optimization (BAO) problem. Due to this function structure, we recommend a search strategy of a thorough global search followed by local refinement at promising beam angle sets. Extensions to nonlinear IMRT formulations are discussed. (note)
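
    The recommended strategy of a thorough global scan followed by local refinement can be illustrated with multi-start gradient descent on a multi-minima surrogate objective. This is purely illustrative: the paper derives its gradient from LP duality, which is replaced here by central finite differences, and the one-dimensional objective stands in for a dose-based cost:

```python
import math

def grad(f, x, h=1e-6):
    # central-difference numerical gradient (stand-in for the paper's
    # LP-duality-based gradient)
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def local_refine(f, x0, step=0.01, iters=500):
    # plain gradient descent from a given starting angle set
    x = list(x0)
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, grad(f, x))]
    return x, f(x)

# multi-minima surrogate objective in one "angle" dimension (illustrative)
f = lambda x: math.sin(3 * x[0]) + 0.1 * x[0] ** 2

# coarse global scan, then local refinement at every start
starts = [[-5.0 + 0.5 * i] for i in range(21)]
best_x, best_val = min((local_refine(f, s) for s in starts),
                       key=lambda t: t[1])
```

Refining from a single start would usually return one of the shallower local minima; the scan-then-refine combination recovers the global one, mirroring the search strategy recommended above.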

  14. Particle swarm optimization for programming deep brain stimulation arrays.

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D

    2017-02-01

    Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate for (1) lead displacement (1 mm) with relatively small ROI (⩽9.2%) and ROA (⩽1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n  =  3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations showed discrepancies
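
    As a generic illustration of the underlying optimizer, here is a minimal single-objective PSO sketch. The coefficients and the sphere objective are illustrative only; the paper's version is multi-objective over electrode configurations and produces a Pareto front rather than a single optimum:

```python
import random

random.seed(1)  # deterministic demo run

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    # initialize particle positions, velocities, and personal bests
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive (personal) + social (swarm) terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere function as a stand-in objective
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

In the DBS setting each particle would encode an electrode configuration and stimulation amplitude instead of a point in Euclidean space, and the scalar objective would be replaced by the ROI/ROA/power trade-off described above.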

  15. Particle Swarm Optimization for Programming Deep Brain Stimulation Arrays

    Science.gov (United States)

    Peña, Edgar; Zhang, Simeng; Deyo, Steve; Xiao, YiZi; Johnson, Matthew D.

    2017-01-01

    Objective Deep brain stimulation (DBS) therapy relies on both precise neurosurgical targeting and systematic optimization of stimulation settings to achieve beneficial clinical outcomes. One recent advance to improve targeting is the development of DBS arrays (DBSAs) with electrodes segmented both along and around the DBS lead. However, increasing the number of independent electrodes creates the logistical challenge of optimizing stimulation parameters efficiently. Approach Solving such complex problems with multiple solutions and objectives is well known to occur in biology, in which complex collective behaviors emerge out of swarms of individual organisms engaged in learning through social interactions. Here, we developed a particle swarm optimization (PSO) algorithm to program DBSAs using a swarm of individual particles representing electrode configurations and stimulation amplitudes. Using a finite element model of motor thalamic DBS, we demonstrate how the PSO algorithm can efficiently optimize a multi-objective function that maximizes predictions of axonal activation in regions of interest (ROI, cerebellar-receiving area of motor thalamus), minimizes predictions of axonal activation in regions of avoidance (ROA, somatosensory thalamus), and minimizes power consumption. Main Results The algorithm solved the multi-objective problem by producing a Pareto front. ROI and ROA activation predictions were consistent across swarms (<1% median discrepancy in axon activation). The algorithm was able to accommodate for (1) lead displacement (1 mm) with relatively small ROI (≤9.2%) and ROA (≤1%) activation changes, irrespective of shift direction; (2) reduction in maximum per-electrode current (by 50% and 80%) with ROI activation decreasing by 5.6% and 16%, respectively; and (3) disabling electrodes (n=3 and 12) with ROI activation reduction by 1.8% and 14%, respectively. Additionally, comparison between PSO predictions and multi-compartment axon model simulations

  16. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  17. A first formal link between the price equation and an optimization program.

    Science.gov (United States)

    Grafen, Alan

    2002-07-07

    The Darwin unification project is pursued. A meta-model encompassing an important class of population genetic models is formed by adding an abstract model of the number of successful gametes to the Price equation under uncertainty. A class of optimization programs is defined to represent the "individual-as-maximizing-agent analogy" in a general way. It is then shown that for each population genetic model there is a corresponding optimization program with which formal links can be established. These links provide a secure logical foundation for the commonplace biological principle that natural selection leads organisms to act as if maximizing their "fitness", provide a definition of "fitness", and clarify the limitations of that principle. The situations covered do not include frequency dependence or social behaviour, but the approach is capable of extension.
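
    For reference, the Price equation that the meta-model extends is standardly written as (standard textbook form, not quoted from the paper):

```latex
\Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}
\;+\; \frac{\operatorname{E}\!\left(w_i \,\Delta z_i\right)}{\bar{w}}
```

where $w_i$ is the number of successful gametes (fitness) of individual $i$, $z_i$ is its character value, $\Delta z_i$ is the transmission bias, and bars denote population means. The covariance term captures the action of selection, which is the part the optimization-program correspondence formalizes.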

  18. The Optimal Portfolio Selection Model under g-Expectation

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    complicated and sophisticated, the optimal solution turns out to be surprisingly simple: the payoff of a portfolio of two binary claims. I also give the economic meaning of my model and compare it with that in the work of Jin and Zhou (2008).

  19. Optimal concentration of selective agents for inhibiting in vitro ...

    African Journals Online (AJOL)

    Alessandra

    (Murashige and Skoog, 1962) supplemented with 30 g/L sucrose, 3 mg/L 2 ... was adjusted to 5.8 ± 0.1 and autoclaved for 20 min at 121 ± 1°C. Ten seeds were ..... Takamori LM, Machado Neto NB, Vieira LGE, Ribas AF (2015). Optimization of ...

  20. Application of numerical modeling for optimization of selective hot ...

    African Journals Online (AJOL)

    Jane

    2011-08-29

    Aug 29, 2011 ... The term flavonoid is used for a class of plant chemicals known for their activity as highly potent ... In this research, a novel optimization-extraction method of taxifolin from .... tration identified and quantified by HPLC (data not.

  1. Self-Selection, Optimal Income Taxation, and Redistribution

    Science.gov (United States)

    Amegashie, J. Atsu

    2009-01-01

    The author makes a pedagogical contribution to optimal income taxation. Using a very simple model adapted from George A. Akerlof (1978), he demonstrates a key result in the approach to public economics and welfare economics pioneered by Nobel laureate James Mirrlees. He shows how incomplete information, in addition to the need to preserve…

  2. Selecting, adapting, and sustaining programs in health care systems

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-04-01

    Leah L Zullig, Hayden B Bosworth (Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; Department of Medicine, Duke University Medical Center, Durham, NC, USA; School of Nursing and Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA) Abstract: Practitioners and researchers often design behavioral programs that are effective for a specific population or problem. Despite their success in a controlled setting, relatively few programs are scaled up and implemented in health care systems. Planning for scale-up is a critical, yet often overlooked, element in the process of program design. Equally important is understanding how to select a program that has already been developed, and adapt and implement the program to meet specific organizational goals. This adaptation and implementation requires attention to organizational goals, available resources, and program cost. We assert that translational behavioral medicine necessitates expanding successful programs beyond a stand-alone research study. This paper describes key factors to consider when selecting, adapting, and sustaining programs for scale-up in large health care systems and applies the Knowledge to Action (KTA) Framework to a case study, illustrating knowledge creation and an action cycle of implementation and evaluation activities. Keywords: program sustainability, diffusion of innovation, information dissemination, health services research, intervention studies

  3. Mathematical programming model for heat exchanger design through optimization of partial objectives

    International Nuclear Information System (INIS)

    Onishi, Viviani C.; Ravagnani, Mauro A.S.S.; Caballero, José A.

    2013-01-01

    Highlights: • Rigorous design of shell-and-tube heat exchangers according to TEMA standards. • Division of the problem into sets of equations that are easier to solve. • Selected heuristic objective functions based on the physical behavior of the problem. • Sequential optimization approach to avoid solutions stuck in local minima. • The results obtained with this model improved the values reported in the literature. - Abstract: Mathematical programming can be used for the optimal design of shell-and-tube heat exchangers (STHEs). This paper proposes a mixed integer non-linear programming (MINLP) model for the design of STHEs, following rigorously the standards of the Tubular Exchanger Manufacturers Association (TEMA). The Bell–Delaware method is used for the shell-side calculations. This approach produces a large and non-convex model that cannot be solved to global optimality with current state-of-the-art solvers. Nevertheless, a sequential optimization of partial objective targets is proposed, through the division of the problem into sets of related equations that are easier to solve. For each of these problems a heuristic objective function is selected based on the physical behavior of the problem. The global optimal solution of the original problem cannot be ensured even in the case in which each of the sub-problems is solved to global optimality, but at least a very good solution is always guaranteed. Three cases extracted from the literature were studied. The results showed that in all cases the values obtained using the proposed MINLP model containing multiple objective functions improved the values presented in the literature.

  4. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
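
    One component of that trade-off can be computed directly: the probability that a generation of g coded packets decodes after exactly g receptions depends on the field size q. The sketch below uses the standard RLNC full-rank probability formula; it does not model the platform-specific energy costs the paper weighs against it:

```python
def p_full_rank(g, q):
    """Probability that g random coding vectors over GF(q) are linearly
    independent, i.e. that the first g received packets already decode."""
    p = 1.0
    for i in range(1, g + 1):
        p *= 1.0 - q ** -i
    return p

# larger field size -> fewer redundant transmissions, but costlier
# finite-field arithmetic on the sensor node
probs = {q: p_full_rank(32, q) for q in (2, 16, 256)}
```

For a generation size of 32, GF(2) decodes from the minimum number of packets only about 29% of the time, while GF(256) does so over 99% of the time, which is exactly the kind of overhead-versus-computation trade-off the record describes.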

  5. Optimizing drilling performance using a selected drilling fluid

    Science.gov (United States)

    Judzis, Arnis [Salt Lake City, UT; Black, Alan D [Coral Springs, FL; Green, Sidney J [Salt Lake City, UT; Robertson, Homer A [West Jordan, UT; Bland, Ronald G [Houston, TX; Curry, David Alexander [The Woodlands, TX; Ledgerwood, III, Leroy W.

    2011-04-19

    To improve drilling performance, a drilling fluid is selected based on one or more criteria and to have at least one target characteristic. Drilling equipment is used to drill a wellbore, and the selected drilling fluid is provided into the wellbore during drilling with the drilling equipment. The at least one target characteristic of the drilling fluid includes an ability of the drilling fluid to penetrate into formation cuttings during drilling to weaken the formation cuttings.

  6. Parameter selection for the SSC trade-offs and optimization

    International Nuclear Information System (INIS)

    Edwards, D.A.; Syphers, M.J.

    1991-01-01

    In November of 1988, a site was selected in the state of Texas for the SSC. In January of 1989, the SSC Laboratory was established in Texas to adapt the design of the collider to the site and to manage the construction of the project. This paper describes the evolution of the SSC design since site selection, notes the increased concentration on the injector system, and addresses the rationale for choice of parameters

  7. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    International Nuclear Information System (INIS)

    Nguyen, Q H; Lang, V T; Choi, S B

    2015-01-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass. (technical note)

  8. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and after that among these rules describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper contains also results of experiments with decision tables from UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
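
    The sequential (length-then-coverage) optimization can be sketched at a high level as a two-stage filter. The rule set below is a toy stand-in; in the paper the candidate β-decision rules would be derived from the DAG Δβ(T) rather than listed explicitly:

```python
def sequential_optimize(rules):
    """Two-stage optimization in the spirit of the paper: first keep rules
    of minimum length, then among those keep the ones of maximum coverage.
    Each rule is represented here as a (length, coverage) pair."""
    min_len = min(length for length, _ in rules)
    shortest = [r for r in rules if r[0] == min_len]
    max_cov = max(cov for _, cov in shortest)
    return [r for r in shortest if r[1] == max_cov]

# toy candidate rules as (length, coverage) pairs
rules = [(2, 10), (2, 14), (3, 20), (2, 14), (4, 25)]
best = sequential_optimize(rules)
```

Swapping the two stages gives the alternative order of optimization the abstract mentions (maximum coverage first, then minimum length among those).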

  9. An Algebraic Programming Style for Numerical Software and Its Optimization

    Directory of Open Access Journals (Sweden)

    T.B. Dinesh

    2000-01-01

    The abstract mathematical theory of partial differential equations (PDEs) is formulated in terms of manifolds, scalar fields, tensors, and the like, but these algebraic structures are hardly recognizable in actual PDE solvers. The general aim of the Sophus programming style is to bridge the gap between theory and practice in the domain of PDE solvers. Its main ingredients are a library of abstract datatypes corresponding to the algebraic structures used in the mathematical theory and an algebraic expression style similar to the expression style used in the mathematical theory. Because of its emphasis on abstract datatypes, Sophus is most naturally combined with object-oriented languages or other languages supporting abstract datatypes. The resulting source code patterns are beyond the scope of current compiler optimizations, but are sufficiently specific for a dedicated source-to-source optimizer. The limited, domain-specific character of Sophus is the key to success here. This kind of optimization has been tested on computationally intensive Sophus-style code with promising results. The general approach may be useful for other styles and in other application domains as well.

  10. Optimizing Crawler4j using MapReduce Programming Model

    Science.gov (United States)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2017-06-01

    The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information that users can query. Secondly, they are used for web archiving, where web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be increased by using the capabilities of modern parallel processing technologies. To address the parallelism and throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput, carving out a new methodology for optimizing web crawling.
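
    The map/reduce decomposition of such crawl post-processing can be sketched in a single process as follows. The pages and URLs are made up, and a real deployment would run the two phases as distributed Hadoop jobs; this only shows the shape of the computation:

```python
from collections import defaultdict
from itertools import chain

# toy corpus standing in for fetched pages: page URL -> outgoing links
pages = {
    "http://a.example": ["http://b.example", "http://c.example"],
    "http://b.example": ["http://c.example"],
}

def map_phase(url, links):
    # mapper: emit a (target, 1) pair for every outgoing link on a page
    return [(target, 1) for target in links]

def reduce_phase(pairs):
    # reducer: sum the counts per target link
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

# shuffle is implicit here: all mapper output is fed to one reducer
in_degree = reduce_phase(chain.from_iterable(
    map_phase(url, links) for url, links in pages.items()))
```

The same map/shuffle/reduce shape applies to the indexing and extraction tasks the record lists; only the emitted key/value pairs change.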

  11. Optimizing selection of in vitro tests for diagnosing thyroid disorders

    International Nuclear Information System (INIS)

    Zwas, S.T.; Rosenblum, Yossef; Boruchowsky, Sabina

    1987-01-01

    The optimal utilization of the thyroid-related radioimmunoassays T3, T4, and TSH-RIA is derived from analysing the clinical and laboratory data for 974 patients with functional thyroid disorders. A statistical computer analysis of the contribution of each of the three tests, singly and in combination, to the final diagnoses of hypothyroid, euthyroid, and hyperthyroid states was designed. The best contributing test for hypothyroidism and euthyroidism was TSH-RIA (98.5 and 93%, respectively). T4/T3+TSH-RIAs were the optimal dual combination for diagnosing euthyroidism (98.0%). For diagnosing hyperthyroidism T4-RIA was the best single test (82.5%) followed by T3 + T4 as an optimal dual combination (95%). Using all three tests was of no significant additional value over dual combinations. It is concluded that the effort and cost of routinely performing all three tests are not justified without a clinical basis. An algorithm is proposed to guide thyroid studies based on computer analyses of the above-mentioned single or dual-test combinations to establish accurate diagnosis at the lowest laboratory cost. (author)

  12. Egg-laying substrate selection for optimal camouflage by quail.

    Science.gov (United States)

    Lovell, P George; Ruxton, Graeme D; Langridge, Keri V; Spencer, Karen A

    2013-02-04

    Camouflage is conferred by background matching and disruption, which are both affected by microhabitat. However, microhabitat selection that enhances camouflage has only been demonstrated in species with discrete phenotypic morphs. For most animals, phenotypic variation is continuous; here we explore whether such individuals can select microhabitats to best exploit camouflage. We use substrate selection in a ground-nesting bird (Japanese quail, Coturnix japonica). For such species, threat from visual predators is high and egg appearance shows strong between-female variation. In quail, variation in appearance is particularly obvious in the amount of dark maculation on the light-colored shell. When given a choice, birds consistently selected laying substrates that made visual detection of their egg outline most challenging. However, the strategy for maximizing camouflage varied with the degree of egg maculation. Females laying heavily maculated eggs selected the substrate that more closely matched egg maculation color properties, leading to camouflage through disruptive coloration. For lightly maculated eggs, females chose a substrate that best matched their egg background coloration, suggesting background matching. Our results show that quail "know" their individual egg patterning and seek out a nest position that provides most effective camouflage for their individual phenotype. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids

    Energy Technology Data Exchange (ETDEWEB)

    Kuppannagari, Sanmukh R. [Univ. of Southern California, Los Angeles, CA (United States); Kannan, Rajgopal [Louisiana State Univ., Baton Rouge, LA (United States); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-12-07

    Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility’s expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection, however their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real world data obtained from the University of Southern California’s SmartGrid and show that our algorithm achieves near exact curtailment values with errors in the range of 10⁻⁷ to 10⁻⁵, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer strategy pairs our algorithm performs 10³ to 10⁷ times better in terms of the curtailment errors incurred.
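
    For small instances, the optimal customer/strategy selection can be illustrated by exhaustive search over the same decision space an ILP solver would explore: at most one curtailment strategy per customer, minimizing the absolute deviation from the target. The kW values are illustrative, and the paper's ILP formulation of course scales far beyond what enumeration can handle:

```python
from itertools import product

def best_selection(curtailments, target):
    """Exhaustive stand-in for the ILP: for each customer pick one of its
    strategies or skip it (None), minimizing |achieved - target|."""
    best, best_err = None, float("inf")
    options = [[None] + list(range(len(s))) for s in curtailments]
    for choice in product(*options):
        achieved = sum(curtailments[c][i]
                       for c, i in enumerate(choice) if i is not None)
        err = abs(achieved - target)
        if err < best_err:
            best, best_err = choice, err
    return best, best_err

# per-customer curtailment values (kW) for each available strategy
customers = [[5.0, 8.0], [3.0, 4.0], [6.0]]
choice, err = best_selection(customers, target=12.0)
```

The enumeration visits every 0-1 assignment the ILP's binary decision variables encode; the ILP reaches the same optimum without enumerating them.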

  14. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
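
    The Pareto-optimal filtering at the heart of NSGA-II can be sketched with a plain dominance test. Objectives are reduced to scalars here for illustration (e.g. interval midpoints); the paper's model keeps them interval-valued:

```python
def pareto_front(points):
    """Keep the points not dominated by any other point, where every
    objective is minimized; this is the core test behind NSGA-II's
    non-dominated sorting."""
    def dominates(a, b):
        # a dominates b: no worse in all objectives, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, risk) pairs for candidate configuration/supplier choices (toy data)
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.0)]
front = pareto_front(pts)
```

Here (3.0, 4.0) and (2.5, 3.0) are both dominated by (2.0, 3.0) and drop out, leaving the trade-off curve from which a decision maker picks a configuration/supplier pair.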

  15. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given. (paper)
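
    As a rough illustration of the disc-type torque model, the Bingham-plastic simplification (the n = 1 special case of the Herschel–Bulkley model used above) splits the braking torque of an annular disc face into a field-dependent yield term and a speed-dependent viscous term. All numbers below are illustrative, not taken from the paper:

```python
import math

def disc_mrb_torque(tau_y, mu, omega, gap, r_i, r_o, n_faces=2):
    """Braking torque of a disc-type MRB under the Bingham-plastic
    simplification. tau_y: field-induced yield stress [Pa], mu: plastic
    viscosity [Pa s], omega: angular speed [rad/s], gap: MR fluid gap [m],
    r_i/r_o: inner/outer radii of the active annulus [m]."""
    # yield-stress term: integral of 2*pi*r^2*tau_y over the annulus
    t_yield = (2.0 * math.pi * n_faces * tau_y / 3.0) * (r_o**3 - r_i**3)
    # viscous term: Couette shear mu*omega*r/gap integrated over the annulus
    t_visc = (math.pi * n_faces * mu * omega / (2.0 * gap)) * (r_o**4 - r_i**4)
    return t_yield + t_visc

# illustrative operating point
T = disc_mrb_torque(tau_y=40e3, mu=0.1, omega=100.0, gap=1e-3,
                    r_i=0.02, r_o=0.05)
```

At such operating points the yield term dominates, which is why the optimization above can trade magnetic circuit mass against the achievable field-dependent yield stress.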

  16. Selection of optimal conditions for preparation of emulsified fuel fluids

    Science.gov (United States)

    Ivanov, V. A.; Berg, V. I.; Frolov, M. D.

    2018-05-01

    The aim of the article is to derive an optimal combination of physical and chemical effects and to apply it to the production of water-fuel emulsions. The authors set out to estimate the influence of surfactant concentration on the time before the emulsion begins to break. The analysis, based on experimental data, showed that increasing the concentration of sodium lauryl sulfate is worthwhile only up to a certain point, corresponding to 0.05% of the total mass fraction. The main advantage of the model is a rational combination of the methods of physical and chemical treatment used in the production of emulsions.

  17. A Novel Automatic Phase Selection Device: Design and Optimization

    Science.gov (United States)

    Zhang, Feng; Li, Haitao; Li, Na; Zhang, Nan; Lv, Wei; Cui, Xiaojiang

    2018-01-01

    At present, AICD completion is an effective way to slow down bottom-water coning and to extend the water-free production period. Building on a survey of AICDs developed both at home and abroad, this paper presents the design of a new type of AICD; with the help of fluid numerical simulation software, its internal flow field was analysed and its structure optimized. The simulation results show that the tool restricts the flow of water well, while the restriction on the flow of oil is much smaller.

  18. Optimization of Storage Parameters of Selected Fruits in Passive ...

    African Journals Online (AJOL)

    This study was carried out to determine the optimum storage parameters of selected fruits using three sets of four types of passive evaporative cooling structures made of two different materials: clay and aluminium. One set consisted of four separate cooling chambers. Two cooling chambers were made with aluminium ...

  19. Optimal Contractor Selection in Construction Industry: The Fuzzy Way

    Science.gov (United States)

    Krishna Rao, M. V.; Kumar, V. S. S.; Rathish Kumar, P.

    2018-02-01

    A purely price-based approach to contractor selection has been identified as the root cause of many serious project delivery problems. Therefore, the capability of the contractor to execute the project should be evaluated using a multiple set of selection criteria, including reputation, past performance, performance potential, financial soundness, and other project-specific criteria. An industry-wide questionnaire survey was conducted with the objective of identifying the important criteria for adoption in the selection process. In this work, a fuzzy-set-based model was developed for contractor prequalification/evaluation, using effective criteria drawn from the perceptions of construction professionals and taking the subjective judgments of decision makers into consideration. A case study consisting of four alternatives (contractors in the present case), solicited from a public works department of Pondicherry in India, is used to illustrate the effectiveness of the proposed approach. The final selection of the contractor is based on the integrated score, or Overall Evaluation Score, of the decision alternative in both the prequalification and bid evaluation stages.

  20. Cardiac resynchronization therapy : advances in optimal patient selection

    NARCIS (Netherlands)

    Bleeker, Gabe Berend

    2007-01-01

    Despite the impressive results of cardiac resynchronization therapy (CRT) in recent large randomized trials, a considerable number of patients fail to improve following CRT implantation when the established CRT selection criteria (NYHA class III-IV heart failure, LV ejection fraction ≤35 % and QRS

  1. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    Full Text Available We presented a comparison between several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely, IB1, Naive Bayes, C4.5 decision tree and the RBF network. We showed that the selection of ranking methods could be important for classification accuracy. In our experiments, ranking methods with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.

  2. Mahalanobis distance and variable selection to optimize dose response

    International Nuclear Information System (INIS)

    Moore, D.H. II; Bennett, D.E.; Wyrobek, A.J.; Kranzler, D.

    1979-01-01

    A battery of statistical techniques is combined to improve the detection of low-level dose response. First, Mahalanobis distances are used to classify objects as normal or abnormal. Then the proportion classified abnormal is regressed on dose. Finally, a subset of regressor variables is selected which maximizes the slope of the dose response line. Use of the techniques is illustrated by application to mouse sperm damaged by low doses of x-rays.
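The first step of this battery, classifying objects by their Mahalanobis distance from a reference sample, can be sketched in pure Python. The 2-D reference sample and test points below are invented for illustration; the study's actual feature set and cutoff are not given here.

```python
import math

def mean_cov_2d(points):
    """Sample mean and covariance of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_2d(p, mean, cov):
    """Mahalanobis distance of p from the reference distribution."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx, dy = p[0] - mean[0], p[1] - mean[1]
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.sqrt(q)

# Hypothetical "normal" reference sample
normal = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 0.95), (1.05, 1.05)]
mean, cov = mean_cov_2d(normal)
# An object is flagged abnormal when its distance exceeds a chosen cutoff;
# here we just compare a far point against a point inside the normal cloud.
print(mahalanobis_2d((3.0, 3.0), mean, cov), mahalanobis_2d((1.0, 1.0), mean, cov))
```

Regressing the proportion of flagged objects on dose would be the second step of the procedure described above.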

  3. Fast Branch & Bound algorithms for optimal feature selection

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Pudil, Pavel; Kittler, J.

    2004-01-01

    Roč. 26, č. 7 (2004), s. 900-912 ISSN 0162-8828 R&D Projects: GA ČR GA402/02/1271; GA ČR GA402/03/1310; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : subset search * feature selection * search tree Subject RIV: BD - Theory of Information Impact factor: 4.352, year: 2004

  4. Iteration particle swarm optimization for contract capacities selection of time-of-use rates industrial customers

    International Nuclear Information System (INIS)

    Lee, Tsung-Ying; Chen, Chun-Lung

    2007-01-01

    This paper presents a new algorithm for solving the optimal contract capacities of a time-of-use (TOU) rates industrial customer. This algorithm is named iteration particle swarm optimization (IPSO). A new index, called iteration best is incorporated into particle swarm optimization (PSO) to improve solution quality and computation efficiency. Expanding line construction cost and contract recovery cost are considered, as well as demand contract capacity cost and penalty bill, in the selection of the optimal contract capacities. The resulting optimal contract capacity effectively reaches the minimum electricity charge of TOU rates users. A significant reduction in electricity costs is observed. The effects of expanding line construction cost and contract recovery cost on the selection of optimal contract capacities can also be estimated. The feasibility of the new algorithm is demonstrated by a numerical example, and the IPSO solution quality and computation efficiency are compared to those of other algorithms
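The trade-off that IPSO searches over can be illustrated with a deliberately tiny, one-dimensional version of the problem: choose a contract capacity that balances the demand-contract charge against penalty charges for exceeding it. All tariff figures and monthly peak demands below are hypothetical, and exhaustive search stands in for the IPSO algorithm.

```python
# Toy TOU tariff: the bill is a demand charge on the contracted capacity plus
# a penalty on every kW by which recorded peak demand exceeds the contract.
RATE_CONTRACT = 223.6   # $ per contracted kW per month (hypothetical)
RATE_PENALTY = 700.0    # $ per kW of excess demand (hypothetical)
monthly_peaks = [850, 920, 780, 990, 870, 905]  # recorded peaks in kW

def total_cost(contract_kw):
    cost = 0.0
    for peak in monthly_peaks:
        excess = max(0, peak - contract_kw)
        cost += RATE_CONTRACT * contract_kw + RATE_PENALTY * excess
    return cost

# Exhaustive search over candidate capacities stands in for IPSO here.
best_kw = min(range(700, 1001), key=total_cost)
print(best_kw, round(total_cost(best_kw), 2))
```

The optimum deliberately tolerates a small penalty in the highest-demand month rather than contracting for the absolute peak, which is the effect the paper's optimal contract capacities exploit.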

  5. Program management aid for redundancy selection and operational guidelines

    Science.gov (United States)

    Hodge, P. W.; Davis, W. L.; Frumkin, B.

    1972-01-01

    Although this criterion was developed specifically for use on the shuttle program, it applies to many other multi-mission programs (e.g., aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life cycle costs) were considered in selecting the quantity of operational redundancy. These tools were developed as aids for expediting the decision process and are not intended as an automatic decision maker. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.

  6. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons on metrics such as human-robot collision, and the differences in the lexicons are analyzed.

  7. Optimal Multi-Interface Selection for Mobile Video Streaming in Efficient Battery Consumption and Data Usage

    Directory of Open Access Journals (Sweden)

    Seonghoon Moon

    2016-01-01

    Full Text Available With the proliferation of high-performance, large-screen mobile devices, users’ expectations of having access to high-resolution video content in smooth network environments are steadily growing. To guarantee such stable streaming, a high cellular network bandwidth is required; yet network providers often charge high prices for even limited data plans. Moreover, the costs of smoothly streaming high-resolution videos are not merely monetary; the device’s battery life must also be accounted for. To resolve these problems, we design an optimal multi-interface selection system for streaming video over HTTP/TCP. An optimization problem including battery life and LTE data constraints is derived and then solved using binary integer programming. Additionally, the system is designed with an adoption of split-layer scalable video coding, which provides direct adaptations of video quality and prevents out-of-order packet delivery problems. The proposed system is evaluated using a prototype application in a real, iOS-based device as well as through experiments conducted in heterogeneous mobile scenarios. Results show that the system not only guarantees the highest-possible video quality, but also prevents reckless consumption of LTE data and battery life.

  8. Aether: leveraging linear programming for optimal cloud computing in genomics.

    Science.gov (United States)

    Luber, Jacob M; Tierney, Braden T; Cofer, Evan M; Patel, Chirag J; Kostic, Aleksandar D

    2018-05-01

    Across biology, we are seeing rapid developments in scale of data production without a corresponding increase in data analysis capabilities. Here, we present Aether (http://aether.kosticlab.org), an intuitive, easy-to-use, cost-effective and scalable framework that uses linear programming to optimally bid on and deploy combinations of underutilized cloud computing resources. Our approach simultaneously minimizes the cost of data analysis and provides an easy transition from users' existing HPC pipelines. Data utilized are available at https://pubs.broadinstitute.org/diabimmune and with EBI SRA accession ERP005989. Source code is available at (https://github.com/kosticlab/aether). Examples, documentation and a tutorial are available at http://aether.kosticlab.org. chirag_patel@hms.harvard.edu or aleksandar.kostic@joslin.harvard.edu. Supplementary data are available at Bioinformatics online.

  9. ARSTEC, Nonlinear Optimization Program Using Random Search Method

    International Nuclear Information System (INIS)

    Rasmuson, D. M.; Marshall, N. H.

    1979-01-01

    1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
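The adaptive random search described in item 2 can be sketched as follows: sample around the current best point and re-center the search region on every improving point. The quadratic test function, search radius, and iteration budget are illustrative choices, not ARSTEC's.

```python
import random

def adaptive_random_search(f, start, radius, iters=2000, seed=1):
    """Re-center the search region on every point that improves f (ARSTEC-style)."""
    rng = random.Random(seed)
    best, best_val = list(start), f(start)
    for _ in range(iters):
        cand = [c + rng.uniform(-radius, radius) for c in best]
        val = f(cand)
        if val < best_val:          # improvement found: move the region here
            best, best_val = cand, val
    return best, best_val

# Minimize a simple bowl with its optimum at (3, -2).
f = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
sol, val = adaptive_random_search(f, [0.0, 0.0], 1.0)
print(sol, val)
```

ARSTEC additionally handles mixed integer variables (up to 10 in the version described), which this continuous sketch omits.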

  10. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.

  11. Designing, programming, and optimizing a (small) quantum computer

    Science.gov (United States)

    Svore, Krysta

    In 1982, Richard Feynman proposed to use a computer founded on the laws of quantum physics to simulate physical systems. In the more than thirty years since, quantum computers have shown promise to solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale classical machine. The practical realization of a quantum computer requires understanding and manipulating subtle quantum states while experimentally controlling quantum interference. It also requires an end-to-end software architecture for programming, optimizing, and implementing a quantum algorithm on the quantum device hardware. In this talk, we will introduce recent advances in connecting abstract theory to present-day real-world applications through software. We will highlight recent advancement of quantum algorithms and the challenges in ultimately performing a scalable solution on a quantum device.

  12. Developing optimal nurses work schedule using integer programming

    Science.gov (United States)

    Shahidin, Ainon Mardhiyah; Said, Mohd Syazwan Md; Said, Noor Hizwan Mohamad; Sazali, Noor Izatie Amaliena

    2017-08-01

    Time management is the art of arranging, organizing, and scheduling one's time for the purpose of generating more effective work and productivity. Scheduling is the process of deciding how to commit resources among a variety of possible tasks; thus, it is crucial for every organization to have a good work schedule for its staff. Ward nurses at hospitals work around the clock, so they are assigned via shift scheduling. This study solves the nurse scheduling problem at an emergency ward of a private hospital. A 7-day work schedule for 7 consecutive weeks, satisfying all the constraints set by the hospital, is developed using integer programming. The resulting work schedule is an optimal solution in which all the constraints are satisfied.
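A miniature version of such a scheduling model can be sketched with one decision per nurse per day. The instance below (four nurses, three shifts, invented coverage and workload rules) uses exhaustive search in place of an integer-programming solver; the hospital's actual 7-week constraints are far richer.

```python
from itertools import product

# Toy instance: 4 nurses, 3 daily shifts (morning, evening, night) for one day.
# Rules mirroring typical ward constraints: every shift needs at least one
# nurse, and no nurse works more than one shift per day.
nurses, shifts = range(4), range(3)

def feasible(assign):
    # assign[n] = shift index worked by nurse n, or None for a day off
    return all(any(a == s for a in assign) for s in shifts)

# Stand-in objective for the integer program: minimize nurses on duty.
best, best_on_duty = None, len(nurses) + 1
for assign in product([None, 0, 1, 2], repeat=len(nurses)):
    if feasible(assign):
        on_duty = sum(1 for a in assign if a is not None)
        if on_duty < best_on_duty:
            best, best_on_duty = assign, on_duty
print(best, best_on_duty)
```

In the real formulation each (nurse, day, shift) pair becomes a binary variable and the hospital's rules become linear constraints, which is what makes an ILP solver applicable over a 49-day horizon.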

  13. Optimization-Based Selection of Influential Agents in a Rural Afghan Social Network

    Science.gov (United States)

    2010-06-01

    ... 2) the district leader social network, and 3) the nonlethal targeting model, a nonlinear programming (NLP) optimization formulation that identifies the k US agent assignment strategy producing the greatest ... in support of the NATO Coalition in Afghanistan. While Arab tribes tend to be more hierarchical, Pashtun tribes are

  14. Optimization of refinery product blending by using linear programming

    International Nuclear Information System (INIS)

    Ristikj, Julija; Tripcheva-Trajkovska, Loreta; Rikaloski, Ice; Markovska, Liljana

    1999-01-01

    The product slate of a simple refinery consists mainly of liquefied petroleum gas, leaded and unleaded gasoline, jet fuel, diesel fuel, extra light heating oil, and fuel oil. The quality of the oil products (fuels) for sale has to comply with the adopted standards for liquid fuels, and the produced quantities have to comply with market needs. The oil products are manufactured by blending two or more different fractions, whose quantities and physical-chemical properties depend on the crude oil type and the processing conditions; at the same time, each fraction may be used to blend one or more products. It is in the producer's interest to do the blending optimally: to satisfy the requirements for oil product quality and quantity with maximal usage of the available fractions and, of course, with maximal profit from the sold products. This can be accomplished by applying linear programming, that is, by using a linear model for oil product blending optimization. (Author)
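A stripped-down blending problem of this kind can be sketched as follows. The two fractions, their octane numbers, availabilities, and the product specification are hypothetical, linear blending of octane is assumed, and a grid search stands in for the linear-programming solver.

```python
# Two available fractions blended into one gasoline grade (hypothetical data).
reformate = {"octane": 96.0, "avail": 100.0}   # barrels available
naphtha   = {"octane": 84.0, "avail": 150.0}
MIN_OCTANE = 91.0   # product specification

# Objective stand-in: maximize on-spec product volume subject to the octane
# spec and fraction availabilities (the LP would also carry prices/profits).
best = (0.0, 0.0, 0.0)   # (barrels reformate, barrels naphtha, total volume)
steps = 151
for i in range(steps):
    x = reformate["avail"] * i / (steps - 1)
    for j in range(steps):
        y = naphtha["avail"] * j / (steps - 1)
        if x + y == 0:
            continue
        blend_octane = (reformate["octane"] * x + naphtha["octane"] * y) / (x + y)
        if blend_octane >= MIN_OCTANE and x + y > best[2]:
            best = (x, y, x + y)
print(best)
```

The grid confirms the LP intuition: use all of the high-octane fraction and add low-octane naphtha only up to the point where the blend would fall below spec.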

  15. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  16. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  17. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  18. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    Science.gov (United States)

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
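A minimal global-best PSO of the kind described can be sketched as follows. The quadratic stand-in objective replaces the expensive plan-optimization inner loop of step (ii), and the inertia and acceleration coefficients are generic textbook choices rather than the paper's.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=7):
    """Minimal particle swarm optimizer (global-best topology), minimizing f."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in evaluation function: the "plan score" is best at weights (0.3, 0.7).
score = lambda wts: (wts[0] - 0.3) ** 2 + (wts[1] - 0.7) ** 2
weights, val = pso(score, 2, (0.0, 1.0))
print(weights, val)
```

In the paper's setting, each call to the objective would itself run a full plan optimization and score the resulting dose-volume histogram, which is why the swarm size and iteration count matter so much there.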

  19. Multicriteria analysis in selecting the optimal variant of solar system

    Directory of Open Access Journals (Sweden)

    Radziejowska Aleksandra

    2016-01-01

    Full Text Available Alternative energy sources are becoming serious competition for traditional ways of generating energy, bringing a real integration of eco-energy with ecology, and of innovative technologies with low-energy construction. Apart from cost, important issues are the technical parameters of the equipment, durability, ease of installation, etc. The investor therefore faces a decision-making problem: choosing the best solution from the point of view of many criteria. In this article, the authors propose applying methods of multi-criteria analysis to select the most beneficial variant of a solar system. For this purpose, the following methods are used, among others: Saaty's AHP, the taxonomic method of weighting factors, and the Promethee II method, which belongs to the group of methods based on outranking relations. The proposed comparative analysis can be used for decision support when selecting the most beneficial technological solution for a solar installation, and for evaluating the operational efficiency of existing buildings in which new systems will be implemented.
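The AHP step mentioned above can be illustrated with a small pairwise-comparison matrix. The judgments below (cost vs. durability vs. ease of installation) are invented, and the principal-eigenvector weights are approximated by power iteration, which is one standard way to extract AHP priorities.

```python
# Hypothetical Saaty-scale judgments: cost is moderately more important than
# durability (3) and strongly more important than ease of installation (5).
M = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

# Approximate the principal eigenvector (criterion weights) by power iteration,
# normalizing so the weights sum to 1 on every step.
w = [1.0, 1.0, 1.0]
for _ in range(50):
    w = [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(w)
    w = [x / s for x in w]
print([round(x, 3) for x in w])
```

The resulting weight vector would then feed the scoring of solar-system variants; a full AHP application would also check the consistency ratio of the judgment matrix.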

  20. The optimal hormonal replacement modality selection for multiple organ procurement from brain-dead organ donors

    Directory of Open Access Journals (Sweden)

    Mi Z

    2014-12-01

    Full Text Available Zhibao Mi,1 Dimitri Novitzky,2 Joseph F Collins,1 David KC Cooper3 1Cooperative Studies Program Coordinating Center, VA Maryland Health Care Systems, Perry Point, MD, USA; 2Department of Cardiothoracic Surgery, University of South Florida, Tampa, FL, USA; 3Thomas E Starzl Transplantation Institute, University of Pittsburgh, Pittsburgh, PA, USA Abstract: The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pair-wise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB), adapted from Dunnett's multiple comparisons with control (MCC), has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. Keywords: best treatment selection, brain-dead organ donors, hormonal replacement, multiple binary endpoints, organ procurement, multiple comparisons

  1. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry

    Directory of Open Access Journals (Sweden)

    Joos Thomas

    2010-06-01

    Full Text Available Abstract Background Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of using such immunoaffinity-based approaches is the availability of the appropriate peptide specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. Results We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by the formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. Conclusions For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
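The set-cover formulation can be illustrated with the classical greedy approximation, which is the kind of heuristic the conclusions point to for proteome-scale datasets. The epitope names and protein sets below are invented for illustration.

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the epitope covering the most
    still-uncovered target proteins (a ln(n)-factor approximation)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            break                      # remaining proteins are not coverable
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen, uncovered

# Hypothetical terminal epitopes mapped to the target proteins whose tryptic
# peptides they would capture (after the filtering pipeline).
epitopes = {
    "ESR": {"P1", "P2", "P3"},
    "KLD": {"P3", "P4"},
    "VGT": {"P4", "P5", "P6"},
    "AAR": {"P1", "P6"},
}
picked, missed = greedy_set_cover({"P1", "P2", "P3", "P4", "P5", "P6"}, epitopes)
print(picked, missed)
```

For small target lists an exact integer-programming solver can replace the greedy loop, matching the paper's observation that optimality is reachable for a few hundred proteins.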

  2. Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.

    Science.gov (United States)

    Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas

    2010-06-25

    Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation and an efficient fractionation step to reduce the complexity of the biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests by using immunoaffinity based methods have been shown to lead to a substantial increase in throughput and sensitivity in the proteomic mass spectrometry approach. The limitation of using such immunoaffinity-based approaches is the availability of the appropriate peptide specific capture antibodies. Recent developments in these approaches, where subsets of peptides with short identical terminal sequences can be enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes for the coverage of a target protein list can be found by the formulation as a set cover problem, preceded by a filtering pipeline for the exclusion of peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.

  3. Optimizing weight control in diabetes: antidiabetic drug selection

    Directory of Open Access Journals (Sweden)

    S Kalra

    2010-08-01

    Full Text Available S Kalra1, B Kalra1, AG Unnikrishnan2, N Agrawal3, S Kumar4 1Bharti Hospital, Karnal; 2Amrita Institute of Medical Science, Kochi; 3Medical College, Gwalior; 4Excel Life Sciences, Noida, India. Date of preparation: 18th August 2010. Conflict of interest: SK has received speaker fees from Novo Nordisk, sanofi-aventis, MSD, Eli Lilly, BMS, and AstraZeneca. Clinical question: Which antidiabetic drugs provide optimal weight control in patients with type 2 diabetes? Results: Metformin reduces weight gain, and may cause weight loss, when given alone or in combination with other drugs. Pioglitazone and rosiglitazone use is associated with weight gain. Use of the glucagon-like peptide-1 (GLP-1) analogs, liraglutide and exenatide, is associated with weight loss. Dipeptidyl peptidase-4 (DPP-4) inhibitors are considered weight-neutral. Results with insulin therapy are conflicting. Insulin detemir provides weight control along with glycemic control. Implementation: • Weight gain is considered an inevitable part of good glycemic control using conventional modalities of treatment such as sulfonylureas. • Use of metformin, weight-sparing insulin analogs such as insulin detemir, and liraglutide should be encouraged as monotherapy, or in combination with other drugs. Keywords: weight control, diabetes

  4. The sequence relay selection strategy based on stochastic dynamic programming

    Science.gov (United States)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection does not consider the statistical channel state information or the selection cost. This shortcoming limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are presented to exhibit the performance of SRSS with SDP.

  5. SELECTION OF CHEMICAL TREATMENT PROGRAM FOR OILY WASTEWATER

    Directory of Open Access Journals (Sweden)

    Miguel Díaz

    2017-04-01

    Full Text Available When selecting a chemical treatment program for wastewater, understanding how individual colloids interact is crucial to achieving effective coagulation and flocculation. The coagulation process requires rapid mixing, while the flocculation process needs slow mixing. The behavior of colloids in water is strongly influenced by their electrokinetic charge; each colloidal particle carries its own charge, which is usually negative. Polymers, which are long chains of high molecular weight and high charge, begin to form longer chains when added to water, allowing numerous particles of suspended matter to be removed. A study of physico-chemical treatment by addition of coagulant and flocculant was carried out in order to determine a chemical program for oily wastewater coming from the gravity separation process in a crude oil refinery. The tests were carried out in jar test equipment, where the commercial products aluminum polychloride (PAC), aluminum sulfate and Sintec D50 were evaluated with five different flocculants. The selected chemical program was evaluated with fluids at three temperatures to determine its sensitivity to this parameter and to the mixing energy in coagulation and flocculation. The chemical program and operational characteristics for physico-chemical treatment with PAC were determined, obtaining removal of more than 93% of suspended matter and 96% of total hydrocarbons for the selected coagulant/flocculant combination.

  6. Discrepancies between selected Pareto optimal plans and final deliverable plans in radiotherapy multi-criteria optimization.

    Science.gov (United States)

    Kyroudi, Archonteia; Petersson, Kristoffer; Ghandour, Sarah; Pachoud, Marc; Matzinger, Oscar; Ozsahin, Mahmut; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-08-01

    Multi-criteria optimization provides decision makers with a range of clinical choices through Pareto plans that can be explored during real-time navigation and then converted into deliverable plans. Our study shows that dosimetric differences can arise between the two steps, which could compromise the clinical choices made during navigation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals, whose analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is still developing. We present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Our main concern is then the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  8. Dynamic Programming Optimization of Multi-rate Multicast Video-Streaming Services

    Directory of Open Access Journals (Sweden)

    Nestor Michael Caños Tiglao

    2010-06-01

    Full Text Available In large-scale IP Television (IPTV) and Mobile TV distributions, the video signal is typically encoded and transmitted as several quality streams, over IP Multicast channels, to several groups of receivers, which are classified in terms of their reception rate. As the number of video streams is usually constrained by both the number of TV channels and the maximum capacity of the content distribution network, it is necessary to find the selection of video stream transmission rates that maximizes overall user satisfaction. In order to solve this problem efficiently, this paper proposes the Dynamic Programming Multi-rate Optimization (DPMO) algorithm. The latter was comparatively evaluated considering several user distributions, featuring different access rate patterns. The experimental results reveal that DPMO is significantly more efficient than exhaustive search, while presenting slightly higher execution times than the non-optimal Multi-rate Step Search (MSS) algorithm.
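
The DPMO algorithm itself is not given in the abstract. Under the simplifying assumptions that user satisfaction equals the rate received and that each user is served by the highest chosen rate not exceeding their access rate (the function name and satisfaction model below are illustrative, not from the paper), the rate-selection problem admits a small dynamic program:

```python
def optimal_stream_rates(user_rates, k_streams):
    """Sort users by access rate and partition them into k contiguous
    groups; each group is served at the lowest access rate in the group,
    so every member can decode it.  Maximize the total delivered rate."""
    r = sorted(user_rates)
    n = len(r)
    NEG = float("-inf")
    dp = [[NEG] * (n + 1) for _ in range(k_streams + 1)]
    start = [[0] * (n + 1) for _ in range(k_streams + 1)]
    dp[0][0] = 0.0
    for k in range(1, k_streams + 1):
        for e in range(1, n + 1):
            for s in range(1, e + 1):          # group covers users s..e
                if dp[k - 1][s - 1] == NEG:
                    continue
                val = dp[k - 1][s - 1] + (e - s + 1) * r[s - 1]
                if val > dp[k][e]:
                    dp[k][e], start[k][e] = val, s
    # walk back through the stored group starts to recover the rates
    rates, e = [], n
    for k in range(k_streams, 0, -1):
        s = start[k][e]
        rates.append(r[s - 1])
        e = s - 1
    return dp[k_streams][n], sorted(rates)
```

For users with access rates [1, 2, 3, 4] and two streams, the sketch selects rates {1, 3}, delivering a total rate of 8 (users at 1 and 2 receive rate 1; users at 3 and 4 receive rate 3).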

  9. C-program LINOP for the evaluation of film dosemeters by linear optimization. User manual

    International Nuclear Information System (INIS)

    Kragh, P.

    1995-11-01

    Linear programming yields an optimal measuring value for film dosemeters. The LINOP program was developed to perform this linear programming. The program permits the evaluation and control of film dosemeters and of all other multi-component dosemeters. This user manual for the LINOP program contains the source code, a description of the program, and installation and usage instructions. The data sets with programs and examples are available upon request. (orig.)

  10. An Improved Particle Swarm Optimization for Selective Single Machine Scheduling with Sequence Dependent Setup Costs and Downstream Demands

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    Full Text Available This paper investigates a special single machine scheduling problem derived from practical industries, namely selective single machine scheduling with sequence-dependent setup costs and downstream demands. Unlike traditional single machine scheduling, this problem also takes into account the selection of jobs and the demands of downstream lines. The problem is formulated as a mixed integer linear programming model, and an improved particle swarm optimization (PSO) is proposed to solve it. To enhance the exploitation ability of the PSO, an adaptive neighborhood search with varying search depth is developed based on the decision characteristics of the problem. To improve search diversity and enable the proposed PSO to escape local optima, an elite solution pool is introduced. Computational results based on extensive test instances show that the proposed PSO can obtain optimal solutions for small problems and outperforms CPLEX and some other powerful algorithms on large problems.

  11. Combinatorial algebraic geometry selected papers from the 2016 apprenticeship program

    CERN Document Server

    Sturmfels, Bernd

    2017-01-01

    This volume consolidates selected articles from the 2016 Apprenticeship Program at the Fields Institute, part of the larger program on Combinatorial Algebraic Geometry that ran from July through December of 2016. Written primarily by junior mathematicians, the articles cover a range of topics in combinatorial algebraic geometry including curves, surfaces, Grassmannians, convexity, abelian varieties, and moduli spaces. This book bridges the gap between graduate courses and cutting-edge research by connecting historical sources, computation, explicit examples, and new results.

  12. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    International Nuclear Information System (INIS)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles

  14. Review. Promises, pitfalls and challenges of genomic selection in breeding programs

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez-Escriche, N.; Gonzalez-Recio, O.

    2011-07-01

    The aim of this work was to review the main challenges and pitfalls of implementing genomic selection in the breeding programs of different livestock species. Genomic selection is now one of the main challenges in animal breeding and genetics. Its application could considerably increase the genetic gain in traits of interest. However, the success of its practical implementation depends on the characteristics of the selection scheme, and these must be studied for each particular case. In dairy cattle, especially in Holsteins, genomic selection is a reality. However, in other livestock species (beef cattle, small ruminants, monogastrics and fish) genomic selection has mainly been used experimentally. The main limitation for its implementation in these species is the high genotyping cost compared to the low selection value of the candidate. Nevertheless, the possibility of using low-density single-nucleotide polymorphism (SNP) chips to make genomic selection applications economically feasible is now under study. Economic studies may optimize the benefits of genomic selection (GS) by including new traits in the breeding goals. It is evident that genomic selection offers great potential; however, a suitable genotyping strategy and recording system is needed in each case in order to properly exploit it. (Author) 50 refs.

  15. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    Science.gov (United States)

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal network structure. Generally, radar network structure optimization consists of two aspects, namely placing receivers in suitable locations and selecting appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. Finally, the performance of the proposed algorithm is validated via numerical simulations.
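
The abstract does not spell out the hybrid PSCP solver. As a loose, assumed stand-in for the covering subproblem (the data and names below are invented), the classic greedy set-cover heuristic picks, at each step, the candidate receiver/illuminator configuration that covers the most still-uncovered surveillance cells:

```python
def greedy_set_cover(universe, candidates):
    """Greedy heuristic for set covering (ln-n approximation): repeatedly
    choose the candidate covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda name: len(candidates[name] & uncovered))
        newly = candidates[best] & uncovered
        if not newly:
            raise ValueError("remaining elements cannot be covered")
        chosen.append(best)
        uncovered -= newly
    return chosen
```

For cells {1, ..., 5} and candidates a = {1, 2, 3}, b = {3, 4}, c = {4, 5}, the heuristic first takes a (three new cells), then c (two new cells), returning ['a', 'c'].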

  16. Optimization of selective exposure radiography of the chest

    International Nuclear Information System (INIS)

    Naimuddin, S.

    1986-01-01

    A major technical limitation in conventional chest radiography is the mismatch of the x-ray transmission dynamic range with the useful exposure range of a radiographic film when a patient is presented with a uniform incident exposure field. The goal of this project is to develop a faster and more reliable selective exposure system to fabricate and position a compensating filter (or digital beam attenuator, DBA) for clinical use. The essential components of this system include a dose-efficient test-image detector, a special-purpose field grabber (image memory), a custom-made fast printer, a transport channel, and a computer. The fabrication process begins with acquisition of a 64 x 64 format low-dose patient image, which undergoes corrections for detector nonuniformity and scatter. The corrected data, after log transformation, are used to calculate the thickness of filter material needed to compensate for the image dynamic range. Using this thickness information, the computer controls the printer, which fabricates an attenuator by overprinting multiple layers of cerium oxide on a 35 mm film substrate. Although the images are acquired in a 64 x 64 format, the attenuator is constructed in a dithered 16 x 16 format using a special algorithm. After fabrication, the attenuator is automatically conveyed through the transport channel and is positioned in the x-ray beam between the collimator and x-ray tube before the final compensated radiograph is taken.

  17. A fuzzy multi-criteria decision model for integrated suppliers selection and optimal order allocation in the green supply chain

    Directory of Open Access Journals (Sweden)

    Hamzeh Amin-Tahmasbi

    2018-09-01

    Full Text Available Today, with the advancement of technology in the production processes of various products, achieving sustainable production and development has become one of the main concerns of factories and manufacturing organizations. In the same vein, many manufacturers try to select suppliers in their upstream supply chains that perform best in terms of sustainable development criteria. In this research, a new multi-criteria decision-making model for selecting suppliers and assigning orders in the green supply chain is presented with a fuzzy optimization approach. Due to uncertainty in supplier capacity as well as customer demand, the problem is formulated as a fuzzy multi-objective linear program (FMOLP). The proposed model is evaluated on the supplier selection of SAPCO Corporation. Firstly, in order to select and rank suppliers in a green supply chain, a network structure of criteria is defined with five main criteria: cost, quality, delivery, technology and environmental benefits. Subsequently, using incomplete fuzzy linguistic relationships, pair-wise comparisons between the criteria and sub-criteria, as well as the performance of the alternatives, are assessed. The results of these comparisons rank the existing suppliers in terms of performance and determine their utility. The output of these calculations (the utility index) is used in the optimization model. Subsequently, in the order allocation process, the two objective functions of purchase cost and purchase value are optimized simultaneously. Finally, the order quantity is determined for each supplier in each period.

  18. Structural optimization of static power control programs of nuclear power plants with WWER-1000

    International Nuclear Information System (INIS)

    Kokol, E.O.

    2015-01-01

    The possibility of switching between power control programs for WWER-1000 reactors is considered. The aim of this research is to determine the best program for controlling reactor power under cyclic diurnal variation of electrical generation, as well as how to implement the switching. The problem of finding the best control program belongs to the class of multicriteria optimization problems. Operation of the nuclear power generation system was simulated using the following power control programs: constant average coolant temperature, constant pressure in the reactor secondary circuit, and constant temperature at the reactor inlet. A target function was proposed, consisting of three normalized criteria: the burn-up fraction, the damage level of the fuel rod cladding, and the change in power values. When operation of the nuclear power generation system over its life was simulated, the values of the selected criteria were obtained and inserted into the target function. The minimum of the three values of the target function, depending on the control program at the current time, defined the criterion for switching between the considered static power control programs.

  19. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators

    International Nuclear Information System (INIS)

    Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc

    2005-01-01

    Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. Increasing the light signal, however, has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained using an optical spectrum analyzer and compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With the signal-to-noise ratio in mind, this paper presents a series of comparisons that should provide insight into the selection of a scintillator type and volume for the development of a medical dosimeter.

  20. A multiobjective optimization model for optimal supplier selection in multiple sourcing environment

    Directory of Open Access Journals (Sweden)

    M. K. Mehlawat

    2014-06-01

    Full Text Available Supplier selection is an important determinant of a firm's competitiveness, all the more so in the context of supply-chain management. In this paper, we address a multiobjective supplier selection problem in which the emphasis is on building supplier portfolios. Supplier evaluation and order allocation are based on the criteria of expected unit price, expected quality score and expected delivery score. A fuzzy approach is proposed that relies on nonlinear S-shaped membership functions to generate different efficient supplier portfolios. Numerical experiments conducted on a data set of a multinational company demonstrate the applicability and efficiency of the proposed approach in real-world applications of supplier selection.

  1. Differential Spatio-temporal Multiband Satellite Image Clustering using K-means Optimization With Reinforcement Programming

    Directory of Open Access Journals (Sweden)

    Irene Erlyn Wina Rachmawan

    2015-06-01

    Full Text Available Deforestation is one of the crucial issues in Indonesia, which now has the world's highest deforestation rate. At the same time, multispectral imagery is a rich source of data for studying spatial and temporal variability of the environment, such as deforestation areas. This research presents differential image processing methods for detecting deforestation-driven land change. Our differential image processing algorithms extract and indicate affected areas automatically. The proposed approach extracts information from multiband satellite images and calculates the deforested area per year from a temporal dataset. However, multiband satellite images are large, which makes them difficult to segment. K-Means clustering is commonly considered a powerful clustering algorithm because of its ability to cluster big data, but it is sensitive to its initially generated centroids, which can lead to poor performance. In this paper we propose a new approach that optimizes K-Means clustering using Reinforcement Programming in order to cluster multispectral images. We build a new mechanism for generating initial centroids by applying exploration and exploitation knowledge from Reinforcement Programming. This optimization leads to better K-Means clusters. We select multispectral Landsat 7 images from the past ten years in Medawai, Borneo, Indonesia, and segment two area types: deforested land and forest. We conducted a series of experiments comparing K-Means using Reinforcement Programming to generate initial centroids against standard K-Means without the optimization process. Keywords: Deforestation, multispectral images, Landsat, automatic clustering, K-Means.
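
The Reinforcement Programming seeding itself is not given in the abstract. As an assumed illustration of the same idea, improving K-Means by generating better-spread initial centroids through exploitation (distance to existing seeds) and exploration (probabilistic sampling), the sketch below uses a k-means++-style seeding on one-dimensional pixel values:

```python
import random

def kmeanspp_init(points, k, rng):
    """Seed centroids far apart: exploit the current seeds by measuring
    each point's squared distance to them, explore by sampling the next
    seed with probability proportional to that distance."""
    centroids = [rng.choice(points)]
    while len(centroids) < k:
        d2 = [min((p - c) ** 2 for c in centroids) for p in points]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(p)
                break
    return centroids

def kmeans(points, k, iters=50, seed=3):
    """Lloyd iterations on 1-D data, started from k-means++-style seeds."""
    rng = random.Random(seed)
    cents = kmeanspp_init(points, k, rng)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: (p - cents[i]) ** 2)].append(p)
        # keep the old centroid if a cluster went empty
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return sorted(cents)
```

On two well-separated pixel-value groups near 1.0 and 10.0, the seeding almost always places one centroid in each group, and Lloyd's iterations then converge to the group means.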

  2. Decision-making methodology of optimal shielding materials by using fuzzy linear programming

    International Nuclear Information System (INIS)

    Kanai, Y.; Miura, T.; Hirao, Y.

    2000-01-01

    The main purpose of our studies is to select materials and determine the ratio of constituent materials as the first stage of optimum shielding design, to suit the individual requirements of nuclear reactors, reprocessing facilities, casks for shipping spent fuel, etc. The parameters of the shield optimization are cost, space, weight and shielding properties such as activation rates for individual irradiation and cooling times, and the total dose rate for neutrons (including secondary gamma rays) and for primary gamma rays. Using conventional two-valued (i.e. crisp) logic approaches, huge numbers of combination calculations are needed to identify suitable materials for optimum shielding design. Re-computation is also required for minor changes, as the approach does not react sensitively to the computation result. The present approach, using a fuzzy linear programming method, recognizes that much of the decision-making toward a satisfying solution takes place in a fuzzy environment, and it can quickly and easily provide a guiding principle for the optimal selection of shielding materials under the above-mentioned conditions. The possibility of reducing radiation effects by optimizing the ratio of constituent materials is investigated. (author)
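
As a hedged sketch of the fuzzy decision step (the paper's actual goals and data are not in the abstract; all numbers and material names below are invented), Zimmermann's max-min rule grades each candidate mixture by a linear membership function per goal and selects the mixture whose worst grade is best:

```python
def membership(value, best, worst):
    """Linear fuzzy membership for a smaller-is-better goal: grade 1 at or
    below the aspiration level 'best', 0 at or beyond the rejection level
    'worst', linear in between."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def max_min_select(candidates, goals):
    """Zimmermann max-min: score each candidate by its minimum membership
    over all fuzzy goals, then pick the candidate with the best score."""
    def worst_grade(attrs):
        return min(membership(attrs[g], b, w) for g, (b, w) in goals.items())
    name = max(candidates, key=lambda n: worst_grade(candidates[n]))
    return name, worst_grade(candidates[name])
```

For toy goals cost (aspiration 10, rejection 30) and dose rate (aspiration 1, rejection 5), a hypothetical "lead+poly" mixture at (cost 20, dose 2) scores min(0.5, 0.75) = 0.5 and beats a cheap mixture whose dose grade is nearly zero.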

  3. Optimal control of a programmed motion of a rigid spacecraft using redundant kinematics parameterizations

    International Nuclear Information System (INIS)

    El-Gohary, Awad

    2005-01-01

    This paper considers the problem of optimally controlling a programmed motion of a rigid spacecraft. Given a cost for the spacecraft as a quadratic function of state and control variables, we seek optimal control laws, as functions of the state variables and the angle of programmed rotation, that minimize this cost and asymptotically stabilize the required programmed motion. The stabilizing properties of the proposed controllers are proved using optimal Liapunov techniques. A numerical simulation study is presented.

  4. Selection criteria of residents for residency programs in Kuwait.

    Science.gov (United States)

    Marwan, Yousef; Ayed, Adel

    2013-01-19

    In Kuwait, 21 residency training programs were offered in the year 2011; however, no data are available regarding the criteria used to select residents for these programs. This study aims to provide information about the importance of these criteria. A self-administered questionnaire was used to collect data from members (e.g. chairmen, directors, assistants, etc.) of residency programs in Kuwait. A total of 108 members were invited to participate. They were asked to rate the importance level (on a scale from 1 to 5) of criteria that may affect the acceptance of an applicant to their residency programs. Average scores were calculated for each criterion. Of the 108 members invited, only 12 (11.1%) declined to participate. Interview performance was ranked as the most important criterion for selecting residents (average score: 4.63/5.00), followed by grade point average (average score: 3.78/5.00) and honors during medical school (average score: 3.67/5.00). On the other hand, receiving disciplinary action during medical school and failure in a required clerkship were considered the most concerning criteria used to reject applicants (average scores: 3.83/5.00 and 3.54/5.00, respectively). Minor differences regarding the importance level of each criterion were noted across programs. This study provided general information about the criteria used to accept or reject applicants to residency programs in Kuwait. Future studies should investigate each criterion individually, and assess whether these criteria are related to residents' success during their training.

  5. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning

    International Nuclear Information System (INIS)

    Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan

    2005-01-01

    Automatic beam angle selection is an important but challenging problem in intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required by the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function, and a population of these individuals is evolved through generations by cooperation and competition among the individuals themselves. The optimization results for a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a promising new solution to the beam angle optimization problem, and potentially to other optimization problems in IMRT, though further studies are needed.
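
BASPSO interleaves PSO over beam configurations with conjugate-gradient intensity optimization; neither component is reproduced in the abstract. As a minimal, assumed stand-in, a textbook continuous PSO (inertia plus cognitive and social pulls, with common default coefficients rather than the paper's settings) minimizing a generic objective looks like:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Basic global-best PSO minimizing `objective` over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # per-particle best positions
    pval = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            f = objective(X[i])
            if f < pval[i]:
                pbest[i], pval[i] = X[i][:], f
                if f < gval:
                    gbest, gval = X[i][:], f
    return gbest, gval
```

In BASPSO each "position" would instead be a discrete beam configuration and the objective a dose-based cost evaluated after CG intensity optimization; the cooperation/competition structure is the same.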

  6. Crashworthiness design optimization using multipoint sequential linear programming

    NARCIS (Netherlands)

    Etman, L.F.P.; Adriaens, J.M.T.A.; Slagmaat, van M.T.P.; Schoofs, A.J.G.

    1996-01-01

    A design optimization tool has been developed for the crash victim simulation software MADYMO. The crashworthiness optimization problem is characterized by noisy behaviour of the objective function and constraints. Additionally, objective function and constraint values follow from a computationally

  7. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing test work, ensuring that the system is tested under the constraints of conventional indexes such as fault detection rate (FDR) and fault isolation rate (FIR). From the perspective of equipment maintenance support, isolation ambiguity has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and test groups. The objective function of the proposed model minimizes the test cost under FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new model is more consistent with real, complex engineering systems, and the experimental results verify the effectiveness of the proposed method.

  8. Selection on Optimal Haploid Value Increases Genetic Gain and Preserves More Genetic Diversity Relative to Genomic Selection

    OpenAIRE

    Daetwyler, Hans D.; Hayden, Matthew J.; Spangenberg, German C.; Hayes, Ben J.

    2015-01-01

    Doubled haploids are routinely created and phenotypically selected in plant breeding programs to accelerate the breeding cycle. Genomic selection, which makes use of both phenotypes and genotypes, has been shown to further improve genetic gain through prediction of performance before or without phenotypic characterization of novel germplasm. Additional opportunities exist to combine genomic prediction methods with the creation of doubled haploids. Here we propose an extension to genomic selec...

  9. Portfolio optimization by using linear programming models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. The investment portfolio optimization problem is arranged into a linear programming model, and the optimum solution of the linear program is then determined using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. The analysis shows that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than portfolio optimization performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered an alternative for determining the optimal investment portfolio, particularly with linear programming models.
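
The paper's exact LP-plus-GA formulation is not in the abstract. A toy sketch under invented assumptions (at least two assets, a linear risk proxy in place of absolute standard deviation, and a penalty for exceeding the risk tolerance) shows the genetic-algorithm side:

```python
import random

def ga_portfolio(returns, risks, risk_cap, pop=40, gens=200, seed=7):
    """Toy GA for portfolio weights (assumes >= 2 assets).  Fitness is
    expected return minus a heavy penalty when the linear risk proxy
    exceeds the investor's tolerance; weights stay on the simplex."""
    rng = random.Random(seed)
    n = len(returns)

    def normalize(w):
        s = sum(w)
        return [x / s for x in w]

    def fitness(w):
        ret = sum(wi * ri for wi, ri in zip(w, returns))
        risk = sum(wi * si for wi, si in zip(w, risks))
        return ret - 100.0 * max(0.0, risk - risk_cap)   # penalty method

    population = [normalize([rng.random() for _ in range(n)])
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]                   # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                         # Gaussian mutation
            child[i] = abs(child[i] + rng.gauss(0.0, 0.1)) + 1e-9
            children.append(normalize(child))
        population = elite + children
    best = max(population, key=fitness)
    return best, fitness(best)
```

For returns (0.10, 0.05), risk loadings (0.20, 0.05) and tolerance 0.10, the constrained optimum puts one third of the weight on the riskier asset, for an expected return of about 0.067; the GA converges toward that boundary.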

  10. Noninvasive imaging systems for gametes and embryo selection in IVF programs: a review.

    Science.gov (United States)

    Omidi, Marjan; Faramarzi, Azita; Agharahimi, Azam; Khalili, Mohammad Ali

    2017-09-01

    Optimizing the efficiency of the in vitro fertilization procedure by improving pregnancy rates and simultaneously reducing the risks of multiple pregnancies are the primary goals of the current assisted reproductive technology program. With the move to single embryo transfers, the need for more cost-effective and noninvasive methods for embryo selection prior to transfer is paramount. These aims require advancement toward more accurate gamete/embryo testing and selection procedures using high-tech devices. Therefore, the aim of the present review is to evaluate the efficacy of noninvasive imaging systems in the current literature, focusing on the potential clinical application in infertile patients undergoing assisted reproductive technology treatments. In this regard, three advanced imaging systems, motile sperm organelle morphology examination, polarization microscopy and time-lapse monitoring, for the best selection of gametes and preimplantation embryos are introduced in full.

  11. Analysis of industrial pollution prevention programs in selected Asian countries

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, S.Y. [Argonne National Lab., IL (United States). Environmental Assessment Div.]|[East-West Center, Honolulu, HI (United States)

    1995-05-01

    Industrialization in developing countries is causing increasing environmental damage. Pollution prevention (P2) is an emerging environmental concept that could help developing countries achieve leapfrog goals, bypassing old, polluting technologies and minimizing traditional control practices. The current P2 promotion activities in Hong Kong, the Republic of Korea, the Philippines, Singapore, Taiwan, and Thailand are discussed. These programs, generally initiated in the last 5 years, are classified into five categories: awareness promotion, education and training, information transfer, technical assistance, and financial incentives. All important at the early stages of P2 promotion, these programs should inform industries of the benefits of P2 and help them identify applicable P2 measures. Participation in these programs is voluntary. The limited data indicate that adoption of P2 measures in these countries is not yet widespread. Recommendations for expanding P2 promotion activities include (1) strengthening the design and enforcement of environmental regulations; (2) providing P2 training and education to government workers, nongovernmental organization and labor union officials, university faculties, and news media; (3) tracking the progress of P2 programs; (4) implementing selected P2 mandatory measures; (5) identifying cleaner production technologies for use in new facilities; (6) implementing special programs for small and medium enterprises; and (7) expanding P2 promotion to other sectors, such as agriculture and transportation, and encouraging green design and green consumerism.

  12. Optimization with PDE constraints ESF networking program 'OPTPDE'

    CERN Document Server

    2014-01-01

    This book on PDE Constrained Optimization contains contributions on the mathematical analysis and numerical solution of constrained optimal control and optimization problems where a partial differential equation (PDE) or a system of PDEs appears as an essential part of the constraints. The appropriate treatment of such problems requires a fundamental understanding of the subtle interplay between optimization in function spaces and numerical discretization techniques and relies on advanced methodologies from the theory of PDEs and numerical analysis as well as scientific computing. The contributions reflect the work of the European Science Foundation Networking Programme 'Optimization with PDEs' (OPTPDE).

  13. MULTI-CRITERIA PROGRAMMING METHODS AND PRODUCTION PLAN OPTIMIZATION PROBLEM SOLVING IN METAL INDUSTRY

    OpenAIRE

    Tunjo Perić; Željko Mandić

    2017-01-01

    This paper presents production plan optimization in the metal industry considered as a multi-criteria programming problem. We first provided the definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. Then we applied two multi-criteria programming methods (the STEM method and the PROMETHEE method) in solving a multi-criteria production plan optimization problem in a company from the metal industry. The obtained resul...

  14. Portfolios with fuzzy returns: Selection strategies based on semi-infinite programming

    Science.gov (United States)

    Vercher, Enriqueta

    2008-08-01

    This paper provides new models for portfolio selection in which the returns on securities are considered fuzzy numbers rather than random variables. The investor's problem is to find the portfolio that minimizes the risk of achieving a return that is not less than the return of a riskless asset. The corresponding optimal portfolio is derived using semi-infinite programming in a soft framework. The return on each asset and their membership functions are described using historical data. The investment risk is approximated by mean intervals which evaluate the downside risk for a given fuzzy portfolio. This approach is illustrated with a numerical example.

  15. Selection of an optimal neural network architecture for computer-aided detection of microcalcifications - Comparison of automated optimization techniques

    International Nuclear Information System (INIS)

    Gurcan, Metin N.; Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir; Petrick, Nicholas

    2001-01-01

    Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal numbers of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzmann schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost…
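As a generic illustration of the automated search this record describes (not the authors' exact CNN setup), a simulated-annealing loop over a small discrete grid of architecture parameters might look like this; the synthetic cost below merely stands in for 1 - Az from ROC analysis of a trained network:

```python
# Hedged sketch: simulated annealing over a discrete parameter grid.
# `space` is a list of candidate value lists, one per architecture
# parameter (e.g. number of node groups, kernel size); the real cost
# would come from training and ROC-evaluating the network.

import math
import random

def simulated_annealing(cost, space, T0=1.0, alpha=0.95, iters=500, seed=0):
    random.seed(seed)
    cur = [random.choice(vals) for vals in space]
    cur_c, T = cost(cur), T0
    best, best_c = cur[:], cur_c
    for _ in range(iters):
        cand = cur[:]
        i = random.randrange(len(space))   # perturb one parameter at a time
        cand[i] = random.choice(space[i])
        c = cost(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if c < cur_c or random.random() < math.exp((cur_c - c) / T):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        T *= alpha                          # geometric cooling schedule
    return best, best_c
```

On a toy quadratic cost over a 4x4 grid, the loop reliably locates the global minimum long before exhausting the 500 evaluations.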

  16. Selective Optimization

    Science.gov (United States)

    2015-07-06

    …seen by a ten-fold increase for the first two types of instances. This phenomenon can perhaps be explained by the combinatorial nature of CKVLP… consists of a combination of person types t ∈ T, e.g., child, adult, or elderly. A vaccination policy v ∈ V is a delivery of vaccine to certain types of… forb(P,X1) ∩ forb(P,X2) for any partition (X1, X2) of X. Proposition 7 generalizes the main result of [12] regarding cropped cubes. Moreover, the…

  17. Global blending optimization of laminated composites with discrete material candidate selection and thickness variation

    DEFF Research Database (Denmark)

    Sørensen, Søren N.; Stolpe, Mathias

    2015-01-01

    … but is, however, convex in the original mixed binary nested form. Convexity is the foremost important property of optimization problems, and the proposed method can guarantee the global or near-global optimal solution, unlike most topology optimization methods. The material selection is limited… The capabilities of the method and the effect of active versus inactive manufacturing constraints are demonstrated on several numerical examples of limited size, involving at most 320 binary variables. Most examples are solved to guaranteed global optimality and may constitute benchmark examples for popular topology optimization methods and heuristics based on solving sequences of non-convex problems. The results will among others demonstrate that the difficulty of the posed problem is highly dependent upon the composition of the constitutive properties of the material candidates.

  18. Optimized Policies for Improving Fairness of Location-based Relay Selection

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2013-01-01

    For WLAN systems in which relaying is used to improve throughput performance for nodes located at the cell edge, node mobility and information collection delays can have a significant impact on the performance of a relay selection scheme. In this paper we extend our existing Markov chain modeling framework for relay selection to allow for efficient calculation of relay policies given either mean throughput or kth throughput percentile as the optimization criterion. In a scenario with a static access point, a static relay, and a mobile destination node, the kth throughput percentile optimization…

  19. Simulation and Optimization of Control of Selected Phases of Gyroplane Flight

    Directory of Open Access Journals (Sweden)

    Wienczyslaw Stalewski

    2018-02-01

    Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improvement of aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane, a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology of computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step-by-step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess aerodynamic forces acting on the aircraft. As the numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of the conducted computational optimizations proved the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.

  20. Uncertain programming models for portfolio selection with uncertain returns

    Science.gov (United States)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model by using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed in the paper to provide a general method for solving the new models in general cases. At last, two numerical examples are provided to show the performance and applications of the models and algorithm.

  1. On the Lasserre hierarchy of semidefinite programming relaxations of convex polynomial optimization problems

    NARCIS (Netherlands)

    de Klerk, E.; Laurent, M.

    2011-01-01

    The Lasserre hierarchy of semidefinite programming approximations to convex polynomial optimization problems is known to converge finitely under some assumptions. [J. B. Lasserre, Convexity in semialgebraic geometry and polynomial optimization, SIAM J. Optim., 19 (2009), pp. 1995–2014]. We give a

  2. Ckmeans.1d.dp: Optimal k-means Clustering in One Dimension by Dynamic Programming.

    Science.gov (United States)

    Wang, Haizhou; Song, Mingzhou

    2011-12-01

    The heuristic k-means algorithm, widely used for cluster analysis, does not guarantee optimality. We developed a dynamic programming algorithm for optimal one-dimensional clustering. The algorithm is implemented as an R package called Ckmeans.1d.dp. We demonstrate its advantage in optimality and runtime over the standard iterative k-means algorithm.
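The core idea behind such a dynamic program is that, in one dimension, an optimal clustering always uses contiguous runs of the sorted data. A minimal O(k·n²) sketch of this idea (the actual Ckmeans.1d.dp package uses faster variants) is:

```python
# Sketch of optimal 1-D k-means by dynamic programming: sort the points,
# then D[m][i] = minimal within-cluster sum of squares for partitioning
# the first i sorted points into m contiguous clusters.

def optimal_1d_kmeans(xs, k):
    xs = sorted(xs)
    n = len(xs)
    # prefix sums of x and x^2 give each cluster's cost in O(1)
    ps = [0.0] * (n + 1)
    pss = [0.0] * (n + 1)
    for i, x in enumerate(xs, 1):
        ps[i] = ps[i - 1] + x
        pss[i] = pss[i - 1] + x * x

    def sse(j, i):  # sum of squared deviations of xs[j:i] from its mean
        s = ps[i] - ps[j]
        return (pss[i] - pss[j]) - s * s / (i - j)

    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    D[0][0] = 0.0
    for m in range(1, k + 1):
        for i in range(m, n + 1):
            for j in range(m - 1, i):      # last cluster is xs[j:i]
                c = D[m - 1][j] + sse(j, i)
                if c < D[m][i]:
                    D[m][i], back[m][i] = c, j
    # recover the cluster boundaries by backtracking
    clusters, i = [], n
    for m in range(k, 0, -1):
        j = back[m][i]
        clusters.append(xs[j:i])
        i = j
    return list(reversed(clusters)), D[k][n]
```

For example, `optimal_1d_kmeans([11, 2, 3, 1, 10, 12], 2)` splits the data into `[1, 2, 3]` and `[10, 11, 12]`, and unlike iterative k-means the answer cannot be a local optimum.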

  3. Optimized Power Allocation and Relay Location Selection in Cooperative Relay Networks

    Directory of Open Access Journals (Sweden)

    Jianrong Bao

    2017-01-01

    An incremental selection hybrid decode-amplify forward (ISHDAF) scheme for two-hop single-relay systems and a relay selection strategy based on the hybrid decode-amplify-and-forward (HDAF) scheme for multirelay systems are proposed, along with an optimized power allocation, for the Internet of Things (IoT). Given total power as the constraint and outage probability as the objective function, the proposed scheme possesses power efficiency better than equal power allocation. By the ISHDAF scheme and the HDAF relay selection strategy, an optimized power allocation for both the source and relay nodes is obtained, as well as an effective reduction of outage probability. In addition, the optimal relay location for maximizing the gain of the proposed algorithm is also investigated and designed. Simulation results show that, in both single-relay and multirelay selection systems, outage probability gains are obtained by the proposed scheme. In comparison with equal power allocation, gains of nearly 0.1695 are obtained in the ISHDAF single-relay network at a total power of 2 dB, and about 0.083 in the HDAF relay selection system with 2 relays at a total power of 2 dB.

  4. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    Science.gov (United States)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

    Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared to the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to bring human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.

  5. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the case of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration and program simplicity in finding the optimal solution.

  6. Economic optimization and evolutionary programming when using remote sensing data

    OpenAIRE

    Shamin Roman; Alberto Gabriel Enrike; Uryngaliyeva Ayzhana; Semenov Aleksandr

    2018-01-01

    The article considers the issues of optimizing the use of remote sensing data. A mathematical model is built to describe the economic effect of the use of remote sensing data. It is shown that this model is an ill-posed optimization task, and a numerical method for solving this problem is given. The article also discusses how to optimize an organizational structure by using a genetic algorithm based on remote sensing. The methods considered allow the use of remote sensing data in an optimal way. The proposed mathematical m...

  7. Parallel algorithms for islanded microgrid with photovoltaic and energy storage systems planning optimization problem: Material selection and quantity demand optimization

    Science.gov (United States)

    Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun

    2017-02-01

    With the development of roof photovoltaic power (PV) generation technology and the increasingly urgent need to improve supply reliability levels in remote areas, islanded microgrid with photovoltaic and energy storage systems (IMPE) is developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors that hinder the development of IMPE. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. The time sequential simulation method is applied to optimize material demands of the IMPE. The model is solved by parallel algorithms that are provided by a commercial solver named CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results on the case system indicate that the optimization model and corresponding algorithm is feasible. Guidance for material selection and quantity demand for IMPEs in remote areas is provided by this method.

  8. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  9. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems.

    Science.gov (United States)

    Cho, Ming-Yuan; Hoang, Thi Thom

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.
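The PSO component of such a classifier can be sketched generically. Here the global-best variant minimizes a plain test function, since the SVM wrapper and feature-mask encoding are specific to the paper, and the inertia and acceleration parameters below are conventional choices, not values taken from the record:

```python
# Hedged sketch of global-best particle swarm optimization. In a PSO-SVM
# setup, f(x) would train an SVM with the candidate hyper-parameters and
# feature mask encoded in x and return a cross-validation error.

import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=0):
    random.seed(seed)
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]             # global best position/value
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)  # clamp to bounds
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

On the 2-D sphere function the swarm collapses toward the origin within the default budget, which is the behavior the hyper-parameter search relies on.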

  10. Feature Selection and Parameters Optimization of SVM Using Particle Swarm Optimization for Fault Classification in Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-01-01

    Fast and accurate fault classification is essential to power system operations. In this paper, in order to classify electrical faults in radial distribution systems, a particle swarm optimization (PSO) based support vector machine (SVM) classifier has been proposed. The proposed PSO based SVM classifier is able to select appropriate input features and optimize SVM parameters to increase classification accuracy. Further, a time-domain reflectometry (TDR) method with a pseudorandom binary sequence (PRBS) stimulus has been used to generate a dataset for purposes of classification. The proposed technique has been tested on a typical radial distribution network to identify ten different types of faults considering 12 given input features generated by using Simulink software and MATLAB Toolbox. The success rate of the SVM classifier is over 97%, which demonstrates the effectiveness and high efficiency of the developed method.

  11. Factors that influence medical student selection of an emergency medicine residency program: implications for training programs.

    Science.gov (United States)

    Love, Jeffrey N; Howell, John M; Hegarty, Cullen B; McLaughlin, Steven A; Coates, Wendy C; Hopson, Laura R; Hern, Gene H; Rosen, Carlo L; Fisher, Jonathan; Santen, Sally A

    2012-04-01

    An understanding of student decision-making when selecting an emergency medicine (EM) training program is essential for program directors as they enter interview season. To build upon preexisting knowledge, a survey was created to identify and prioritize the factors influencing candidate decision-making of U.S. medical graduates. This was a cross-sectional, multi-institutional study that anonymously surveyed U.S. allopathic applicants to EM training programs. It took place in the 3-week period between the 2011 National Residency Matching Program (NRMP) rank list submission deadline and the announcement of match results. Of 1,525 invitations to participate, 870 candidates (57%) completed the survey. Overall, 96% of respondents stated that both geographic location and individual program characteristics were important to decision-making, with approximately equal numbers favoring location when compared to those who favored program characteristics. The most important factors in this regard were preference for a particular geographic location (74.9%, 95% confidence interval [CI] = 72% to 78%) and to be close to spouse, significant other, or family (59.7%, 95% CI = 56% to 63%). Factors pertaining to geographic location tend to be out of the control of the program leadership. The most important program factors include the interview experience (48.9%, 95% CI = 46% to 52%), personal experience with the residents (48.5%, 95% CI = 45% to 52%), and academic reputation (44.9%, 95% CI = 42% to 48%). Unlike location, individual program factors are often either directly or somewhat under the control of the program leadership. Several other factors were ranked as the most important factor a disproportionate number of times, including a rotation in that emergency department (ED), orientation (academic vs. community), and duration of training (3-year vs. 4-year programs). For a subset of applicants, these factors had particular importance in overall decision-making. The vast majority…

  12. AHP-Based Optimal Selection of Garment Sizes for Online Shopping

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Garment online shopping has been accepted by more and more consumers in recent years. In online shopping, a buyer chooses the garment size judged only by his own experience, without trying it on, so the selected garment may not be the best fit for the buyer due to the variety of body figures. Thus, we propose a method for optimal selection of garment sizes for online shopping based on the Analytic Hierarchy Process (AHP). The hierarchical structure model for optimal selection of garment sizes is constructed, and the best-fitting garment for a buyer is found by calculating the matching degrees between the individual's measurements and the corresponding key-part values of ready-to-wear clothing sizes. In order to demonstrate its feasibility, we provide an example of selecting the best-fitting sizes of men's bottoms. The result shows that the proposed method is useful in online clothing sales applications.
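A minimal sketch of the two AHP ingredients this record mentions, priority weights from a pairwise comparison matrix (geometric-mean approximation) and a matching-degree score between body measurements and size key-part values; the key parts, measurements, and scoring rule below are entirely hypothetical:

```python
# Hedged illustration of AHP-weighted size matching, not the paper's
# exact model. Key parts assumed: waist, hip, inseam.

import math

def ahp_priorities(M):
    # geometric-mean approximation of the AHP priority vector
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def best_size(measurements, sizes, weights):
    # matching degree: AHP-weighted closeness between the buyer's
    # measurements and each size's key-part values (hypothetical rule)
    def match(vals):
        return sum(w / (1.0 + abs(m - v))
                   for w, m, v in zip(weights, measurements, vals))
    return max(sizes, key=lambda item: match(item[1]))[0]

# a perfectly consistent comparison matrix saying waist matters twice
# as much as hip and four times as much as inseam
weights = ahp_priorities([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])
```

With measurements `[100, 80, 40]` and candidate sizes `("M", [98, 78, 39])` and `("L", [102, 84, 41])`, the weighted matching degree picks "M".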

  13. Selective waste collection optimization in Romania and its impact to urban climate

    Science.gov (United States)

    Șercăianu, Mihai; Iacoboaea, Cristina; Petrescu, Florian; Aldea, Mihaela; Luca, Oana; Gaman, Florian; Parlow, Eberhard

    2016-08-01

    According to European Directives, transposed into national legislation, the Member States should have organized separate collection systems at least for paper, metal, plastic, and glass by 2015. In Romania, since 2011 only 12% of collected municipal waste has been recovered, the rest being stored in landfills, although storage is considered the last option in the waste hierarchy. At the same time, only 4% of municipal waste was collected selectively. Surveys have shown that Romanian people do not have selective collection bins close to their residences. The article aims to analyze the current situation in Romania in the field of waste collection and management and to propose a layout of selective collection containers, using geographic information systems tools, for a case study in Romania. Route optimization is performed based on remote sensing technologies and network analyst protocols. By optimizing the selective collection system, greenhouse gas, particle, and dust emissions can be reduced.

  14. The Effect of Exit Strategy on Optimal Portfolio Selection with Birandom Returns

    OpenAIRE

    Cao, Guohua; Shan, Dan

    2013-01-01

    The aims of this paper are to use a birandom variable to denote the stock return selected by some recurring technical patterns and to study the effect of exit strategy on optimal portfolio selection with birandom returns. Firstly, we propose a new method to estimate the stock return and use birandom distribution to denote the final stock return which can reflect the features of technical patterns and investors' heterogeneity simultaneously; secondly, we build a birandom safety-first model and...

  15. Selective pressures on C4 photosynthesis evolution in grasses through the lens of optimality

    OpenAIRE

    Akcay, Erol; Zhou, Haoran; Helliker, Brent

    2016-01-01

    CO2, temperature, water availability and light intensity were potential selective pressures to propel the initial evolution and global expansion of C4 photosynthesis in grasses. To tease apart the primary selective pressures along the evolutionary trajectory, we coupled photosynthesis and hydraulics models and optimized photosynthesis over stomatal resistance and leaf/fine-root allocation. We also examined the importance of nitrogen reallocation from the dark to the light reactions. Our resul...

  16. Direct-aperture optimization applied to selection of beam orientations in intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Bedford, J L; Webb, S

    2007-01-01

    Direct-aperture optimization (DAO) was applied to iterative beam-orientation selection in intensity-modulated radiation therapy (IMRT), so as to ensure a realistic segmental treatment plan at each iteration. Nested optimization engines dealt separately with gantry angles, couch angles, collimator angles, segment shapes, segment weights and wedge angles. Each optimization engine performed a random search with successively narrowing step sizes. For optimization of segment shapes, the filtered backprojection (FBP) method was first used to determine desired fluence, the fluence map was segmented, and then constrained direct-aperture optimization was used thereafter. Segment shapes were fully optimized when a beam angle was perturbed, and minimally re-optimized otherwise. The algorithm was compared with a previously reported method using FBP alone at each orientation iteration. An example case consisting of a cylindrical phantom with a hemi-annular planning target volume (PTV) showed that for three-field plans, the method performed better than when using FBP alone, but for five or more fields, neither method provided much benefit over equally spaced beams. For a prostate case, improved bladder sparing was achieved through the use of the new algorithm. A plan for partial scalp treatment showed slightly improved PTV coverage and lower irradiated volume of brain with the new method compared to FBP alone. It is concluded that, although the method is computationally intensive and not suitable for searching large unconstrained regions of beam space, it can be used effectively in conjunction with prior class solutions to provide individually optimized IMRT treatment plans

  17. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  18. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
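The dual subgradient update at the heart of this record can be illustrated on a toy convex problem, minimize x² subject to x ≥ 1, using the diminishing step size the abstract's convergence guarantee refers to. This is a minimal sketch under stated assumptions; the problem and all values are hypothetical, not from the paper, which targets SDP relaxations of AC optimal power flow.

```python
def dual_ascent(steps=2000):
    """Toy dual subgradient ascent for: min x^2  s.t.  x >= 1."""
    lam = 0.0                        # dual multiplier for 1 - x <= 0
    x = 0.0
    for k in range(1, steps + 1):
        x = lam / 2.0                # argmin_x of the Lagrangian x^2 + lam*(1 - x)
        g = 1.0 - x                  # subgradient of the dual function at lam
        lam = max(0.0, lam + g / k)  # diminishing step 1/k, projected onto lam >= 0
    return x, lam

x_opt, lam_opt = dual_ascent()       # approaches the optimum x = 1, lam = 2
```

The primal iterate x plays the role of the dynamical-system reference signal in the paper's setting: it is re-computed from the current dual variable at every step.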

  19. [Prevention of cardiovascular diseases - Prophylactic program in a selected enterprise].

    Science.gov (United States)

    Siedlecka, Jadwiga; Gadzicka, Elżbieta; Szyjkowska, Agata; Siedlecki, Patryk; Szymczak, Wiesław; Makowiec-Dąbrowska, Teresa; Bortkiewicz, Alicja

    2017-10-17

In Poland cardiovascular diseases (CVD), classified as work-related diseases, are responsible for 25% of disability and cause 50% of all deaths, including 26.9% of deaths in people aged under 65 years. The aim of the study was to analyze employee expectations regarding CVD-oriented prophylactic activities in the selected enterprise. A questionnaire, developed for this study, consists of: socio-demographic data, job characteristics, occupational factors, and questions about the respondents' expectations concerning the prevention program. The study group comprised 407 multi-profile company employees aged (mean) 46.7 years (standard deviation (SD) = 9.1), including 330 men (81.1%), mean age = 46.9 (SD = 9.2), and 77 women (18.9%), mean age = 45.9 (SD = 8.2). The study was performed using the method of auditorium survey. Employees declared the need for actions related to physical activity: use of a gym, swimming pool or tennis courts (56.5%), and to smoking habits: education sessions on quitting smoking (24.6%). Few people were interested in activities related to a healthy diet. According to the majority of the study group, the scope of preventive examinations should be expanded. Based on our own findings and literature data, a CVD-oriented preventive program addressed to the analyzed enterprise was prepared. The program will be presented in another paper. The results showed significant quantitative and qualitative differences in the classic and occupational CVD risk factors between men and women, as well as in preferences for participation in prevention programs. Therefore, gender differences should be taken into account when planning prevention programs. Med Pr 2017;68(6):757-769. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  20. Adaptive feature selection using v-shaped binary particle swarm optimization.

    Science.gov (United States)

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
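The V-shaped transfer function this record's method builds on maps a particle's continuous velocity to a bit-flip probability, rather than directly to a bit value as the classic S-shaped rule does. A minimal sketch of that position-update rule follows; the |tanh| transfer is one common V-shaped choice, and the velocity values are hypothetical (the full velocity dynamics of PSO are omitted).

```python
import math
import random

def v_transfer(v):
    """V-shaped transfer: maps a velocity to a probability of flipping a bit."""
    return abs(math.tanh(v))

def update_position(bits, velocity, rng):
    """Flip each bit with probability v_transfer(v): large |v| means the
    current bit is likely wrong, so the particle changes it."""
    return [1 - b if rng.random() < v_transfer(v) else b
            for b, v in zip(bits, velocity)]

rng = random.Random(0)
bits = [0, 1, 0, 1]                               # a candidate feature subset
new_bits = update_position(bits, [3.0, -3.0, 0.0, 0.0], rng)
```

With zero velocity the bit never flips, so converged particles keep their subsets stable, which is the search-ability advantage over standard binary PSO that the abstract refers to.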

  1. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
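The Pareto-domination relationship used to build the front of nondominated feature subsets can be sketched directly. This is a generic illustration, not the authors' implementation; the objective pairs below (classification error, feature cost), both minimized, are hypothetical.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization of all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the nondominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical subsets scored as (classification error, feature cost)
pts = [(0.10, 5.0), (0.08, 9.0), (0.10, 7.0), (0.12, 4.0)]
front = pareto_front(pts)   # (0.10, 7.0) is dominated by (0.10, 5.0)
```

The external archive mentioned in the abstract is essentially a persistent version of this front, updated as the swarm discovers new subsets.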

  2. A new and fast image feature selection method for developing an optimal mammographic mass detection scheme.

    Science.gov (United States)

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-08-01

Selecting optimal features from a large image feature pool remains a major challenge in developing computer-aided detection (CAD) schemes of medical images. The objective of this study is to investigate a new approach to significantly improve efficacy of image feature selection and classifier optimization in developing a CAD scheme of mammographic masses. An image dataset including 1600 regions of interest (ROIs) in which 800 are positive (depicting malignant masses) and 800 are negative (depicting CAD-generated false positive regions) was used in this study. After segmentation of each suspicious lesion by a multilayer topographic region growth algorithm, 271 features were computed in different feature categories including shape, texture, contrast, isodensity, spiculation, local topological features, as well as the features related to the presence and location of fat and calcifications. Besides computing features from the original images, the authors also computed new texture features from the dilated lesion segments. In order to select optimal features from this initial feature pool and build a highly performing classifier, the authors examined and compared four feature selection methods to optimize an artificial neural network (ANN) based classifier, namely: (1) Phased Searching with NEAT in a Time-Scaled Framework, (2) a sequential floating forward selection (SFFS) method, (3) a genetic algorithm (GA), and (4) a sequential forward selection (SFS) method. Performances of the four approaches were assessed using a tenfold cross validation method. Among these four methods, SFFS has the highest efficacy: it takes only 3%-5% of the computational time of the GA approach and yields the highest performance level, with an area under the receiver operating characteristic curve (AUC) = 0.864 ± 0.034. The results also demonstrated that, except when using GA, including the new texture features computed from the dilated mass segments improved the AUC results of the ANNs optimized
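The winning SFFS strategy alternates a forward step (add the best single feature) with a floating step (drop any earlier feature whose removal improves the score). A minimal sketch follows; the scoring function and feature IDs are hypothetical toys standing in for the ANN-based evaluation, and production SFFS adds bookkeeping to guarantee termination.

```python
def sffs(features, score, k):
    """Sequential floating forward selection, illustrative version."""
    S = []
    while len(S) < k:
        # forward step: add the single feature that best improves the score
        best = max((f for f in features if f not in S),
                   key=lambda f: score(S + [f]))
        S.append(best)
        # floating step: drop any earlier feature whose removal helps
        improved = True
        while improved and len(S) > 2:
            improved = False
            for f in S[:-1]:                 # never drop the just-added feature
                T = [x for x in S if x != f]
                if score(T) > score(S):
                    S, improved = T, True
                    break
    return S

# toy score: features 2 and 3 are redundant, so using both is penalized
def score(S):
    base = {1: 0.5, 2: 0.4, 3: 0.4, 4: 0.3}
    penalty = 0.35 if 2 in S and 3 in S else 0.0
    return sum(base[f] for f in S) - penalty

subset = sffs([1, 2, 3, 4], score, k=3)   # avoids the redundant 2+3 pair
```

Unlike plain SFS, the floating step lets the search escape a greedy early choice, which is why SFFS can match GA-level results at a fraction of the cost reported above.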

  3. Optimal relay selection and power allocation for cognitive two-way relaying networks

    KAUST Repository

    Pandarakkottilil, Ubaidulla; Aï ssa, Sonia

    2012-01-01

    In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays

  4. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) requires no assumption about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of a hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose an optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1˂ε˂1, accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
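All RQA measures derive from the binary recurrence matrix: entry (i, j) is 1 when states i and j lie within the threshold ε under the chosen distance norm. A minimal 1-D sketch follows; the series, ε, and the simple absolute-difference norm are hypothetical illustrations, not the paper's EEG embedding.

```python
def recurrence_matrix(series, eps, norm=lambda a, b: abs(a - b)):
    """Binary recurrence plot: R[i][j] = 1 when states i and j are
    within eps under the chosen distance norm (here 1-D absolute value)."""
    n = len(series)
    return [[1 if norm(series[i], series[j]) < eps else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(R):
    """Simplest RQA measure: the density of recurrent points."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)

R = recurrence_matrix([0.0, 0.1, 1.0, 1.05], eps=0.2)
rr = recurrence_rate(R)   # close pairs (0, 1) and (2, 3) recur
```

Features such as the Trans measure highlighted above are further statistics computed from this same matrix; the genetic algorithm then searches over which of them to feed the classifier.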

  5. Optimal individual supervised hyperspectral band selection distinguishing savannah trees at leaf level

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-08-01

Full Text Available computer intensive search technique to find the bands optimizing the value of TSAM as a function of the bands, by continually updating this function at successive steps. Band selection by means of minimizing the total accumulated correlation...

  6. Compensatory Analysis and Optimization for MADM for Heterogeneous Wireless Network Selection

    Directory of Open Access Journals (Sweden)

    Jian Zhou

    2016-01-01

Full Text Available In the next-generation heterogeneous wireless networks, a mobile terminal with multiple interfaces may have network access from different service providers using various technologies. In spite of this heterogeneity, seamless intersystem mobility is a mandatory requirement. One of the major challenges for seamless mobility is the creation of a network selection scheme, which lets users select the network with the best overall performance among the different types of networks. However, the optimal network may not be the most reasonable one due to the compensation inherent in MADM (Multiple Attribute Decision Making); such a network is called a pseudo-optimal network. This paper conducts a performance evaluation of a number of widely used MADM-based methods for network selection that aim to keep mobile users always best connected anywhere and anytime, where both subjective and objective weights are considered. The performance analysis shows that the selection scheme based on MEW (the weighted multiplicative method) and combination weighting can better avoid accessing a pseudo-optimal network, balancing network load and reducing the ping-pong effect, in comparison with three other MADM solutions.
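The MEW scoring rule singled out above multiplies each normalized attribute raised to its weight, so one very poor attribute drags the whole score down; this is why it compensates less than additive rules and avoids pseudo-optimal networks. A minimal sketch, with hypothetical candidate networks and weights:

```python
def mew_score(attrs, weights):
    """Weighted multiplicative (MEW) score: the product of benefit
    attributes (normalized to (0, 1]) raised to their weights."""
    s = 1.0
    for a, w in zip(attrs, weights):
        s *= a ** w
    return s

# hypothetical candidates scored on (bandwidth, 1/delay, 1/cost),
# each already normalized to (0, 1]; weights sum to 1
weights = [0.5, 0.3, 0.2]
networks = {"WLAN": [0.9, 0.6, 0.8], "LTE": [0.7, 0.9, 0.5]}
best = max(networks, key=lambda n: mew_score(networks[n], weights))
```

An additive (SAW-style) score would let LTE's strong delay attribute compensate for its weak cost attribute more than the multiplicative form does.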

  7. Comonotonic approximations for a generalized provisioning problem with application to optimal portfolio selection

    NARCIS (Netherlands)

    van Weert, K.; Dhaene, J.; Goovaerts, M.

    2011-01-01

    In this paper we discuss multiperiod portfolio selection problems related to a specific provisioning problem. Our results are an extension of Dhaene et al. (2005) [14], where optimal constant mix investment strategies are obtained in a provisioning and savings context, using an analytical approach

  8. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Zhe

    2014-01-01

    The investment of large scale offshore wind farms is high in which the electrical system has a significant contribution to the total cost. As one of the key components, the cost of the connection cables affects the initial investment a lot. The development of cable manufacturing provides a vast...... and systematical way for the optimal selection of cables in large scale offshore wind farms....

  9. Self-Regulatory Strategies in Daily Life: Selection, Optimization, and Compensation and Everyday Memory Problems

    Science.gov (United States)

    Robinson, Stephanie A.; Rickenbach, Elizabeth H.; Lachman, Margie E.

    2016-01-01

    The effective use of self-regulatory strategies, such as selection, optimization, and compensation (SOC) requires resources. However, it is theorized that SOC use is most advantageous for those experiencing losses and diminishing resources. The present study explored this seeming paradox within the context of limitations or constraints due to…

  10. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  11. Analysis of multicriteria models application for selection of an optimal artificial lift method in oil production

    Directory of Open Access Journals (Sweden)

    Crnogorac Miroslav P.

    2016-01-01

Full Text Available Today, different types of deep pumps (piston, centrifugal, screw, hydraulic and water-jet pumps) and gas lift (continuous, intermittent and plunger) are applied worldwide for the exploitation of oil reservoirs by artificial lift methods. The maximum oil production achieved by these exploitation methods differs significantly. In order to select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE to the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking of the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all of the above multicriteria models except PROMETHEE, the optimal exploitation methods are deep piston pumps and gas lift.
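Of the models compared in this record, TOPSIS is the most compact to sketch: alternatives are ranked by relative closeness to an ideal solution in the weighted, normalized criteria space. The ratings and weights below are hypothetical stand-ins for lift-method criteria, with all criteria treated as benefit criteria.

```python
import math

def topsis(matrix, weights):
    """TOPSIS sketch: relative closeness to the ideal solution
    (all criteria treated as benefit criteria, higher is better)."""
    m = len(weights)
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    V = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    ideal = [max(col) for col in zip(*V)]   # best value per criterion
    anti = [min(col) for col in zip(*V)]    # worst value per criterion
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [d(v, anti) / (d(v, ideal) + d(v, anti)) for v in V]

# hypothetical ratings of three lift methods on
# (production, reliability, cost-efficiency)
scores = topsis([[9, 7, 5], [6, 8, 9], [7, 6, 6]], [0.5, 0.3, 0.2])
best = max(range(len(scores)), key=scores.__getitem__)
```

VIKOR, ELECTRE, AHP and PROMETHEE differ mainly in how they aggregate and compare the same decision matrix, which is why the paper can run all five on one data set.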

  12. Optimal relay selection and power allocation for cognitive two-way relaying networks

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2012-06-01

    In this paper, we present an optimal scheme for power allocation and relay selection in a cognitive radio network where a pair of cognitive (or secondary) transceiver nodes communicate with each other assisted by a set of cognitive two-way relays. The secondary nodes share the spectrum with a licensed primary user (PU), and each node is assumed to be equipped with a single transmit/receive antenna. The interference to the PU resulting from the transmission from the cognitive nodes is kept below a specified limit. We propose joint relay selection and optimal power allocation among the secondary user (SU) nodes achieving maximum throughput under transmit power and PU interference constraints. A closed-form solution for optimal allocation of transmit power among the SU transceivers and the SU relay is presented. Furthermore, numerical simulations and comparisons are presented to illustrate the performance of the proposed scheme. © 2012 IEEE.

  13. Project evaluation and selection using fuzzy Delphi method and zero - one goal programming

    Science.gov (United States)

    Alias, Suriana; Adna, Nofarziah; Arsad, Roslah; Soid, Siti Khuzaimah; Ali, Zaileha Md

    2014-12-01

Project evaluation and selection is an important concern for a board of directors trying to maximize all possible goals. Assessing the problems that occur in the organization's plan is the first phase of the decision-making process. The company needs a group of experts to evaluate the problems. The Fuzzy Delphi Method (FDM) is a systematic procedure for eliciting the group's opinion in order to best evaluate project performance. This paper proposes an evaluation and selection of the best alternative project based on a combination of FDM and Zero-One Goal Programming (ZOGP). ZOGP is used to solve the multi-criteria decision making in the final decision stage, using the optimization software LINDO 6.1. An empirical example of an ongoing decision-making project in Johor, Malaysia is implemented as a case study.
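In zero-one goal programming, each candidate project gets a binary decision variable and the objective minimizes weighted deviations from goal targets rather than a single criterion. The record solves such a model with LINDO; for a handful of projects the same idea can be sketched by exhaustive search, with hypothetical project data and goals.

```python
from itertools import product

# hypothetical projects: (benefit, cost)
projects = [(6, 5), (5, 4), (4, 3), (3, 2)]
BENEFIT_GOAL, COST_GOAL = 10, 8
W_UNDER, W_OVER = 2.0, 1.0          # penalty weights on the two deviations

def deviation(x):
    """Weighted sum of goal deviations for a 0-1 selection vector x."""
    benefit = sum(b for (b, c), xi in zip(projects, x) if xi)
    cost = sum(c for (b, c), xi in zip(projects, x) if xi)
    under = max(0, BENEFIT_GOAL - benefit)   # d- : benefit shortfall
    over = max(0, cost - COST_GOAL)          # d+ : budget overrun
    return W_UNDER * under + W_OVER * over

# enumerate all 0-1 vectors (fine for small n; a solver scales this up)
best = min(product([0, 1], repeat=len(projects)), key=deviation)
```

Here the selection of projects 1 and 3 meets both goals exactly, so its total deviation is zero; in the paper's workflow, FDM would first supply the expert-derived weights and targets.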

  14. Bias due to sample selection in propensity score matching for a supportive housing program evaluation in New York City.

    Directory of Open Access Journals (Sweden)

    Sungwoo Lim

    Full Text Available OBJECTIVES: Little is known about influences of sample selection on estimation in propensity score matching. The purpose of the study was to assess potential selection bias using one-to-one greedy matching versus optimal full matching as part of an evaluation of supportive housing in New York City (NYC. STUDY DESIGN AND SETTINGS: Data came from administrative data for 2 groups of applicants who were eligible for an NYC supportive housing program in 2007-09, including chronically homeless adults with a substance use disorder and young adults aging out of foster care. We evaluated the 2 matching methods in their ability to balance covariates and represent the original population, and in how those methods affected outcomes related to Medicaid expenditures. RESULTS: In the population with a substance use disorder, only optimal full matching performed well in balancing covariates, whereas both methods created representative populations. In the young adult population, both methods balanced covariates effectively, but only optimal full matching created representative populations. In the young adult population, the impact of the program on Medicaid expenditures was attenuated when one-to-one greedy matching was used, compared with optimal full matching. CONCLUSION: Given covariate balancing with both methods, attenuated program impacts in the young adult population indicated that one-to-one greedy matching introduced selection bias.
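One-to-one greedy matching, the method found to introduce bias above, pairs each treated unit with its nearest remaining control without replacement, so early matches can exhaust the good controls. A minimal sketch with hypothetical propensity scores:

```python
def greedy_match(treated, controls):
    """One-to-one greedy propensity score matching: each treated unit
    takes the nearest remaining control, without replacement. Match
    quality depends on processing order, one source of the selection
    bias discussed in the record above."""
    pool = dict(controls)                 # control id -> propensity score
    matches = {}
    for tid, p in treated:
        best = min(pool, key=lambda cid: abs(pool[cid] - p))
        matches[tid] = best
        del pool[best]                    # matched controls leave the pool
    return matches

treated = [("t1", 0.30), ("t2", 0.32)]
controls = [("c1", 0.31), ("c2", 0.50)]
m = greedy_match(treated, controls)       # t2 is left with a poor match
```

Optimal full matching instead assigns controls globally (possibly many-to-one) to minimize total distance, which is why it balanced covariates and preserved representativeness better in this evaluation.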

  15. Path selection and bandwidth allocation in MPLS networks: a nonlinear programming approach

    Science.gov (United States)

    Burns, J. E.; Ott, Teunis J.; de Kock, Johan M.; Krzesinski, Anthony E.

    2001-07-01

    Multi-protocol Label Switching extends the IPv4 destination-based routing protocols to provide new and scalable routing capabilities in connectionless networks using relatively simple packet forwarding mechanisms. MPLS networks carry traffic on virtual connections called label switched paths. This paper considers path selection and bandwidth allocation in MPLS networks in order to optimize the network quality of service. The optimization is based upon the minimization of a non-linear objective function which under light load simplifies to OSPF routing with link metrics equal to the link propagation delays. The behavior under heavy load depends on the choice of certain parameters: It can essentially be made to minimize maximal expected utilization, or to maximize minimal expected weighted slacks (both over all links). Under certain circumstances it can be made to minimize the probability that a link has an instantaneous offered load larger than its transmission capacity. We present a model of an MPLS network and an algorithm to find and capacitate optimal LSPs. The algorithm is an improvement of the well-known flow deviation non-linear programming method. The algorithm is applied to compute optimal LSPs for several test networks carrying a single traffic class.

  16. Optimal scheduling of micro grids based on single objective programming

    Science.gov (United States)

    Chen, Yue

    2018-04-01

Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule a micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. This paper considers the role of the battery, under the precondition that the power exchanged between the micro-grid and the main grid does not exceed 150 kW, and studies economic load scheduling with the goal of minimizing the electricity cost (including wind curtailment). An optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of the battery participating in regulation are analyzed.
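A genetic algorithm for this kind of dispatch problem can be sketched on a toy battery-scheduling model: a bit per hour decides whether the load is served from the battery or bought from the grid. The tariff, load, battery size, and GA settings below are all hypothetical, and the model ignores the paper's 150 kW exchange limit and wind curtailment terms.

```python
import random

PRICE = [0.3, 0.8, 0.5, 0.9]   # hypothetical hourly tariff
LOAD = [2, 2, 2, 2]            # hourly demand (kWh)
BATTERY = 4                    # total battery energy available (kWh)

def cost(x):
    """Electricity cost of a schedule: x[h] = 1 serves hour h from the
    battery while energy remains; otherwise the grid is used."""
    energy, total = BATTERY, 0.0
    for h, use in enumerate(x):
        if use and energy >= LOAD[h]:
            energy -= LOAD[h]
        else:
            total += PRICE[h] * LOAD[h]
    return total

def ga(pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in LOAD] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)                  # elitist: the best always survives
        P = P[:pop // 2]                  # truncation selection
        while len(P) < pop:
            a, b = rng.sample(P[:5], 2)   # parents from the fittest few
            cut = rng.randrange(1, len(LOAD))
            child = a[:cut] + b[cut:]     # one-point crossover
            if rng.random() < 0.2:        # bit-flip mutation
                child[rng.randrange(len(LOAD))] ^= 1
            P.append(child)
    return min(P, key=cost)

best = ga()   # tends to discharge the battery in the expensive hours
```

The best schedule here saves the limited battery energy for the two priciest hours; the paper's model plays the same game with real tariffs, wind output, and exchange constraints.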

  17. An Examination of Program Selection Criteria for Part-Time MBA Students

    Science.gov (United States)

    Colburn, Michael; Fox, Daniel E.; Westerfelt, Debra Kay

    2011-01-01

    Prospective graduate students select a graduate program as a result of a multifaceted decision-making process. This study examines the selection criteria that part-time MBA students used in selecting a program at a private university. Further, it analyzes the methods by which the students first learned of the MBA program. The authors posed the…

  18. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    International Nuclear Information System (INIS)

    Kim, Minsun; Stewart, Robert D.; Phillips, Mark H.

    2015-01-01

Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3–100 days), tumor lag-time (T_k = 0–10 days), and the size of tumors on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating tumors with T_d less than 10 days, there was no
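The tumor BED being maximized here follows the standard linear-quadratic form with a repopulation correction, BED = n·d·(1 + d/(α/β)) − ln(2)·(T − T_k)/(α·T_d). The sketch below evaluates it for a conventional 30 × 2 Gy schedule against a hypothetical 5 × 8 Gy alternative; all parameter values (α, T_d, treatment pace) are illustrative, not taken from the paper.

```python
import math

def bed(n, d, alpha_beta, alpha=0.3, t_d=3.0, t_k=0.0, days_per_fraction=1.4):
    """Biologically equivalent dose under the linear-quadratic model with
    a repopulation correction (illustrative parameter values):
        BED = n*d*(1 + d/(alpha/beta)) - ln(2)*(T - T_k)/(alpha*T_d)
    where T = overall treatment time and T_k = repopulation lag time."""
    T = n * days_per_fraction
    repop = math.log(2) * max(0.0, T - t_k) / (alpha * t_d) if t_d else 0.0
    return n * d * (1 + d / alpha_beta) - repop

# conventional 30 x 2 Gy vs a hypothetical hypofractionated 5 x 8 Gy,
# for a fast-proliferating tumor (alpha/beta = 10 Gy, T_d = 3 days)
conv = bed(30, 2.0, 10.0)
hypo = bed(5, 8.0, 10.0)
```

Both schedules deliver the same zero-repopulation BED, but the short schedule loses far less to repopulation, which is the mechanism behind the abstract's finding that fast-proliferating tumors (small T_d) favor fewer fractions.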

  19. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D. [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 (United States); Phillips, Mark H. [Departments of Radiation Oncology and Neurological Surgery, University of Washington, Seattle, Washington 98195-6043 (United States)

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T{sub d}), and the size and location of tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D{sub mean} ≤ 45 Gy), lungs (D{sub mean} ≤ 20 Gy), cord (D{sub max} ≤ 45 Gy), esophagus (D{sub max} ≤ 63 Gy), and unspecified tissues (D{sub 05} ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D{sub 95} of tumor BED, as well as the equivalent uniform dose (EUD) for optimized plans to conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T{sub d} (3–100 days), tumor lag-time (T{sub k} = 0–10 days), and the size of tumors on optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D{sub 95} were up to 19%, 21%, 20%, and 19% larger than those from conventional prescription, depending on T{sub d} and T{sub k} used. Tumor EUD was up to 17% larger than the conventional prescription. For fast proliferating

  20. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-07-01

The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle this problem. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM) and a genetic algorithm with a weighted sum approach for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost, maximizing sustainable service and greener product quality level. Finally, a mathematical case of the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a certain contribution by introducing the Tetra ‘S’ concept in both the theoretical and practical research related to multi-objective optimization as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for solving real world

  1. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Directory of Open Access Journals (Sweden)

    Muhammad Hashim

    2017-05-01

Full Text Available Purpose: The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle this problem. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM) and a genetic algorithm with a weighted sum approach for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost, maximizing sustainable service and greener product quality level. Finally, a mathematical case of the textile sector is presented to exemplify the effectiveness of the proposed model with a sensitivity analysis. Findings: This study makes a certain contribution by introducing the Tetra ‘S’ concept in both the theoretical and practical research related to multi-objective optimization as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for

  2. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    International Nuclear Information System (INIS)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-01-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). Challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra “S” (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem and a corresponding model is developed to tackle it. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature, and they are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value measurement (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost and maximizing sustainable service and greener product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model, with a sensitivity analysis. Findings: This study makes a contribution by introducing the Tetra ‘S’ concept in both theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in present research should be helpful for solving real world

  3. The New Multipoint Relays Selection in OLSR using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Razali Ngah

    2012-06-01

    Full Text Available The standard Optimized Link State Routing (OLSR) protocol introduces an interesting concept, the multipoint relays (MPRs), to mitigate message overhead during the flooding process. We propose a new algorithm for MPR selection to enhance the performance of OLSR using Particle Swarm Optimization with Sigmoid Increasing Inertia Weight (PSOSIIW). The sigmoid increasing inertia weight significantly improves the particle swarm optimization (PSO) in terms of simplicity and quick convergence towards the optimum solution. The new fitness function of PSO-SIIW, the packet delay of each node, and the degree of willingness are introduced to support MPR selection in OLSR. We examine the throughput, packet loss and end-to-end delay of the proposed method using network simulator 2 (ns2). Overall results indicate that OLSR-PSOSIIW shows good performance compared to the standard OLSR and OLSR-PSO, particularly for throughput and end-to-end delay. Generally, the proposed OLSR-PSOSIIW shows the advantage of using PSO for optimizing routing paths in the MPR selection algorithm.
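
    The paper's exact SIIW formula and fitness function are not reproduced in the abstract; the following is a minimal, hypothetical sketch of PSO with a sigmoid increasing inertia weight, shown here minimizing a sphere function rather than selecting MPRs (the schedule shape and all constants are illustrative assumptions):

    ```python
    import math
    import random

    def siiw(t, t_max, w_start=0.4, w_end=0.9, steepness=10.0):
        """Sigmoid increasing inertia weight: rises smoothly from w_start to w_end."""
        return w_start + (w_end - w_start) / (1.0 + math.exp(-steepness * (t / t_max - 0.5)))

    def pso_siiw(f, dim=2, n_particles=20, iters=200, c1=1.49445, c2=1.49445, seed=1):
        """Plain PSO whose inertia weight follows the sigmoid schedule above."""
        rng = random.Random(seed)
        pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]
        for t in range(iters):
            w = siiw(t, iters)
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                v = f(pos[i])
                if v < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], v
                    if v < gbest_val:
                        gbest, gbest_val = pos[i][:], v
        return gbest, gbest_val

    best, best_val = pso_siiw(lambda x: sum(c * c for c in x))
    ```

    An increasing inertia weight keeps early iterations exploitative and later ones exploratory, the reverse of the more common decreasing schedule.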

  4. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators with decision-making, specifically defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
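
    The record contrasts STOM with linear scalarization. The toy example below (hypothetical strategy scores, not from the paper) illustrates the key difference: a Chebyshev-type scalarization with an ideal point, which captures the spirit of STOM, can select a Pareto-optimal strategy on a non-convex part of the front that no linear weighting ever selects:

    ```python
    # Candidate defense strategies scored on two objectives to MINIMIZE:
    # (expected damage, operational cost). Strategy B is Pareto-optimal but
    # lies on a non-convex part of the front.
    strategies = {
        "A": (0.0, 10.0),
        "B": (6.0, 6.0),   # balanced strategy on the non-convex region
        "C": (10.0, 0.0),
    }

    def linear_best(weights):
        """Strategy minimizing a weighted sum of the objectives."""
        return min(strategies, key=lambda s: sum(w * f for w, f in zip(weights, strategies[s])))

    def chebyshev_best(weights, ideal=(0.0, 0.0)):
        """Strategy minimizing the weighted Chebyshev distance to an ideal point."""
        return min(strategies,
                   key=lambda s: max(w * (f - z) for w, f, z in zip(weights, strategies[s], ideal)))

    # Sweep linear weights: B is never chosen; the Chebyshev rule finds it.
    linear_choices = {linear_best((w, 1.0 - w)) for w in [i / 20 for i in range(21)]}
    cheby_choice = chebyshev_best((0.5, 0.5))
    ```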

  5. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select the threshold value for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on Big Bang–Big Crunch Optimization (BBBCO) is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, including Genetic Algorithm (GA)-based, Biogeography-based Optimization (BBO)-based and recursive approaches, are also implemented. From the experimental results, it is observed that the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.
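
    As a rough sketch of the objective being maximized, the snippet below computes a fuzzy 2-partition entropy over a histogram, with a simple piecewise-linear membership and a coarse grid search standing in for the BB-BC optimizer (both are illustrative assumptions, not the paper's membership functions or search):

    ```python
    import math

    def s_membership(g, a, c):
        """'Dark' membership: 1 below a, 0 above c, linear ramp in between."""
        if g <= a:
            return 1.0
        if g >= c:
            return 0.0
        return (c - g) / (c - a)

    def fuzzy_2partition_entropy(hist, a, c):
        """Entropy of the fuzzy dark/bright 2-partition of a gray-level histogram."""
        total = sum(hist)
        pd = sum(s_membership(g, a, c) * h for g, h in enumerate(hist)) / total
        pb = 1.0 - pd
        ent = 0.0
        for p in (pd, pb):
            if p > 0:
                ent -= p * math.log(p)
        return ent  # maximal (ln 2) when the two fuzzy classes are equiprobable

    def best_threshold(hist, levels=256):
        """Coarse grid search over (a, c); threshold at the mu = 0.5 crossover."""
        best = (-1.0, None)
        for a in range(0, levels - 1, 8):
            for c in range(a + 8, levels, 8):
                e = fuzzy_2partition_entropy(hist, a, c)
                if e > best[0]:
                    best = (e, (a + c) // 2)
        return best[1]
    ```

    On a bimodal histogram the maximizing parameters place the crossover between the two modes, which is the selected threshold.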

  6. Chiral stationary phase optimized selectivity liquid chromatography: A strategy for the separation of chiral isomers.

    Science.gov (United States)

    Hegade, Ravindra Suryakant; De Beer, Maarten; Lynen, Frederic

    2017-09-15

    Chiral Stationary-Phase Optimized Selectivity Liquid Chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. This approach allows for the prediction of the separation profiles on any possible combination of the chiral stationary phases based on a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and gradient SOSLC approaches were implemented to predict the retention times of a mixture of 4 chiral pairs on all possible combinations of the 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in the isocratic mode required the coupling of 4 columns, whereby relative deviations between the predicted and experimental retention times ranged between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, whereby differences between predictions and experiments ranged between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers. Copyright © 2017 Elsevier B.V. All rights reserved.
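
    The published prediction models are not given in the abstract. The heavily simplified stand-in below illustrates only the combinatorial part of the idea: given hypothetical per-column retention factors for each enantiomer pair, predict retention on equal-length coupled columns as a length-weighted average and pick the combination whose worst-case enantioselectivity is best (all numbers are invented for illustration):

    ```python
    from itertools import combinations

    # Hypothetical retention factors (k1, k2) of three enantiomer pairs on
    # three chiral columns; a value pair like (2.0, 2.0) means co-elution.
    k = {
        "col1": [(1.0, 1.1), (2.0, 2.0), (3.0, 3.6)],
        "col2": [(1.5, 1.5), (2.2, 2.6), (3.1, 3.1)],
        "col3": [(0.9, 1.3), (1.8, 1.9), (2.5, 2.6)],
    }

    def combined_k(cols, pair_idx):
        """Length-weighted average retention factors for equal-length coupled columns."""
        phi = 1.0 / len(cols)
        k1 = sum(phi * k[c][pair_idx][0] for c in cols)
        k2 = sum(phi * k[c][pair_idx][1] for c in cols)
        return k1, k2

    def worst_alpha(cols, n_pairs=3):
        """Smallest enantioselectivity alpha = k_late/k_early over all pairs."""
        alphas = []
        for i in range(n_pairs):
            k1, k2 = combined_k(cols, i)
            alphas.append(max(k1, k2) / min(k1, k2))
        return min(alphas)

    candidates = [c for r in (1, 2, 3) for c in combinations(sorted(k), r)]
    best_combo = max(candidates, key=worst_alpha)
    ```

    With these numbers, every single column co-elutes at least one pair (worst alpha near 1), while a coupled combination resolves all three, which is the effect the SOSLC column-selection step exploits.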

  7. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    National Research Council Canada - National Science Library

    Fu, Michael C; Jin, Xing

    2005-01-01

    .... These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  8. Communication strategies to optimize commitments and investments in iron programming.

    Science.gov (United States)

    Griffiths, Marcia

    2002-04-01

    There is consensus that a communications component is crucial to the success of iron supplementation and fortification programs. However, in many instances, we have not applied what we know about successful advocacy and program communications to iron programs. Communication must play a larger and more central role in iron programs to overcome several common shortcomings and allow the use of new commitments and investments in iron programming to optimum advantage. One shortcoming is that iron program communication has been driven primarily by the supply side of the supply-demand continuum. That is, technical information has been given without thought for what people want to know or do. To overcome this, the communication component, which should be responsive to the consumer perspective, must be considered at program inception, not enlisted late in the program cycle as a remedy when interventions fail to reach their targets. Another shortcoming is the lack of program focus on behavior. Because the "technology" of iron, a supplement, or fortified or specific local food must be combined with appropriate consumer behavior, it is not enough to promote the technology. The appropriate use of technology must be ensured, and this requires precise and strategically crafted communications. A small number of projects from countries as diverse as Indonesia, Egypt, Nicaragua and Peru offer examples of successful communications efforts and strategies for adaptation by other countries.

  9. Artificial intelligence programming with LabVIEW: genetic algorithms for instrumentation control and optimization.

    Science.gov (United States)

    Moore, J H

    1995-06-01

    A genetic algorithm for instrumentation control and optimization was developed using the LabVIEW graphical programming environment. The usefulness of this methodology for the optimization of a closed loop control instrument is demonstrated with minimal complexity, and the programming is presented in detail to facilitate its adaptation to other LabVIEW applications. Closed loop control instruments have a variety of applications in the biomedical sciences, including the regulation of physiological processes such as blood pressure. The program presented here should provide a useful starting point for those wishing to incorporate genetic algorithm approaches into LabVIEW-mediated optimization of closed loop control instruments.
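
    The original is a LabVIEW block diagram; a minimal real-coded genetic algorithm in Python conveys the same loop of selection, crossover and mutation, here tuning a single mock controller gain (the objective function and constants are illustrative assumptions, not the paper's instrument):

    ```python
    import random

    def genetic_minimize(f, bounds, pop_size=30, gens=60, mut_rate=0.2, seed=7):
        """Real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
        rng = random.Random(seed)
        lo, hi = bounds
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(gens):
            def tournament():
                a, b = rng.choice(pop), rng.choice(pop)
                return a if f(a) < f(b) else b
            children = []
            while len(children) < pop_size:
                p1, p2 = tournament(), tournament()
                w = rng.random()
                child = w * p1 + (1 - w) * p2               # blend crossover
                if rng.random() < mut_rate:
                    child += rng.gauss(0, 0.1 * (hi - lo))  # Gaussian mutation
                children.append(min(max(child, lo), hi))    # clip to bounds
            pop = children
        return min(pop, key=f)

    # Tune a single gain so a (mock) setpoint error is minimized.
    gain = genetic_minimize(lambda g: (g - 3.0) ** 2, bounds=(0.0, 10.0))
    ```

    In a real closed-loop setting the fitness function would drive the instrument and score the measured response instead of evaluating an analytic expression.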

  10. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that, by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
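
    Random-forest importance itself requires an ensemble learner; as a compact stand-in, permutation importance around any fitted predictor illustrates the same evaluate-and-select step on synthetic data where only the first feature matters (the model and data here are illustrative, not from the paper):

    ```python
    import random

    def mse(pred, y):
        return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

    def permutation_importance(predict, X, y, seed=0):
        """Increase in MSE when one feature column is shuffled; larger = more important."""
        rng = random.Random(seed)
        base = mse([predict(row) for row in X], y)
        scores = []
        for j in range(len(X[0])):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            scores.append(mse([predict(row) for row in Xp], y) - base)
        return scores

    # Synthetic data: the target depends on feature 0 only; feature 1 is noise.
    rng = random.Random(42)
    X = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(200)]
    y = [3.0 * x0 for x0, _ in X]
    model = lambda row: 3.0 * row[0]   # stands in for a trained forest
    scores = permutation_importance(model, X, y)
    selected = [j for j, s in enumerate(scores) if s > 0.01]
    ```

    Shuffling the irrelevant column leaves the error unchanged, so it is dropped from the selected feature set, mirroring the removal of uncorrelated and redundant candidates described above.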

  11. Optimal load suitability based RAT selection for HSDPA and IEEE 802.11e

    DEFF Research Database (Denmark)

    Prasad, Ramjee; Cabral, O.; Felez, F.J.

    2009-01-01

    are a premium. This paper investigates cooperation between networks based Radio Access Technology (RAT) selection algorithm that uses suitability to optimize the choice between WiFi and High Speed Downlink Packet Access (HSDPA). It has been shown that this approach has the potential to provide gain...... by allocating a user terminal to the most preferred network based on traffic type and network load. Optimal load threshold values that maximise the total QoS throughput for the given interworking scenario are 0.6 and 0.53 for HSDPA and WiFi, respectively. This corresponds to a CRRM gain on throughput of 80...

  12. European advanced driver training programs: Reasons for optimism

    Directory of Open Access Journals (Sweden)

    Simon Washington

    2011-03-01

    This paper reviews the predominant features of, and empirical evidence surrounding, post-licensing advanced driver training programs focused on novice drivers. A clear articulation of differences between the renewed and current US advanced driver training programs is provided. While the individual quantitative evaluations range from marginally to significantly effective in reducing novice driver crash risk, they have been criticized for evaluation deficiencies ranging from small sample sizes to confounding variables to a lack of exposure metrics. Collectively, however, the programs cited in the paper suggest at least a marginally positive effect that needs to be validated with further studies. If additional well-controlled studies can validate these programs, a pilot program in the US should be considered.

  13. Optimal Charging of Electric Drive Vehicles: A Dynamic Programming Approach

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Capion, Karsten Emil; Juul, Nina

    2013-01-01

    , therefore, we propose an ex ante vehicle aggregation approach. We illustrate the results in a Danish case study and find that, although optimal management of the vehicles does not allow for storage and day-to-day flexibility in the electricity system, the market provides incentive for intra-day flexibility....
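
    The record's truncated abstract does not show its model, but the flavor of dynamic-programming charging optimization can be sketched with a minimal DP over (hour, energy bought): buy the required energy at minimum cost given hourly prices and a per-hour charging limit (prices, demand and limits below are made-up numbers):

    ```python
    def cheapest_charge(prices, need, max_per_step):
        """DP over units charged so far: minimal cost to buy `need` units
        given per-hour prices and a per-hour charging limit."""
        INF = float("inf")
        cost = [0.0] + [INF] * need        # cost[s] = min cost to hold s units
        for p in prices:
            nxt = list(cost)               # option: buy nothing this hour
            for s in range(need + 1):
                if cost[s] == INF:
                    continue
                for add in range(1, max_per_step + 1):
                    s2 = min(need, s + add)
                    c = cost[s] + p * add
                    if c < nxt[s2]:
                        nxt[s2] = c
            cost = nxt
        return cost[need]

    # 4 units needed, at most 2 units/hour: charge in the two cheapest hours.
    total = cheapest_charge(prices=[5.0, 1.0, 4.0, 2.0], need=4, max_per_step=2)
    ```

    A fleet-level model like the one in the record would add aggregation across vehicles and driving-demand constraints on top of this per-vehicle recursion.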

  14. A Linear Programming Reformulation of the Standard Quadratic Optimization Problem

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.

    2005-01-01

    The problem of minimizing a quadratic form over the standard simplex is known as the standard quadratic optimization problem (SQO). It is NP-hard, and contains the maximum stable set problem in graphs as a special case. In this note we show that the SQO problem may be reformulated as an (exponentially
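
    The note's LP reformulation is not reproduced here, but the SQO problem and its link to cliques and stable sets (the Motzkin-Straus theorem) can be illustrated with replicator dynamics, a classical local heuristic for optimizing a quadratic form over the simplex (a sketch for intuition, not the paper's method):

    ```python
    def replicator_maximize(A, iters=500):
        """Replicator dynamics for max of x'Ax over the simplex (A nonnegative):
        x_i <- x_i * (Ax)_i / (x'Ax), starting from the barycenter."""
        n = len(A)
        x = [1.0 / n] * n
        for _ in range(iters):
            Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
            xAx = sum(x[i] * Ax[i] for i in range(n))
            if xAx == 0:
                break
            x = [x[i] * Ax[i] / xAx for i in range(n)]
        val = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
        return x, val

    # Motzkin-Straus: for an adjacency matrix A, max x'Ax over the simplex
    # equals 1 - 1/omega(G), where omega is the clique number.
    # Graph: triangle {0,1,2} plus a pendant vertex 3 attached to vertex 2.
    A = [[0, 1, 1, 0],
         [1, 0, 1, 0],
         [1, 1, 0, 1],
         [0, 0, 1, 0]]
    x, val = replicator_maximize(A)   # converges to the triangle, val -> 2/3
    ```

    The mass on the pendant vertex decays to zero and the remaining weight spreads evenly over the maximum clique, recovering 1 - 1/3 = 2/3.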

  15. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
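
    The width-1 special case, a tree, shows the underlying include/exclude dynamic program in a few lines (a textbook sketch, not the INDDGO implementation, which handles general tree decompositions):

    ```python
    def max_weight_independent_set(adj, weight, root=0):
        """DP on a tree: for each vertex compute (best including it, best excluding it)."""
        def solve(v, parent):
            inc, exc = weight[v], 0
            for u in adj[v]:
                if u == parent:
                    continue
                i, e = solve(u, v)
                inc += e             # a chosen vertex forces its children out
                exc += max(i, e)     # otherwise take the better case per child
            return inc, exc
        return max(solve(root, -1))

    # Path 0-1-2-3 with weights 3, 5, 4, 6: the optimum picks vertices {1, 3}.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    w = {0: 3, 1: 5, 2: 4, 3: 6}
    best = max_weight_independent_set(adj, w)
    ```

    On a general tree decomposition the per-bag state grows to one entry per subset of the bag, which is where the exponential dependence on width, and the memory pressure the record discusses, comes from.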

  16. On the non-stationarity of financial time series: impact on optimal portfolio selection

    International Nuclear Information System (INIS)

    Livan, Giacomo; Inoue, Jun-ichi; Scalas, Enrico

    2012-01-01

    We investigate the possible drawbacks of employing the standard Pearson estimator to measure correlation coefficients between financial stocks in the presence of non-stationary behavior, and we provide empirical evidence against the well-established common knowledge that using longer price time series provides better, more accurate, correlation estimates. Then, we investigate the possible consequences of instabilities in empirical correlation coefficient measurements on optimal portfolio selection. We rely on previously published works which provide a framework allowing us to take into account possible risk underestimations due to the non-optimality of the portfolio weights being used in order to distinguish such non-optimality effects from risk underestimations genuinely due to non-stationarities. We interpret such results in terms of instabilities in some spectral properties of portfolio correlation matrices. (paper)

  17. Optimal selection of LQR parameter using AIS for LFC in a multi-area power system

    Directory of Open Access Journals (Sweden)

    Muhammad Abdillah

    2016-12-01

    Full Text Available This paper proposes a method to optimize the parameters of the linear quadratic regulator (LQR) using an artificial immune system (AIS) via clonal selection. The LQR parameters utilized in this paper are the weighting matrices Q and R. The optimal LQR control for load frequency control (LFC) is installed in each area as a decentralized control scheme. The aim of this control design is to improve the dynamic performance of the LFC automatically when an unexpected load change occurs on the power system network. A load demand change of 0.01 p.u., used as a disturbance, is applied to the LFC in Area 1. The proposed method guarantees the stability of the overall closed-loop system. The simulation results show that the proposed method can reduce the overshoot of the system and shorten the time response to steady state, performing better than the trial and error method (TEM) and the case without optimal LQR control.
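
    The abstract does not give the AIS details. The CLONALG-style sketch below tunes two scalar weights against a stand-in quadratic cost; a real implementation would evaluate each candidate (Q, R) by solving the Riccati equation and simulating the closed-loop LFC response, so everything here (cost, ranges, constants) is an illustrative assumption:

    ```python
    import random

    def clonal_selection(cost, dim=2, pop=12, gens=80, clones=4, seed=3):
        """CLONALG-style search: clone good antibodies, hypermutate the
        clones (more strongly for lower-ranked parents), keep the best."""
        rng = random.Random(seed)
        ab = [[rng.uniform(0.0, 10.0) for _ in range(dim)] for _ in range(pop)]
        for _ in range(gens):
            ab.sort(key=cost)
            new = ab[: pop // 2]                   # elite survive unchanged
            for rank, parent in enumerate(ab[: pop // 2]):
                scale = 0.05 + 0.5 * rank / pop    # worse rank -> larger mutation
                for _ in range(clones):
                    new.append([g + rng.gauss(0, scale) for g in parent])
            ab = sorted(new, key=cost)[:pop]
        return ab[0]

    # Mock objective whose optimum sits at Q = 4, R = 1 (a stand-in for an
    # LFC closed-loop performance metric, not a real Riccati-based cost).
    best_qr = clonal_selection(lambda v: (v[0] - 4.0) ** 2 + (v[1] - 1.0) ** 2)
    ```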

  18. Dose optimization based on linear programming implemented in a system for treatment planning in Monte Carlo

    International Nuclear Information System (INIS)

    Ureba, A.; Palma, B. A.; Leal, A.

    2011-01-01

    To develop a more time-efficient optimization method, based on linear programming, designed to implement a multi-objective penalty function and also permitting a simultaneous integrated-boost solution that considers two target volumes simultaneously.

  19. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    Science.gov (United States)

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  20. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of the transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurement on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in hidden layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  1. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  2. A Generalized Measure for the Optimal Portfolio Selection Problem and its Explicit Solution

    Directory of Open Access Journals (Sweden)

    Zinoviy Landsman

    2018-03-01

    Full Text Available In this paper, we offer a novel class of utility functions applied to optimal portfolio selection. This class incorporates as special cases important measures such as the mean-variance, Sharpe ratio, mean-standard deviation and others. We provide an explicit solution to the problem of optimal portfolio selection based on this class. Furthermore, we show that each measure in this class generally reduces to an efficient frontier that coincides with or belongs to the classical mean-variance efficient frontier. In addition, a condition is provided for the existence of a one-to-one correspondence between the parameter of this class of utility functions and the trade-off parameter λ in the mean-variance utility function. This correspondence essentially provides insight into the choice of this parameter. We illustrate our results by taking a portfolio of stocks from the National Association of Securities Dealers Automated Quotation (NASDAQ).
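
    The paper's utility class and its explicit solution are not reproduced in the abstract; for orientation, the classical two-asset mean-variance special case already has a simple closed form, obtained by substituting w1 = 1 - w0 and setting the derivative of the utility to zero (the returns, covariances and λ below are illustrative numbers):

    ```python
    def mean_variance_weights(mu, cov, lam):
        """Two-asset mean-variance utility:  max_w  mu'w - lam * w'Cov w
        subject to w0 + w1 = 1. Substituting w1 = 1 - w0 gives a scalar
        quadratic; its stationary point is the optimum."""
        (s00, s01), (_, s11) = cov
        a = s00 - 2 * s01 + s11          # curvature of the substituted quadratic
        w0 = ((mu[0] - mu[1]) / (2 * lam) + s11 - s01) / a
        return w0, 1.0 - w0

    # Asset 0: 10% expected return, variance 0.04; asset 1: 5%, variance 0.01.
    w = mean_variance_weights(mu=(0.10, 0.05), cov=((0.04, 0.00), (0.00, 0.01)), lam=2.0)
    ```

    Sweeping λ traces out the mean-variance efficient frontier that, per the abstract, the more general measures in the proposed class reduce to or lie on.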

  3. Optimal Channel Selection Based on Online Decision and Offline Learning in Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mu Qiao

    2017-01-01

    Full Text Available We propose a channel selection strategy with a hybrid architecture, which combines the centralized method and the distributed method to alleviate the overhead of the access point and at the same time provide more flexibility in network deployment. With this architecture, we make use of game theory and reinforcement learning to fulfill optimal channel selection under different communication scenarios. Particularly, when the network can satisfy the requirements of energy and computational costs, the online decision algorithm based on a noncooperative game can help each individual sensor node immediately select the optimal channel. Alternatively, when the network cannot satisfy the requirements of energy and computational costs, the offline learning algorithm based on reinforcement learning can help each individual sensor node learn from its experience and iteratively adjust its behavior toward the expected target. Extensive simulation results validate the effectiveness of our proposal and also prove that higher system throughput can be achieved by our channel selection strategy than by conventional off-policy channel selection approaches.
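
    A minimal sketch of the learning half of such a scheme: an ε-greedy agent tracks an exponential average of transmission success per channel and converges on the best one (success probabilities and constants are made up; the paper's actual algorithm and game formulation are richer):

    ```python
    import random

    def learn_channel(success_prob, episodes=3000, eps=0.1, alpha=0.1, seed=5):
        """Epsilon-greedy reinforcement learning of per-channel quality:
        q[c] tracks an exponential average of observed transmission success."""
        rng = random.Random(seed)
        q = [0.0] * len(success_prob)
        for _ in range(episodes):
            if rng.random() < eps:
                c = rng.randrange(len(q))                    # explore
            else:
                c = max(range(len(q)), key=q.__getitem__)    # exploit
            reward = 1.0 if rng.random() < success_prob[c] else 0.0
            q[c] += alpha * (reward - q[c])
        return q

    # Channel 1 succeeds 90% of the time; the node should learn to prefer it.
    q = learn_channel([0.2, 0.9, 0.5])
    best_channel = max(range(3), key=q.__getitem__)
    ```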

  4. MULTI-CRITERIA PROGRAMMING METHODS AND PRODUCTION PLAN OPTIMIZATION PROBLEM SOLVING IN METAL INDUSTRY

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-09-01

    Full Text Available This paper presents production plan optimization in the metal industry considered as a multi-criteria programming problem. We first provide the definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. Then we apply two multi-criteria programming methods (the STEM method and the PROMETHEE method) to solve a multi-criteria production plan optimization problem in a company from the metal industry. The obtained results indicate the high efficiency of the applied methods in solving the problem.

  5. Optimization of programming parameters in children with the advanced bionics cochlear implant.

    Science.gov (United States)

    Baudhuin, Jacquelyn; Cadieux, Jamie; Firszt, Jill B; Reeder, Ruth M; Maxson, Jerrica L

    2012-05-01

    -to-noise ratio). Outcomes were analyzed using a paired t-test and a mixed-model repeated measures analysis of variance (ANOVA). T-levels set 10 CUs below "soft" resulted in significantly lower detection thresholds for all six Ling sounds and FM tones at 250, 1000, 3000, 4000, and 6000 Hz. When comparing programs differing by IDR and sensitivity, a 50 dB IDR with a 0 sensitivity setting showed significantly poorer thresholds for low frequency FM tones and voiced Ling sounds. Analysis of group mean scores for CNC words in quiet or HINT-C sentences in noise indicated no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise; the optimal program differed across participants. In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selection of programming parameters in both young and older children. American Academy of Audiology.

  6. Multiobjective optimization in Gene Expression Programming for Dew Point

    OpenAIRE

    Shroff, Siddharth; Dabhi, Vipul

    2013-01-01

    The processes occurring in climatic change evolution and their variations play a major role in environmental engineering. Different techniques are used to model the relationship between temperatures, dew point and relative humidity. Gene expression programming is capable of modelling complex realities with great accuracy, allowing, at the same time, the extraction of knowledge from the evolved models compared to other learning algorithms. This research aims to use Gene Expression Programming ...

  7. Automated design and optimization of flexible booster autopilots via linear programming. Volume 2: User's manual

    Science.gov (United States)

    Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.

    1972-01-01

    COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.

  8. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the perturbation effect on the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied and the critical point characterisation of the convex program, associated with the optimum of a multiresponse surface, is also analysed. Finally, the asymptotic normality of the optimal solution is derived by the standard methods.

  9. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    Science.gov (United States)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents Evolutionary Programming (EP), which is proposed to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of a hidden failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate and the numbers of neurons in the first and second hidden layers, are selected by EP-ANN. The IEEE 14-bus system has been tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).

  10. Analysis and optimization with ecological objective function of irreversible single resonance energy selective electron heat engines

    International Nuclear Information System (INIS)

    Zhou, Junle; Chen, Lingen; Ding, Zemin; Sun, Fengrui

    2016-01-01

    An ecological performance analysis of a single resonance ESE heat engine with heat leakage is conducted by applying finite-time thermodynamics. By introducing the Nielsen function and numerical calculations, expressions for the power output, efficiency, entropy generation rate and ecological objective function are derived; the relationships between the ecological objective function and power output, between the ecological objective function and efficiency, and between power output and efficiency are demonstrated; the influences of the system parameters of heat leakage, boundary energy and resonance width on the optimal performance are investigated in detail; and a specific range of boundary energy is given as a compromise to make the ESE heat engine system work in optimal operation regions. Comparing performance characteristics under different optimization objective functions clarifies the significance of selecting the ecological objective function as the design objective: when changing the design objective from maximum power output to maximum ecological objective function, the improvement in efficiency is 4.56%, while the drop in power output is only 2.68%; when changing the design objective from maximum efficiency to maximum ecological objective function, the improvement in power output is 229.13%, and the efficiency drop is only 13.53%. - Highlights: • An irreversible single resonance energy selective electron heat engine is studied. • Heat leakage between two reservoirs is considered. • Power output, efficiency and ecological objective function are derived. • Optimal performance comparison for three objective functions is carried out.

  11. A Sequential Convex Semidefinite Programming Algorithm for Multiple-Load Free Material Optimization

    Czech Academy of Sciences Publication Activity Database

    Stingl, M.; Kočvara, Michal; Leugering, G.

    2009-01-01

    Roč. 20, č. 1 (2009), s. 130-155 ISSN 1052-6234 R&D Projects: GA AV ČR IAA1075402 Grant - others:commision EU(XE) EU-FP6-30717 Institutional research plan: CEZ:AV0Z10750506 Keywords : structural optimization * material optimization * semidefinite programming * sequential convex programming Subject RIV: BA - General Mathematics Impact factor: 1.429, year: 2009

  12. Relationship between Maximum Principle and Dynamic Programming for Stochastic Recursive Optimal Control Problems and Applications

    Directory of Open Access Journals (Sweden)

    Jingtao Shi

    2013-01-01

    This paper is concerned with the relationship between the maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in financial engineering is discussed as an explicitly illustrated example of the main result.

  13. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    Science.gov (United States)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, for a specific launcher-type mission profile, a 40% mass saving is expected when using a battery/supercapacitor active hybridization instead of a single-battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.

  14. Dynamic programming for optimization of timber production and grazing in ponderosa pine

    Science.gov (United States)

    Kurt H. Riitters; J. Douglas Brodie; David W. Hann

    1982-01-01

    Dynamic programming procedures are presented for optimizing thinning and rotation of even-aged ponderosa pine by using the four descriptors: age, basal area, number of trees, and time since thinning. Because both timber yield and grazing yield are functions of stand density, the two outputs, forage and timber, can both be optimized. The soil expectation values for single...
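
    A minimal backward-recursion sketch of this kind of thinning/rotation problem (a toy state space with made-up yield numbers, not the paper's four-descriptor model):

    ```python
    from functools import lru_cache

    # Toy stand model (hypothetical numbers): density levels 0..4; timber value
    # grows with the amount thinned, forage value falls with standing density.
    T = 10       # planning periods
    MAX_D = 4    # maximum density level

    def reward(density, harvest):
        timber = 3.0 * harvest              # value of thinned timber
        forage = 1.0 * (MAX_D - density)    # grazing yield on open stands
        return timber + forage

    @lru_cache(maxsize=None)
    def value(t, density):
        if t == T:
            return 3.0 * density            # final harvest at rotation end
        best = float("-inf")
        for harvest in range(density + 1):
            nxt = min(density - harvest + 1, MAX_D)   # regrowth of one level
            best = max(best, reward(density, harvest) + value(t + 1, nxt))
        return best

    print(value(0, 2))
    ```

    The recursion optimizes the thinning schedule and the forage stream jointly, which is the point the abstract makes: because both outputs depend on stand density, one value function covers both.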

  15. Stochastic optimization in insurance a dynamic programming approach

    CERN Document Server

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach was widely used in control problems related to mathematical finance but until quite recently it was not used to solve control problems related to actuarial mathematical science. This book is designed to familiarize the reader on how to use this approach. The intended audience is graduate students as well as researchers in this area.

  16. Adaptive Decision Making Using Probabilistic Programming and Stochastic Optimization

    Science.gov (United States)

    2018-01-01

    …real-world optimization problems. To simplify the setting, we further assume that the demands are discrete, taking on values d1, . . . , dk with probabilities (conditional on x) (pθ)i ≡ p… R. Tyrrell Rockafellar, Implicit Functions and Solution Mappings, Springer Monographs in Mathematics, 2009. Anthony V. Fiacco and Yo Ishizuka, Sensitivity and stability…

  17. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    Science.gov (United States)

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial on, the optimization techniques used in several key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to mathematical optimization tools and wish to apply them to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and an overview of some of its techniques that could be helpful in WSN design problems. In the second part, we present a number of design aspects from the WSN literature in which mathematical optimization methods have been used. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and the experimental procedures in some of these papers. The analyses and assessments, provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.

  18. Integrated Method for Optimizing Connection Layout and Cable Selection for an Internal Network of a Wind Farm

    Directory of Open Access Journals (Sweden)

    Andrzej Wędzik

    2015-09-01

    An internal network of a wind farm resembles a wide-area network structure. Wind turbines are deployed over a vast area, and the cable lines used to interconnect them may reach lengths of tens of kilometres. The cost of constructing such a network is a major component of the entire investment, so it is advisable, even at the design stage, to develop a configuration of the farm's internal connections that minimises cost while complying with technical requirements. So far this has usually been done in two independent processes: first, the network structure ensuring the shortest possible connections between the turbines is determined; then, cables compliant with technical regulations are selected for the specified structure. But does this design approach ensure the optimal (lowest) investment cost? This paper gives an answer to that question. A method for accomplishing the task given in the title is presented. Example calculations are presented, and results are compared for the two methods of optimal wind farm internal connection structure design and cable cross-section dimensioning: two-stage and integrated. The usefulness of the Mixed Integer Nonlinear Programming (MINLP) method in determining the optimal structure of a wind farm's cable network is demonstrated.

  19. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  20. Development of a marker assisted selection program for cacao.

    Science.gov (United States)

    Schnell, R J; Kuhn, D N; Brown, J S; Olano, C T; Phillips-Mora, W; Amores, F M; Motamayor, J C

    2007-12-01

    Production of cacao in tropical America has been severely affected by fungal pathogens causing the diseases known as witches' broom (WB, caused by Moniliophthora perniciosa), frosty pod (FP, caused by M. roreri), and black pod (BP, caused by Phytophthora spp.). BP is pan-tropical and causes losses in all producing areas. WB is found in South America and parts of the Caribbean, while FP is found in Central America and parts of South America. Together, these diseases were responsible for over 700 million US dollars in losses in 2001 (4). Commercial cacao production in West Africa and South Asia is not yet affected by WB and FP, but cacao grown in these regions is susceptible to both. With the goal of providing new disease-resistant cultivars, the USDA-ARS and Mars, Inc. have developed a marker assisted selection (MAS) program. Quantitative trait loci have been identified for resistance to WB, FP, and BP. The potential usefulness of these markers in identifying resistant individuals has been confirmed in an experimental F1 family in Ecuador.

  1. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly affects the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM), to improve the training efficiency of the SVM model. An experiment classifying AVIRIS data from the Indian Pines site, USA, was then performed to test the novel CSSVM against a traditional SVM classifier using grid-search cross-validation (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA), and Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. It is demonstrated that the OA of CSSVM on the test samples and the whole image are 85.1% and 81.58%, with differences from GSSVM within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with differences from GSSVM within 0.001; and the model training time of CSSVM is between 1/6 and 1/10 that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
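
    The clonal selection search itself can be sketched as follows. This is a CLONALG-style toy, not the paper's implementation: `cv_accuracy` is a synthetic stand-in for the cross-validated SVM accuracy the paper optimizes, and the two coordinates stand for the kernel and margin parameters searched in log space.

    ```python
    import math
    import random

    random.seed(7)

    def cv_accuracy(log2C, log2g):
        # Hypothetical stand-in for k-fold cross-validation accuracy of an SVM;
        # the real CSSVM would train on the hyperspectral samples here.
        return math.exp(-((log2C - 5) ** 2 + (log2g + 3) ** 2) / 50.0)

    def clonal_selection(pop_size=10, clones=5, generations=40, bounds=(-10, 10)):
        lo, hi = bounds
        pop = [(random.uniform(lo, hi), random.uniform(lo, hi))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ab: -cv_accuracy(*ab))
            next_pop = []
            for rank, (a, b) in enumerate(pop):
                # better antibodies get more clones, worse ones mutate harder
                n_clones = max(1, clones - rank)
                step = 0.5 * (rank + 1)
                mutants = [(min(hi, max(lo, a + random.gauss(0, step))),
                            min(hi, max(lo, b + random.gauss(0, step))))
                           for _ in range(n_clones)]
                # affinity maturation: keep the best of the parent and its clones
                next_pop.append(max(mutants + [(a, b)],
                                    key=lambda ab: cv_accuracy(*ab)))
            pop = next_pop
        return max(pop, key=lambda ab: cv_accuracy(*ab))

    best = clonal_selection()
    ```

    Because each lineage keeps the best of parent and clones, the search never loses its incumbent, which is what makes it competitive with exhaustive grid search at a fraction of the evaluations.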

  2. Optimal Electrode Selection for Electrical Resistance Tomography in Carbon Fiber Reinforced Polymer Composites

    Science.gov (United States)

    Escalona Galvis, Luis Waldo; Diaz-Montiel, Paulina; Venkataraman, Satchi

    2017-01-01

    Electrical Resistance Tomography (ERT) offers a non-destructive evaluation (NDE) technique that takes advantage of the inherent electrical properties in carbon fiber reinforced polymer (CFRP) composites for internal damage characterization. This paper investigates a method of optimum selection of sensing configurations for delamination detection in thick cross-ply laminates using ERT. Reduction in the number of sensing locations and measurements is necessary to minimize hardware and computational effort. The present work explores the use of an effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations resulting from selecting sensing electrode pairs. Singular Value Decomposition (SVD) is applied to obtain a spectral representation of the resistance measurements in the laminate for subsequent EI based reduction to take place. The electrical potential field in a CFRP laminate is calculated using finite element analysis (FEA) applied on models for two different laminate layouts considering a set of specified delamination sizes and locations with two different sensing arrangements. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using the full set and the reduced set of resistance measurements. This investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERT based damage detection. PMID:28772485

  3. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    Science.gov (United States)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to elaborating a methodology for appropriately selecting space-planning and structural solutions for low-rise buildings. The objective of the study is to work out a system of criteria influencing the selection of the space-planning and structural solutions most suitable for low-rise buildings and structures. Applying the defined criteria in practice aims to enhance the efficiency of capital investments, save energy and resources, and create comfortable conditions for the population, taking into account the climatic zoning of the construction site. The project's developments can be applied when implementing investment-construction projects of low-rise housing in different kinds of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions of low-rise buildings has been developed. A methodological basis has also been elaborated to assess the optimal selection of space-planning and structural solutions of low-rise buildings satisfying the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify low-rise construction development for different types of territories, taking into account the climatic zoning of the construction site. Stimulation of low-rise construction should be based on a system of scientifically justified approaches, thus enhancing the energy efficiency, comfort, safety, and economic effectiveness of low-rise buildings.

  4. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    Science.gov (United States)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

    The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear minimal Bergman model. Optimal control theory is then applied to formulate the problem of determining the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level stays in the normal range over a specified time horizon. The optimization problem is solved using dynamic programming. The results show that dynamic programming is quite reliable in representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
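
    The dynamic-programming idea can be illustrated with a heavily simplified, discretized stand-in. The linear toy dynamics, grid, costs, and target below are all made up for illustration; they are not the non-linear Bergman model.

    ```python
    # Toy discretized stand-in for the insulin-dosing control problem:
    # state = glucose deviation g (mg/dL above basal) on a coarse grid,
    # control = insulin dose level u. Numbers are illustrative only.
    G = range(0, 201, 10)      # glucose deviation grid
    U = range(0, 5)            # insulin dose levels
    T = 12                     # decision steps

    def step(g, u):
        nxt = g + 10 - 25 * u  # drift upward, insulin pulls glucose down
        return min(200, max(0, 10 * round(nxt / 10)))

    def cost(g, u):
        return (g - 50) ** 2 + 100 * u   # track a 50 mg/dL target, penalize dosing

    # backward induction over the horizon
    V = {g: 0.0 for g in G}              # terminal cost
    policy = {}
    for t in reversed(range(T)):
        newV, pol = {}, {}
        for g in G:
            best_u = min(U, key=lambda u: cost(g, u) + V[step(g, u)])
            pol[g] = best_u
            newV[g] = cost(g, best_u) + V[step(g, best_u)]
        V, policy = newV, pol

    # simulate from a hyperglycemic start, reusing the first-stage
    # policy at every step for illustration
    g, doses = 150, []
    for t in range(T):
        u = policy[g]
        doses.append(u)
        g = step(g, u)
    print(doses, g)
    ```

    Backward induction tabulates the cost-to-go V for every grid state, so the forward simulation only has to look up the precomputed dose at each step.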

  5. Optimization of a genomic breeding program for a moderately sized dairy cattle population.

    Science.gov (United States)

    Reiner-Benaim, A; Ezra, E; Weller, J I

    2017-04-01

    Although it is now standard practice to genotype thousands of female calves, genotyping of bull calves is generally limited to progeny of elite cows. In addition to genotyping costs, increasing the pool of candidate sires requires purchase, isolation, and identification of calves until selection decisions are made. We economically optimized, via simulation, a genomic breeding program for a population of approximately 120,000 milk-recorded cows, corresponding to the Israeli Holstein population. All 30,000 heifers and 60,000 older cows of parities 1 to 3 were potential bull dams. Animals were assumed to have genetic evaluations for a trait with heritability of 0.25 derived by an animal model evaluation of the population. Only bull calves were assumed to be genotyped. A pseudo-phenotype corresponding to each animal's genetic evaluation was generated, consisting of the animal's genetic value plus a residual with variance set to obtain the assumed reliability for each group of animals. Between 4 and 15 bulls and between 200 and 27,000 cows with the highest pseudo-phenotypes were selected as candidate bull parents. For all progeny of the founder animals, genetic values were simulated as the mean of the parental values plus a Mendelian sampling effect with variance of 0.5. A probability of 0.3 for a healthy bull calf per mating and a genomic reliability of 0.43 were assumed. The 40 bull calves with the highest genomic evaluations were selected for general service for 1 yr. Costs included genotyping of candidate bulls and their dams, purchase of the calves from the farmers, and identification. Costs of raising culled calves were partially recovered by resale for beef. Annual costs were estimated as $10,922 + $305 × the number of candidate bulls. Nominal profit per cow per genetic standard deviation was $106. The economic optimum, with a discount rate of 5%, first returns after 4 yr, and a profit horizon of 15 yr, was obtained by genotyping 1,620 to 1,750 calves for all numbers of bull sires

  6. Mortgage Loan Portfolio Optimization Using Multi-Stage Stochastic Programming

    DEFF Research Database (Denmark)

    Rasmussen, Kourosh Marjani; Clausen, Jens

    2007-01-01

    We consider the dynamics of the Danish mortgage loan system and propose several models to reflect the choices of a mortgagor as well as his attitude towards risk. The models are formulated as multi-stage stochastic integer programs, which are difficult to solve for more than 10 stages. Scenario...

  7. An algebraic programming style for numerical software and its optimization

    NARCIS (Netherlands)

    T.B. Dinesh; M. Haveraaen; J. Heering (Jan)

    1998-01-01

    The abstract mathematical theory of partial differential equations (PDEs) is formulated in terms of manifolds, scalar fields, tensors, and the like, but these algebraic structures are hardly recognizable in actual PDE solvers. The general aim of the Sophus programming style is to

  8. Optimization of selective inversion recovery magnetization transfer imaging for macromolecular content mapping in the human brain.

    Science.gov (United States)

    Dortch, Richard D; Bagnato, Francesca; Gochberg, Daniel F; Gore, John C; Smith, Seth A

    2018-03-24

    To optimize a selective inversion recovery (SIR) sequence for macromolecular content mapping in the human brain at 3.0T. SIR is a quantitative magnetization transfer (qMT) method that uses a low-power, on-resonance inversion pulse. This results in a biexponential recovery of the free water signal that can be sampled at various inversion/predelay times (tI/tD) to estimate a subset of qMT parameters, including the macromolecular-to-free pool-size ratio (PSR), the R1 of free water (R1f), and the rate of MT exchange (kmf). The adoption of SIR has been limited by long acquisition times (≈4 min/slice). Here, we use Cramér-Rao lower bound theory and data reduction strategies to select optimal tI/tD combinations to reduce imaging times. The schemes were experimentally validated in phantoms, and tested in healthy volunteers (N = 4) and a multiple sclerosis patient. Two optimal sampling schemes were determined: (i) a 5-point scheme (kmf estimated) and (ii) a 4-point scheme (kmf assumed). In phantoms, the 5/4-point schemes yielded parameter estimates with similar SNRs as our previous 16-point scheme, but with 4.1/6.1-fold shorter scan times. Pair-wise comparisons between schemes did not detect significant differences for any scheme/parameter. In humans, parameter values were consistent with published values, and similar levels of precision were obtained from all schemes. Furthermore, fixing kmf reduced the sensitivity of PSR to partial-volume averaging, yielding more consistent estimates throughout the brain. qMT parameters can be robustly estimated in ≤1 min/slice (without independent measures of ΔB0, B1+, and T1) when optimized tI/tD combinations are selected. © 2018 International Society for Magnetic Resonance in Medicine.

  9. Switches in Genomic GC Content Drive Shifts of Optimal Codons under Sustained Selection on Synonymous Sites

    Science.gov (United States)

    Sun, Yu; Tamarit, Daniel

    2017-01-01

    The major codon preference model suggests that codons read by tRNAs in high concentrations are preferentially utilized in highly expressed genes. However, the identity of the optimal codons differs between species although the forces driving such changes are poorly understood. We suggest that these questions can be tackled by placing codon usage studies in a phylogenetic framework and that bacterial genomes with extreme nucleotide composition biases provide informative model systems. Switches in the background substitution biases from GC to AT have occurred in Gardnerella vaginalis (GC = 32%), and from AT to GC in Lactobacillus delbrueckii (GC = 62%) and Lactobacillus fermentum (GC = 63%). We show that despite the large effects on codon usage patterns by these switches, all three species evolve under selection on synonymous sites. In G. vaginalis, the dramatic codon frequency changes coincide with shifts of optimal codons. In contrast, the optimal codons have not shifted in the two Lactobacillus genomes despite an increased fraction of GC-ending codons. We suggest that all three species are in different phases of an on-going shift of optimal codons, and attribute the difference to a stronger background substitution bias and/or longer time since the switch in G. vaginalis. We show that comparative and correlative methods for optimal codon identification yield conflicting results for genomes in flux and discuss possible reasons for the mispredictions. We conclude that switches in the direction of the background substitution biases can drive major shifts in codon preference patterns even under sustained selection on synonymous codon sites. PMID:27540085

  10. A note on “An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems”

    OpenAIRE

    R. Venkata Rao

    2012-01-01

    A paper published by Maniya and Bhatt (2011) (An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems, Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method named as “Preference Selection Index (PSI) method” for selection of an optimal facility layout design. The authors had claimed that the method was logical and more appropriate and the method gives directly the o...

  11. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in detailed design stage and manufacturing stage. Findings: The resulted model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from the suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow the vagueness in final assembly by grouping the assembly into several grades according to the resulted assembly tolerance.

  13. De Novo generation of molecular structures using optimization to select graphs on a given lattice

    DEFF Research Database (Denmark)

    Bywater, R.P.; Poulsen, Thomas Agersten; Røgen, Peter

    2004-01-01

    A recurrent problem in organic chemistry is the generation of new molecular structures that conform to some predetermined set of structural constraints imposed in an endeavor to build certain required properties into the newly generated structure. An example of this is the pharmacophore model, used in medicinal chemistry to guide de novo design or selection of suitable structures from compound databases. We propose here a method that efficiently links up a selected number of required atom positions while at the same time directing the emergent molecular skeleton to avoid forbidden positions. The linkage process takes place on a lattice whose unit step length and overall geometry are designed to match typical architectures of organic molecules. We use an optimization method to select from the many different graphs possible. The approach is demonstrated in an example where crystal...

  14. Optimization of axial enrichment and gadolinia distributions for BWR fuel under control rod programming, (2)

    International Nuclear Information System (INIS)

    Hida, Kazuki; Yoshioka, Ritsuo

    1992-01-01

    A method has been developed for optimizing the axial enrichment and gadolinia distributions of reload BWR fuel under control rod programming. The problem was to minimize the enrichment requirement subject to criticality and axial power peaking constraints. The optimization technique was based on the successive linear programming method, with each linear programming problem solved by a goal programming algorithm. A rapid and practically accurate core neutronics model, named the modified one-dimensional core model, was developed to describe the batch-averaged burnup behavior of the reload fuel. A core burnup simulation algorithm, employing a burnup-power-void iteration, was also developed to calculate the rigorous equilibrium cycle performance. This method was applied to the optimization of axial two- and 24-region fuels for demonstrative purposes. The optimal solutions for both fuels have proved the optimality of what is called burnup-shape-optimization spectral shift. For the two-region fuel with a practical power peaking of 1.4, the enrichment distribution was nearly uniform, because a bottom-peaked burnup shape flattens the axial power shape. Optimization of the 24-region fuel has shown a potential improvement in BWR fuel cycle economics, which will guide future advancement in BWR fuel designs. (author)

  15. Development of selective photoionization spectroscopy technology - Development of a computer program to calculate selective ionization of atoms with multistep processes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Soon; Nam, Baek Il [Myongji University, Seoul (Korea, Republic of)

    1995-08-01

    We have developed computer programs to calculate 2- and 3-step selective resonant multiphoton ionization of atoms. Autoionization resonances in the final continuum can be taken into account via the B-spline basis set method. 8 refs., 5 figs. (author)

  16. ADAM: A computer program to simulate selective-breeding schemes for animals

    DEFF Research Database (Denmark)

    Pedersen, L D; Sørensen, A C; Henryon, M

    2009-01-01

    ADAM is a computer program that models selective breeding schemes for animals using stochastic simulation. The program simulates a population of animals and traces the genetic changes in the population under different selective breeding scenarios. It caters to different population structures, genetic models, selection strategies, and mating designs. ADAM can be used to evaluate breeding schemes and generate genetic data to test statistical tools.
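
    A minimal stochastic-simulation sketch in the spirit of such a program (not ADAM itself): discrete generations, truncation selection on phenotype, and an infinitesimal genetic model. All parameter values are illustrative assumptions.

    ```python
    import random
    import statistics

    random.seed(42)

    H2 = 0.25          # heritability of the trait
    N = 500            # animals per generation
    N_SELECTED = 50    # parents retained each generation

    def breed(generations=10):
        # environmental SD chosen so that h^2 = 0.25 with additive variance 1
        env_sd = ((1 - H2) / H2) ** 0.5
        bvs = [random.gauss(0, 1) for _ in range(N)]       # founder breeding values
        means = [statistics.mean(bvs)]
        for _ in range(generations):
            # rank animals on phenotype = breeding value + environmental deviation
            ranked = sorted(bvs, key=lambda bv: -(bv + random.gauss(0, env_sd)))
            parents = ranked[:N_SELECTED]
            # offspring: mid-parent value + Mendelian sampling term (variance 0.5)
            bvs = [(random.choice(parents) + random.choice(parents)) / 2
                   + random.gauss(0, 0.5 ** 0.5) for _ in range(N)]
            means.append(statistics.mean(bvs))
        return means

    gain = breed()
    print(round(gain[-1] - gain[0], 2))
    ```

    Tracing the mean breeding value per generation is the kind of output such simulators produce for comparing alternative selection strategies.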

  17. Pythran: enabling static optimization of scientific Python programs

    Science.gov (United States)

    Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan

    2015-01-01

    Pythran is an open source static compiler that turns modules written in a subset of Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
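
    A typical Pythran workflow annotates a plain Python function with an export comment and compiles the module ahead of time. The module and function names below are illustrative:

    ```python
    # pythran export dot(float list, float list)
    def dot(xs, ys):
        """Plain Python; the export comment above lets Pythran compile it natively."""
        s = 0.0
        for x, y in zip(xs, ys):
            s += x * y
        return s
    ```

    Compiling with `pythran dot_module.py` (assuming that file name) produces a native extension module, while the very same file keeps working unchanged under the standard Python interpreter, which is the compatibility property the abstract contrasts with the Cython approach.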

  18. Signal Timing Optimization Based on Fuzzy Compromise Programming for Isolated Signalized Intersection

    Directory of Open Access Journals (Sweden)

    Dexin Yu

    2016-01-01

    In order to optimize the signal timing for an isolated intersection, a new method based on a fuzzy programming approach is proposed in this paper. Considering the overall operating efficiency of the intersection, traffic capacity, vehicle cycle delay, cycle stops, and exhaust emissions are first chosen as optimization goals to establish a multiobjective function. A fuzzy compromise programming approach is then employed to assign different weight coefficients to the various optimization objectives under different traffic flow ratio states, and the multiobjective function is converted into a single-objective function. Using a genetic algorithm, the optimized signal cycle and effective green time can be obtained. Finally, the performance of the traditional method and the new method proposed in this paper is compared and analyzed with VISSIM software. It can be concluded that the signal timing optimized in this paper effectively reduces vehicle delays and stops and improves the traffic capacity of the intersection.
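
    The weighted single-objective construction and the GA search can be sketched for a two-phase toy intersection. The delay/stops terms are simplified Webster-style stand-ins and the weights are arbitrary placeholders for the fuzzy-compromise coefficients derived in the paper:

    ```python
    import random

    random.seed(3)

    # Illustrative two-phase intersection: flow ratios Y1, Y2, lost time L (s).
    Y1, Y2, L = 0.3, 0.25, 10.0
    W_DELAY, W_STOPS = 0.7, 0.3   # stand-in compromise weights

    def objective(cycle, split):
        g1, g2 = split * (cycle - L), (1 - split) * (cycle - L)
        if g1 <= 0 or g2 <= 0 or Y1 * cycle >= g1 or Y2 * cycle >= g2:
            return float("inf")                  # infeasible: demand exceeds green
        delay = sum(0.5 * cycle * (1 - g / cycle) ** 2 / (1 - y)
                    for g, y in ((g1, Y1), (g2, Y2)))
        stops = sum((1 - g / cycle) / (1 - y) for g, y in ((g1, Y1), (g2, Y2)))
        return W_DELAY * delay + W_STOPS * stops  # weighted single objective

    def genetic_search(pop=30, gens=60):
        population = [(random.uniform(40, 120), random.uniform(0.2, 0.8))
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=lambda cs: objective(*cs))
            parents = population[:pop // 2]       # elitist truncation selection
            children = []
            while len(children) < pop - len(parents):
                (c1, s1), (c2, s2) = random.sample(parents, 2)
                # arithmetic crossover plus Gaussian mutation, clamped to bounds
                c = min(120, max(40, (c1 + c2) / 2 + random.gauss(0, 2)))
                s = min(0.8, max(0.2, (s1 + s2) / 2 + random.gauss(0, 0.02)))
                children.append((c, s))
            population = parents + children
        return min(population, key=lambda cs: objective(*cs))

    cycle, split = genetic_search()
    print(cycle, split)
    ```

    The GA evolves the cycle length and green split jointly; the effective green times then follow from the split and the lost time.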

  19. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
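
    The progressive-sampling idea can be sketched as follows. The scoring function is synthetic (a hidden "true" error rate plus noise that shrinks with sample size), and the Bayesian surrogate that guides the real method is omitted; only the sample-growing elimination loop is shown:

```python
import random

random.seed(1)

# Sixteen hypothetical model configurations, each with an unknown "true"
# error rate; evaluating on a finite sample adds noise that shrinks as the
# sample grows. Both are synthetic stand-ins for training real models.
true_error = {"cfg%d" % i: random.uniform(0.1, 0.4) for i in range(16)}

def evaluate(cfg, n_samples):
    noise = random.gauss(0.0, 1.0 / n_samples ** 0.5)
    return true_error[cfg] + noise

# Progressive sampling: score all surviving configurations on the current
# sample size, discard the worse half, double the sample, repeat.
def progressive_selection(configs, start=50):
    n = start
    while len(configs) > 1:
        scores = {c: evaluate(c, n) for c in configs}
        configs = sorted(configs, key=scores.get)[:len(configs) // 2]
        n *= 2
    return configs[0]

winner = progressive_selection(list(true_error))
```

    Cheap, noisy evaluations eliminate most candidates early, so the expensive large-sample evaluations are spent on only the few promising ones.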

  20. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  1. The stock selection problem: Is the stock selection approach more important than the optimization method? Evidence from the Danish stock market

    OpenAIRE

    Grobys, Klaus

    2011-01-01

    Passive investment strategies basically aim to replicate an underlying benchmark. Thereby, the management usually selects a subset of stocks being employed in the optimization procedure. Apart from the optimization procedure, the stock selection approach determines the stock portfolios' out-of-sample performance. The empirical study here takes into account the Danish stock market from 2000-2010 and gives evidence that stock portfolios including small companies' stocks being estimated via coin...

  2. An Approximate Dynamic Programming Model for Optimal MEDEVAC Dispatching

    Science.gov (United States)

    2015-03-26

    The ADP policy shows improvement over the myopic policy, indicating that it efficiently manages resources by not immediately sending the nearest available MEDEVAC unit. This thesis, presented to the Faculty of the Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, models medical evacuation (MEDEVAC) dispatch policies as a Markov decision process (MDP); to solve the MDP, an approximate dynamic programming (ADP) technique is applied.

  3. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons using single indicators (parameters which describe safety or unsafety) can end up with totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more descriptive indicators are constantly being developed. Among the wide variety of models and composite indexes that have been developed, a decision maker faces an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models and have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models, based on DEA and TOPSIS, to present the PROMETHEE-RS model for selecting the optimal composite index method. The selection method is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
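
    The TOPSIS side of such a comparison can be sketched in a few lines; the indicator values for the three hypothetical units below are invented:

```python
import math

# Hypothetical indicator matrix: rows are compared units (e.g. police
# departments), columns are "smaller is better" risk indicators.
units = {"A": [4.0, 2.1, 7.0], "B": [3.5, 3.0, 5.5], "C": [5.2, 1.8, 6.1]}

def topsis(data, benefit=(False, False, False)):
    names = list(data)
    rows = [data[n] for n in names]
    ncols = len(rows[0])
    # vector-normalize each column
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(ncols)]
    v = [[r[j] / norms[j] for j in range(ncols)] for r in rows]
    # ideal and anti-ideal points, direction depending on indicator type
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = {}
    for name, r in zip(names, v):
        d_pos = math.dist(r, ideal)   # distance to the ideal point
        d_neg = math.dist(r, anti)    # distance to the anti-ideal point
        scores[name] = d_neg / (d_pos + d_neg)  # closeness coefficient
    return scores

scores = topsis(units)
ranking = sorted(scores, key=scores.get, reverse=True)
```

    The closeness coefficients are the composite indexes; the PROMETHEE-RS step in the paper then compares such indexes produced by different models.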

  4. A materials selection procedure for sandwiched beams via parametric optimization with applications in automotive industry

    International Nuclear Information System (INIS)

    Aly, Mohamed F.; Hamza, Karim T.; Farag, Mahmoud M.

    2014-01-01

    Highlights: • Sandwich panels optimization model. • Sandwich panels design procedure. • Study of sandwich panels for automotive vehicle flooring. • Study of sandwich panels for truck cabin exterior. - Abstract: The future of the automotive industry faces many challenges in meeting increasingly strict restrictions on emissions, energy usage and recyclability of components alongside the need to maintain cost competitiveness. Weight reduction through innovative design of components and proper material selection can have a profound impact towards attaining such goals, since most of the lifecycle energy usage occurs during the operation phase of a vehicle. In electric and hybrid vehicles, weight reduction has the additional important effect of extending the electric-mode driving range between stops or gasoline mode. This paper adopts parametric models for design optimization and material selection of sandwich panels with the objective of weight and cost minimization subject to structural integrity constraints such as strength, stiffness and buckling resistance. The proposed design procedure employs a pre-compiled library of candidate sandwich panel material combinations, for which optimization of the layered thicknesses is conducted and the best one is reported. Example demonstration studies from the automotive industry are presented for the replacement of Aluminum and Steel panels with polypropylene-filled sandwich panel alternatives

  5. Optimal Corridor Selection for a Road Space Management Strategy: Methodology and Tool

    Directory of Open Access Journals (Sweden)

    Sushant Sharma

    2017-01-01

    Full Text Available Nationwide, there is a growing realization that there are valuable benefits to using the existing roadway facilities to their full potential rather than expanding capacity in a traditional way. Currently, state DOTs are looking for cost-effective transportation solutions to mitigate the growing congestion and increasing funding gaps. Innovative road space management strategies like narrowing multiple lanes (three or more) and the shoulder to add a lane enhance utilization while eliminating the costs associated with constructing new lanes. Although this strategy (among many) generally leads to better mobility, identifying optimal corridors is a challenge and may affect the benefits. Further, there is a likelihood that added capacity may provide localized benefits at the expense of system-level performance measures (travel time and crashes) because of the relocation of traffic operational bottlenecks. This paper develops a novel transportation programming and investment decision method to identify optimal corridors for adding capacity in the network by leveraging lane widths. The methodology explicitly takes into consideration the system-level benefits and safety. The programming compares the two conflicting objectives of system travel time and safety benefits to find an optimal solution.

  6. Fuzzy preference based interactive fuzzy physical programming and its application in multi-objective optimization

    International Nuclear Information System (INIS)

    Zhang, Xu; Huang, Hong Zhong; Yu, Lanfeng

    2006-01-01

    Interactive Fuzzy Physical Programming (IFPP), developed in this paper, is a new efficient multi-objective optimization method which retains the advantages of physical programming while considering the fuzziness of the designer's preferences. The fuzzy preference function is introduced based on the model of linear physical programming and is used to guide the search for improved solutions by interactive decision analysis. An example of the multi-objective optimization design of the spindle of an internal grinder demonstrates that the improved preference conforms to the subjective desires of the designer

  7. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of measurement programs devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All...... the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement program...

  8. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered; the design problem is formulated as an optimization problem to minimize the total expected cost of the measurement program. All the calculations are based on a priori knowledge...... and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement program for estimating the modal damping parameters...

  9. Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control

    Energy Technology Data Exchange (ETDEWEB)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au [Macquarie University, Department of Mathematics (Australia); Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au [Flinders University, Flinders Mathematical Sciences Laboratory, School of Computer Science, Engineering and Mathematics (Australia)

    2015-04-15

    The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria but a possibility of the extension of results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solution of these semi-infinite LP problems and their duals (that can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  10. Modeling for deformable mirrors and the adaptive optics optimization program

    International Nuclear Information System (INIS)

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-01-01

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language

  11. Integrating packing and distribution problems and optimization through mathematical programming

    Directory of Open Access Journals (Sweden)

    Fabio Miguel

    2016-06-01

    Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of some objects of different volumes to be packed into the minimal number of containers of the same or different size. An optimal solution to this NP-Hard problem can be approximated by means of meta-heuristic methods. On the other hand, we consider the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), which is a variant of the Travelling Salesman Problem (again an NP-Hard problem) with extra constraints. Here we model these two problems in a single framework and use evolutionary meta-heuristics to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
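
    A minimal constructive heuristic for the BPP stage can be sketched with first-fit decreasing, shown here as a stand-in for the evolutionary metaheuristic used in the paper:

```python
# First-fit decreasing: sort items by volume, then place each item into
# the first open bin that still has room, opening a new bin otherwise.
def first_fit_decreasing(volumes, capacity):
    bins = []  # each bin is a list of packed item volumes
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:  # no open bin fits this item: open a new one
            bins.append([v])
    return bins

bins = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)  # -> 2 bins
```

    In the integrated setting of the paper, each bin would correspond to a vehicle whose load then feeds the CVRPTW routing stage.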

  12. A screening method for the optimal selection of plate heat exchanger configurations

    Directory of Open Access Journals (Sweden)

    Pinto J.M.

    2002-01-01

    Full Text Available An optimization method for determining the best configuration(s) of gasketed plate heat exchangers is presented. The objective is to select the configuration(s) with the minimum heat transfer area that still satisfies constraints on the number of channels, the pressure drop of both fluids, the channel flow velocities and the exchanger thermal effectiveness. The configuration of the exchanger is defined by six parameters, which are as follows: the number of channels, the numbers of passes on each side, the fluid locations, the feed positions and the type of flow in the channels. The resulting configuration optimization problem is formulated as the minimization of the exchanger heat transfer area and a screening procedure is proposed for its solution. In this procedure, subsets of constraints are successively applied to eliminate infeasible and nonoptimal solutions. Examples show that the optimization method is able to successfully determine a set of optimal configurations with a minimum number of exchanger evaluations. Approximately 5% of the pressure drop and channel velocity calculations and 1% of the thermal simulations are required for the solution.
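
    The screening idea, successively applying subsets of constraints so that cheap checks prune candidates before the expensive thermal simulation, can be sketched as follows; the configuration space and constraint formulas are invented for illustration:

```python
# Candidate configurations over a toy two-parameter space; the real
# exchanger configuration has six parameters.
candidates = [{"channels": c, "passes": p}
              for c in range(4, 41, 2) for p in (1, 2, 4)]

def feasible_channels(cfg):   # cheap geometric bound on channel count
    return 6 <= cfg["channels"] <= 30

def feasible_pressure(cfg):   # stand-in for the pressure-drop/velocity check
    return cfg["channels"] / cfg["passes"] >= 4

def feasible_thermal(cfg):    # stand-in for the expensive thermal simulation
    return cfg["channels"] * cfg["passes"] >= 24

# Each stage only sees the survivors of the previous, cheaper stage.
stage1 = [c for c in candidates if feasible_channels(c)]
stage2 = [c for c in stage1 if feasible_pressure(c)]
stage3 = [c for c in stage2 if feasible_thermal(c)]
best = min(stage3, key=lambda c: c["channels"])  # fewest channels ~ least area
```

    Ordering the constraints this way is what lets the paper's procedure run only a small fraction of the hydraulic and thermal calculations.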

  13. Joint Antenna Selection and Precoding Optimization for Small-Cell Network with Minimum Power Consumption

    Directory of Open Access Journals (Sweden)

    Qiang Sun

    2017-01-01

    Full Text Available We focus on the power consumption problem for a downlink multiuser small-cell network (SCN), considering both quality of service (QoS) and power constraints. First, based on a practical power consumption model taking into account both dynamic transmit power and static circuit power, we formulate the power consumption optimization problem, transform it into a convex problem using the semidefinite relaxation (SDR) technique, and obtain the optimal solution with the CVX tool. We further note that the SDR-based solution becomes infeasible for realistic implementation due to its heavy backhaul burden and computational complexity. To this end, we propose an alternative suboptimal algorithm with low implementation overhead and complexity, based on minimum mean square error (MMSE) precoding. Furthermore, we propose a distributed correlation-based antenna selection (DCAS) algorithm combined with our optimization algorithms to reduce the static circuit power consumption of the SCN. Finally, simulation results demonstrate that our proposed suboptimal algorithm is very effective for power consumption minimization, with significantly reduced backhaul burden and computational complexity. Moreover, we show that our optimization algorithms with DCAS consume less power than the other benchmark algorithms.

  14. An integrated approach of topology optimized design and selective laser melting process for titanium implants materials.

    Science.gov (United States)

    Xiao, Dongming; Yang, Yongqiang; Su, Xubin; Wang, Di; Sun, Jianfeng

    2013-01-01

    The load-bearing bone implant materials should have sufficient stiffness and large porosity, which interact, since larger porosity causes lower mechanical properties. This paper seeks the maximum-stiffness architecture under a specific volume fraction constraint using a topology optimization approach; that is, maximum porosity can be achieved with predefined stiffness properties. The effective elastic moduli of conventional cubic and topology-optimized scaffolds were calculated using the finite element analysis (FEA) method; also, specimens with porosities of 41.1%, 50.3%, 60.2% and 70.7%, respectively, were fabricated by the Selective Laser Melting (SLM) process and evaluated by compression testing. Results showed that the computational effective elastic modulus of the optimized scaffolds was approximately 13% higher than that of the cubic scaffolds, while the experimental stiffness values were 76% lower than the computational ones. The combination of the topology optimization approach and the SLM process would be suitable for the development of titanium implant materials in consideration of both porosity and mechanical stiffness.

  15. A Hybrid Programming Framework for Modeling and Solving Constraint Satisfaction and Optimization Problems

    OpenAIRE

    Sitek, Paweł; Wikarek, Jarosław

    2016-01-01

    This paper proposes a hybrid programming framework for modeling and solving of constraint satisfaction problems (CSPs) and constraint optimization problems (COPs). Two paradigms, CLP (constraint logic programming) and MP (mathematical programming), are integrated in the framework. The integration is supplemented with the original method of problem transformation, used in the framework as a presolving method. The transformation substantially reduces the feasible solution space. The framework a...

  16. Exploring Heuristic Action Selection in Agent Programming (extended abstract)

    NARCIS (Netherlands)

    Hindriks, K.V.; Jonker, C.M.; Pasman, W.

    2008-01-01

    Rational agents programmed in agent programming languages derive their choice of action from their beliefs and goals. One of the main benefits of such programming languages is that they facilitate a highlevel and conceptually elegant specification of agent behaviour. Qualitative concepts alone,

  17. Application Of Database Program in selecting Sorghum (Sorghum bicolor L) Mutant Lines

    International Nuclear Information System (INIS)

    H, Soeranto

    2000-01-01

    The computer database programs MSTAT and Paradox have been applied in the field of mutation breeding, especially in the process of selecting plant mutant lines of sorghum. In MSTAT, selecting mutant lines can be done by activating the SELECTION function and then entering mathematical formulas for the selection criterion. Another alternative is to apply the desired selection intensity to the analysis results of the subprogram SORT. Including the selected mutant lines in the BRSERIES program makes their progenies easier to trace in subsequent generations. In Paradox, an application program for selecting mutant lines can be made by combining the Table, Form and Report facilities. Selecting mutant lines with a defined selection criterion can easily be done by filtering data. As a relational database, Paradox allows the application program for mutant line selection and progeny tracking to be made simpler, more efficient and interactive

  18. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    International Nuclear Information System (INIS)

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved is known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)
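
    The mechanics the report describes, an optimizer wrapped around a numerical integrator that adjusts parameters until the simulated trajectory matches observations, can be sketched as follows; the model, the data, and the golden-section optimizer are invented stand-ins for the package's integration and optimization algorithms:

```python
# Simulate dy/dt = -k*y with explicit Euler integration.
def simulate(k, y0=1.0, dt=0.01, steps=200):
    y, traj = y0, []
    for _ in range(steps):
        y += dt * (-k * y)
        traj.append(y)
    return traj

observed = simulate(0.7)  # "measurements" generated with true k = 0.7

def sse(k):  # sum of squared errors between simulation and data
    return sum((a - b) ** 2 for a, b in zip(simulate(k), observed))

# Simple golden-section search over the single unknown parameter.
def golden_section(f, lo, hi, tol=1e-6):
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

k_hat = golden_section(sse, 0.1, 2.0)  # recovers k close to 0.7
```

    The same loop structure applies whatever the integrator: the optimizer treats each full simulation run as one objective-function evaluation.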

  19. Optimization of fuel-cell tram operation based on two dimension dynamic programming

    Science.gov (United States)

    Zhang, Wenbin; Lu, Xuecheng; Zhao, Jingsong; Li, Jianqiu

    2018-02-01

    This paper proposes an optimal control strategy based on a two-dimension dynamic programming (2DDP) algorithm aimed at minimizing the operation energy consumption of a fuel-cell tram. The energy consumption model with the tram dynamics is first deduced. The optimal control problem is analyzed and the 2DDP strategy is applied to solve it. Optimal tram speed profiles are obtained for each interstation run, consisting of three stages: accelerate to the set speed with the maximum traction power, dynamically adjust to maintain a uniform speed, and decelerate to zero speed with the maximum braking power at a suitably chosen time. The optimal control curves of all the interstation runs are connected with the parking times to form the optimal control method for the whole line. The optimized speed profiles are also simplified for drivers to follow.
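
    The three-stage profile can be sketched as a simple simulation: accelerate at full traction power, cruise at the set speed, and begin maximum braking once the remaining distance equals the stopping distance. All numbers are illustrative, not from the paper:

```python
# Three-stage interstation speed profile: accelerate, cruise, brake.
def speed_profile(distance, v_set, a_max=1.0, b_max=1.2, dt=0.1):
    x, v, profile = 0.0, 0.0, [0.0]
    while True:
        if distance - x <= v * v / (2 * b_max):  # braking point reached
            a = -b_max
        elif v < v_set:                          # stage 1: full acceleration
            a = a_max
        else:                                    # stage 2: cruise at v_set
            a = 0.0
        v = max(0.0, v + a * dt)
        x += v * dt
        profile.append(v)
        if v == 0.0:                             # stage 3 complete: stopped
            break
    return x, profile

x_final, profile = speed_profile(distance=500.0, v_set=15.0)
```

    The braking condition compares the remaining distance with v²/(2b), so the stop lands near the station up to time-step discretization error; the 2DDP search in the paper additionally chooses the set speed per interstation run.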

  20. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be computed fairly quickly and robustly for high-dimensional data; (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article, mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and is compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account.
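
    The mutual-information measure between a feature and the class labels can be computed directly from empirical counts; the SDP search itself is not reproduced here. The toy data are constructed so one feature is perfectly informative and the other carries no information:

```python
import math
from collections import Counter

# Discrete mutual information I(X; C) from empirical joint and marginal
# frequencies, in bits.
def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Toy data: f1 reproduces the class label exactly, f2 is independent of it.
labels = [0, 0, 1, 1, 0, 1, 0, 1]
f1 = [0, 0, 1, 1, 0, 1, 0, 1]      # I(f1; C) = H(C) = 1 bit
f2 = [0, 1, 0, 1, 0, 0, 1, 1]      # I(f2; C) = 0 bits
mi1 = mutual_information(f1, labels)
mi2 = mutual_information(f2, labels)
```

    A subset-selection strategy, whether sequential, SDP-based, or otherwise, then optimizes an objective built from such pairwise terms.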

  1. A Hybrid Optimized Weighted Minimum Spanning Tree for the Shortest Intrapath Selection in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Matheswaran Saravanan

    2014-01-01

    Full Text Available Wireless sensor network (WSN) consists of sensor nodes that need energy-efficient routing techniques as they have limited battery power, computing, and storage resources. WSN routing protocols should enable reliable multihop communication with energy constraints. Clustering is an effective way to reduce overheads, and when this is aided by effective resource allocation, it results in reduced energy consumption. In this work, a novel hybrid evolutionary algorithm called Bee Algorithm-Simulated Annealing Weighted Minimal Spanning Tree (BASA-WMST) routing is proposed, in which randomly deployed sensor nodes are split into the best possible number of independent clusters with cluster head and optimal route. The former gathers data from sensors belonging to the cluster, forwarding them to the sink. The shortest intrapath selection for the cluster is performed using the Weighted Minimum Spanning Tree (WMST). The proposed algorithm computes the distance-based Minimum Spanning Tree (MST) of the weighted graph for the multihop network. The weights are dynamically changed based on the energy level of each sensor during route selection and optimized using the proposed bee algorithm-simulated annealing algorithm.
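
    The minimum-spanning-tree stage can be sketched with Prim's algorithm over an energy-weighted graph; the node coordinates, energy levels, and the particular weighting rule below are invented, not the paper's:

```python
import math

# Five sensor nodes with 2-D coordinates and residual energy levels.
nodes = {0: (0, 0), 1: (2, 1), 2: (4, 0), 3: (1, 3), 4: (3, 3)}
energy = {0: 1.0, 1: 0.9, 2: 0.5, 3: 0.8, 4: 0.7}

def weight(u, v):
    # distance penalized by the receiver's depleted energy, so routes
    # prefer well-charged relays (one plausible weighting, not the paper's)
    return math.dist(nodes[u], nodes[v]) / energy[v]

# Prim's algorithm: repeatedly attach the cheapest outside node to the tree.
def prim(root=0):
    in_tree, edges = {root}, []
    while len(in_tree) < len(nodes):
        u, v = min(((u, v) for u in in_tree for v in nodes if v not in in_tree),
                   key=lambda e: weight(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

tree = prim()  # 4 edges spanning the 5 nodes
```

    Recomputing the tree as energy levels change is what makes the weights "dynamic" in the sense of the abstract.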

  2. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    Science.gov (United States)

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, support vector machine (SVM) has excellent performance on classification and prediction and is widely used on disease diagnosis or medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for Dermatology and Zoo databases. Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, Taguchi method was jointly combined with SVM classifier in order to optimize parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for Dermatology and Zoo databases.

  3. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since it takes seemingly forever to use brute force in exhaustively trying every possible combination of features, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.

  4. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence.

    Science.gov (United States)

    Li, Sui-Xian

    2018-05-07

    Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal due to the need to predefine the first filter of the selected filter set to be the one with the maximum ℓ₂ norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter is conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set are discovered. Besides minimization of the condition number, the geometric features of the best-performed filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution for the peaks of the filters and substantial overlaps of the transmittance curves of the adjacent filters. Therefore, the best-performed filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the systems. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
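
    The selection idea, trying every filter as the first one and preferring sets whose stacked transmittance matrix has a small condition number, can be sketched as follows. The transmittance curves are synthetic Gaussian bumps, and the greedy growth step is a simplification of the MLI vector analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transmittance curves: 20 candidate broadband filters sampled
# at 31 wavelengths (smooth Gaussian bumps, not real filter data).
wl = np.linspace(0.0, 1.0, 31)
centers = rng.uniform(0.0, 1.0, 20)
widths = rng.uniform(0.05, 0.3, 20)
filters = np.exp(-((wl[None, :] - centers[:, None]) / widths[:, None]) ** 2)

def select_filters(filters, k=3):
    # Try every filter as the first one (mirroring the paper's exhaustive
    # alternative to max-norm seeding), then grow each set greedily by
    # minimizing the condition number of the stacked transmittance matrix.
    best = None
    for first in range(len(filters)):
        chosen = [first]
        while len(chosen) < k:
            cands = [i for i in range(len(filters)) if i not in chosen]
            chosen.append(min(
                cands, key=lambda i: np.linalg.cond(filters[chosen + [i]])))
        cond = np.linalg.cond(filters[chosen])
        if best is None or cond < best[0]:
            best = (cond, chosen)
    return best

cond, chosen = select_filters(filters)
```

    A well-conditioned set of transmittance vectors is close to linearly independent, which is the property the MLI criterion targets.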

  5. Feature Selection and Parameter Optimization of Support Vector Machines Based on Modified Artificial Fish Swarm Algorithms

    Directory of Open Access Journals (Sweden)

    Kuan-Cheng Lin

    2015-01-01

    Full Text Available Rapid advances in information and communication technology have made ubiquitous computing and the Internet of Things popular and practicable. These applications create enormous volumes of data, which are available for analysis and classification as an aid to decision-making. Among the classification methods used to deal with big data, feature selection has proven particularly effective. One common approach involves searching through a subset of the features that are the most relevant to the topic or represent the most accurate description of the dataset. Unfortunately, searching through this kind of subset is a combinatorial problem that can be very time consuming. Metaheuristic algorithms are commonly used to facilitate the selection of features. The artificial fish swarm algorithm (AFSA) employs the intelligence underlying fish swarming behavior as a means to overcome optimization of combinatorial problems. AFSA has proven highly successful in a diversity of applications; however, there remain shortcomings, such as the likelihood of falling into a local optimum and a lack of multiplicity. This study proposes a modified AFSA (MAFSA) to improve feature selection and parameter optimization for support vector machine classifiers. Experiment results demonstrate the superiority of MAFSA in classification accuracy using subsets with fewer features for given UCI datasets, compared to the original AFSA.

  6. Filter Selection for Optimizing the Spectral Sensitivity of Broadband Multispectral Cameras Based on Maximum Linear Independence

    Directory of Open Access Journals (Sweden)

    Sui-Xian Li

    2018-05-01

    Full Text Available Previous research has shown the effectiveness of selecting filter sets from among a large set of commercial broadband filters by a vector analysis method based on maximum linear independence (MLI). However, the traditional MLI approach is suboptimal because it predefines the first filter of the selected set as the one with the maximum ℓ2 norm among all available filters. An exhaustive imaging simulation with every single filter serving as the first filter was conducted to investigate the features of the most competent filter set. From the simulation, the characteristics of the most competent filter set were discovered. Besides minimization of the condition number, the geometric features of the best-performing filter set comprise a distinct transmittance peak along the wavelength axis of the first filter, a generally uniform distribution of the peaks of the filters, and substantial overlaps of the transmittance curves of adjacent filters. Therefore, the best-performing filter sets can be recognized intuitively by simple vector analysis and just a few experimental verifications. A practical two-step framework for selecting the optimal filter set is recommended, which guarantees a significant enhancement of the performance of the system. This work should be useful for optimizing the spectral sensitivity of broadband multispectral imaging sensors.
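
The greedy MLI-style selection the abstract describes can be sketched as follows. This is a minimal numpy illustration: the ℓ2-norm seeding rule and the residual-norm (linear independence) criterion follow the abstract's description, while the data layout and function name are assumptions.

```python
import numpy as np

def select_filters_mli(T, k, first=None):
    """Greedy maximum-linear-independence selection of k filters.

    T: (n_filters, n_wavelengths) transmittance matrix.
    first: index of the seed filter; defaults to the filter with the maximum
    l2 norm (the traditional MLI choice the abstract calls suboptimal), so a
    caller can also sweep `first` over all filters, as in the paper's
    exhaustive simulation.
    """
    n = T.shape[0]
    if first is None:
        first = int(np.argmax(np.linalg.norm(T, axis=1)))
    selected = [first]
    for _ in range(k - 1):
        # Orthonormal basis of the span of the already-selected filters
        Q, _ = np.linalg.qr(T[selected].T)
        # Residual of every filter after projection onto that span;
        # the largest residual is "most linearly independent"
        resid = T.T - Q @ (Q.T @ T.T)
        scores = np.linalg.norm(resid, axis=0)
        scores[selected] = -1.0            # exclude already-selected filters
        selected.append(int(np.argmax(scores)))
    return selected
```

The condition number of the chosen submatrix, which the abstract says should be minimized, can then be checked with `np.linalg.cond(T[selected])`.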

  7. Moral Hazard, Adverse Selection and the Optimal Consumption-Leisure Choice under Equilibrium Price Dispersion

    Directory of Open Access Journals (Sweden)

    Sergey Malakhov

    2017-09-01

    Full Text Available The analysis of the optimal consumption-leisure choice under equilibrium price dispersion reveals the methodological difference between the problems of moral hazard and adverse selection. While the phenomenon of moral hazard represents an individual behavioral reaction to the marginal rate of substitution of leisure for consumption proposed by the insurance policy, adverse selection can take place on any imperfect market under equilibrium price dispersion, and it looks like a market phenomenon of natural selection between consumers with different incomes and different propensities to search. The analysis of health insurance, where the propensity to search takes the form of the propensity to seek healthcare, demonstrates that moral hazard takes place when the insurance policy proposes a suboptimal consumption-leisure choice, and that the increase in consumption of medical services with the reduction of leisure time represents not an unlimited demand for “free goods” but a simple process of consumption-leisure optimization. The path of consumerism with consumer-directed plans can partly solve the problem of moral hazard because, in order to eliminate moral hazard, this trend should lead to the re-sale of medical services under health vouchers, as takes place in the life settlement market.

  8. Optimality Conditions for Nondifferentiable Multiobjective Semi-Infinite Programming Problems

    Directory of Open Access Journals (Sweden)

    D. Barilla

    2016-01-01

    Full Text Available We consider a multiobjective semi-infinite programming problem with a feasible set defined by inequality constraints. First we study a Fritz John type necessary condition. Then, we introduce two constraint qualifications and derive the weak and strong Karush-Kuhn-Tucker (KKT) type necessary conditions for an efficient solution of the considered problem. Finally, an extension of a Caristi-Ferrara-Stefanescu result for (Φ,ρ)-invexity is proved, and some sufficient conditions are presented under this weak assumption. All results are given in terms of the Clarke subdifferential.

  9. Optimal Training for Time-Selective Wireless Fading Channels Using Cutoff Rate

    Directory of Open Access Journals (Sweden)

    Tong Lang

    2006-01-01

    Full Text Available We consider the optimal allocation of resources—power and bandwidth—between training and data transmissions for single-user time-selective Rayleigh flat-fading channels under the cutoff rate criterion. The transmitter exploits statistical channel state information (CSI) in the form of the channel Doppler spectrum to embed pilot symbols into the transmission stream. At the receiver, instantaneous, though imperfect, CSI is acquired through minimum mean-square estimation of the channel based on some set of pilot observations. We compute the ergodic cutoff rate for this scenario. Assuming estimator-based interleaving and -PSK inputs, we study two special cases in depth. First, we derive the optimal resource allocation for the Gauss-Markov correlation model. Next, we validate and refine these insights by studying resource allocation for the Jakes model.

  10. Gas load forecasting based on optimized fuzzy c-mean clustering analysis of selecting similar days

    Directory of Open Access Journals (Sweden)

    Qiu Jing

    2017-08-01

    Full Text Available Traditional fuzzy c-means (FCM) clustering in short-term load forecasting is prone to falling into local optima and is sensitive to the initial cluster centers. In this paper, we propose to use the global search capability of the particle swarm optimization (PSO) algorithm to avoid these shortcomings, and to use the optimized FCM to select days similar to the forecast day as training samples for support vector machines. This not only strengthens the regularity of the training samples, but also ensures the consistency of their data characteristics. Experimental results show that the prediction accuracy of this forecasting model is better than that of BP neural network and support vector machine (SVM) algorithms.
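
The FCM step that the abstract's PSO initialization is meant to stabilize can be sketched in plain numpy. This is a generic textbook FCM, not the paper's implementation; the fuzzifier m=2 and all parameter names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-6, seed=0):
    """Plain fuzzy c-means. X: (n_samples, n_features); returns (centers, U),
    where U[i, k] is the membership of sample i in cluster k."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        # weighted cluster centers
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances of every sample to every center, floored to avoid 0-division
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

In the paper's scheme, PSO would search over the initial centers instead of the random start used here; the similar-day selection then keeps the cluster whose membership for the forecast day is highest.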

  11. OPTIMAL BUSINESS DECISION SYSTEM FOR MULTINATIONALS: A MULTIFACTOR ANALYSIS OF SELECTED MANUFACTURING FIRMS

    Directory of Open Access Journals (Sweden)

    Oforegbunam Thaddeus Ebiringa

    2011-03-01

    Full Text Available Traditional MIS has been made more effective through the integration of organization, human and technology factors into a decision matrix. The study is motivated by the need to find an optimal mix of interactive factors that will optimize the result of the decision to apply ICT to manufacturing processes. The study used a factor analysis model based on the sampled opinions of forty (40) operations/production managers and two thousand (2000) production line workers of three leading manufacturing firms: Unilever Plc, PZ Plc and Nigerian Breweries Plc, operating in the Aba Industrial Estate of Nigeria. The results show that a progressive mixed factor loading matrix, based on the preferred ordered importance of resource factors in the formulation, implementation, monitoring, control and evaluation of ICT projects of the selected firms, led to an average capability improvement of 0.764 in decision efficiency. This is considered strategic for achieving balanced corporate growth and development.

  12. PORTFOLIO SELECTION OF INFORMATION SYSTEMS PROJECTS USING PROMETHEE V WITH C-OPTIMAL CONCEPT

    Directory of Open Access Journals (Sweden)

    Jonatas A. de Almeida

    2014-05-01

    Full Text Available This paper presents a multicriteria decision model for selecting a portfolio of information system (IS) projects, which integrates the strategic and organizational view within a multicriteria decision structure. The PROMETHEE V method, based on outranking relations, is applied, considering the c-optimal concept in order to overcome some scaling problems found in the classical PROMETHEE V approach. Then, a procedure is proposed for the final analysis of the c-optimal portfolios found with PROMETHEE V. The organizational view is also discussed, including factors that may influence decision making on which IS projects to include in the portfolio, such as the company's strategic vision and technical aspects that demonstrate how IS contributes value to a company's business.

  13. A Novel Cluster Head Selection Algorithm Based on Fuzzy Clustering and Particle Swarm Optimization.

    Science.gov (United States)

    Ni, Qingjian; Pan, Qianqian; Du, Huimin; Cao, Cen; Zhai, Yuqing

    2017-01-01

    An important objective of a wireless sensor network is to prolong the network life cycle, and topology control is of great significance for this. Building on previous work on cluster head selection in hierarchical topology control, we propose a solution based on fuzzy clustering preprocessing and particle swarm optimization. More specifically, first, a fuzzy clustering algorithm is used for the initial clustering of sensor nodes according to their geographical locations, where a sensor node belongs to a cluster with a determined probability, and the number of initial clusters is analyzed and discussed. Furthermore, the fitness function is designed considering both the energy consumption and distance factors of the wireless sensor network. Finally, the cluster head nodes in the hierarchical topology are determined based on the improved particle swarm optimization. Experimental results show that, compared with traditional methods, the proposed method reduces the mortality rate of nodes and extends the network life cycle.
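
As a toy illustration of a fitness function that "considers both the energy consumption and distance factors", the sketch below scores a candidate set of cluster heads. The weighting, data layout and all names are invented for the example; the paper's actual fitness is not reproduced here.

```python
import math

def cluster_head_fitness(heads, nodes, base_station, alpha=0.5):
    """Toy fitness for a candidate set of cluster heads (lower is better).

    Combines (i) the average distance cost, with every non-head node talking
    to its nearest head and every head talking to the base station, and
    (ii) an energy term that prefers heads with high residual energy.
    heads/nodes: dicts id -> (x, y, residual_energy); heads is a subset.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # distance term
    d_nodes = sum(min(dist(n[:2], heads[h][:2]) for h in heads)
                  for nid, n in nodes.items() if nid not in heads)
    d_bs = sum(dist(heads[h][:2], base_station) for h in heads)
    d_term = (d_nodes + d_bs) / len(nodes)

    # energy term: total network energy over the heads' residual energy
    e_term = sum(n[2] for n in nodes.values()) / sum(heads[h][2] for h in heads)

    return alpha * d_term + (1 - alpha) * e_term
```

A PSO would then evaluate this fitness for each particle's candidate head set and keep the best-scoring one.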

  14. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme and investigated the underlying reasons why some schemes were not DEA-efficient, which clarified how these candidate plans could be improved. Finally, a method was proposed to rank the schemes and select the optimal one. The research offers practical guidance for adjusting the agricultural production structure.
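
The DEA efficiency assessment can be sketched with the input-oriented CCR multiplier model, solved as one linear program per decision-making unit. This is the generic textbook formulation, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency score for each DMU (multiplier form).

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). For DMU o we maximize
    u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0 for all j, u, v >= 0.
    Returns scores in (0, 1]; a score of 1 means DEA-efficient.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u.y_o
        A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores.append(-res.fun)
    return np.array(scores)
```

With one input and one output the score reduces to each unit's output/input ratio divided by the best ratio, which makes the model easy to sanity-check.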

  15. Impact of cultivar selection and process optimization on ethanol yield from different varieties of sugarcane

    Science.gov (United States)

    2014-01-01

    Background The development of ‘energycane’ varieties of sugarcane is underway, targeting the use of both sugar juice and bagasse for ethanol production. The current study evaluated a selection of such ‘energycane’ cultivars for the combined ethanol yields from juice and bagasse, through optimization of dilute acid pretreatment of bagasse for sugar yields. Method A central composite design under response surface methodology was used to investigate the effects of dilute acid pretreatment parameters followed by enzymatic hydrolysis on the combined sugar yield of bagasse samples. The pressed slurry generated from the optimum pretreatment conditions (maximum combined sugar yield) was used as the substrate during batch and fed-batch simultaneous saccharification and fermentation (SSF) processes at different solid loadings and enzyme dosages, aiming to reach an ethanol concentration of at least 40 g/L. Results Significant variations were observed in sugar yields (xylose, glucose and combined sugar yield) from pretreatment-hydrolysis of bagasse from different cultivars of sugarcane. Up to a 33% difference in combined sugar yield between the best-performing varieties and industrial bagasse was observed at optimal pretreatment-hydrolysis conditions. Significant improvement in overall ethanol yield after SSF of the pretreated bagasse was also observed for the best-performing varieties (84.5 to 85.6%) compared to industrial bagasse (74.5%). The ethanol concentration showed an inverse correlation with lignin content and the ratio of xylose to arabinose, but a positive correlation with glucose yield from pretreatment-hydrolysis. The overall assessment of the cultivars showed greater improvement in the final ethanol concentration (26.9 to 33.9%) and combined ethanol yields per hectare (83 to 94%) for the best-performing varieties with respect to industrial sugarcane. Conclusions These results suggest that the selection of sugarcane variety to optimize ethanol

  16. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.

  17. Pretreatment of wastewater: Optimal coagulant selection using Partial Order Scaling Analysis (POSA)

    International Nuclear Information System (INIS)

    Tzfati, Eran; Sein, Maya; Rubinov, Angelika; Raveh, Adi; Bick, Amos

    2011-01-01

    Jar-test is a well-known tool for chemical selection for physical-chemical wastewater treatment. Jar test results show the treatment efficiency in terms of suspended matter and organic matter removal. However, in spite of having all these results, coagulant selection is not an easy task, because one coagulant can remove the suspended solids efficiently but at the same time increase the conductivity. This makes the final selection of coagulants very dependent on the relative importance assigned to each measured parameter. In this paper, the use of Partial Order Scaling Analysis (POSA) and multi-criteria decision analysis is proposed to support the selection of the coagulant and its concentration in a sequencing batch reactor (SBR). Starting from the parameters fixed by the jar-test results, these techniques make it possible to weight these parameters, according to the judgments of wastewater experts, and to establish priorities among coagulants. An evaluation of two commonly used coagulation/flocculation aids (alum and ferric chloride) was conducted, and based on the jar tests and the POSA model, ferric chloride (100 ppm) was the best choice. The results obtained show that POSA and multi-criteria techniques are useful tools for selecting the optimal chemicals for the physical-chemical treatment.

  18. POBE: A Computer Program for Optimal Design of Multi-Subject Blocked fMRI Experiments

    Directory of Open Access Journals (Sweden)

    Bärbel Maus

    2014-01-01

    Full Text Available For functional magnetic resonance imaging (fMRI) studies, researchers can use multi-subject blocked designs to identify active brain regions for a certain stimulus type of interest. Before performing such an experiment, careful planning is necessary to obtain efficient stimulus effect estimators within the available financial resources. The optimal number of subjects and the optimal scanning time for a multi-subject blocked design with fixed experimental costs can be determined using optimal design methods. In this paper, the user-friendly computer program POBE 1.2 (program for optimal design of blocked experiments, version 1.2) is presented. POBE provides a graphical user interface for fMRI researchers to easily and efficiently design their experiments. The computer program POBE calculates the optimal number of subjects and the optimal scanning time for user-specified experimental factors and model parameters so that the statistical efficiency is maximised for a given study budget. POBE can also be used to determine the minimum budget for a given power. Furthermore, a maximin design can be determined as an efficient design for a possible range of values of the unknown model parameters. In this paper, the computer program is described and illustrated with typical experimental factors for a blocked fMRI experiment.
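
The kind of trade-off POBE resolves can be sketched with a toy grid search: choose the number of subjects N and scanning time T (minutes) that minimize the variance of the group-level effect estimator under a budget constraint. The cost and variance models below are invented for illustration and are not POBE's actual ones.

```python
def optimal_design(budget, c_subject, c_minute, var_between, var_within,
                   t_grid=range(10, 121, 5)):
    """Toy optimal-design search.

    Minimizes Var = var_between/N + var_within/(N*T), a generic two-level
    variance model, subject to c_subject*N + c_minute*N*T <= budget.
    Returns (variance, N, T) of the best feasible design on the grid.
    """
    best = None
    for T in t_grid:
        # largest affordable N for this scanning time
        N = int(budget // (c_subject + c_minute * T))
        if N < 2:
            continue
        var = var_between / N + var_within / (N * T)
        if best is None or var < best[0]:
            best = (var, N, T)
    return best
```

Longer scans shrink the within-subject term but leave money for fewer subjects, so the search balances the two, which is exactly the subjects-versus-scanning-time tension the abstract describes.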

  19. Extracting fetal heart beats from maternal abdominal recordings: selection of the optimal principal components

    International Nuclear Information System (INIS)

    Di Maria, Costanzo; Liu, Chengyu; Zheng, Dingchang; Murray, Alan; Langley, Philip

    2014-01-01

    This study presents a systematic comparison of different approaches to the automated selection of the principal components (PC) which optimise the detection of maternal and fetal heart beats from non-invasive maternal abdominal recordings. A public database of 75 4-channel non-invasive maternal abdominal recordings was used for training the algorithm. Four methods were developed and assessed to determine the optimal PC: (1) power spectral distribution, (2) root mean square, (3) sample entropy, and (4) QRS template. The sensitivity of the performance of the algorithm to large-amplitude noise removal (by wavelet de-noising) and maternal beat cancellation methods was also assessed. The accuracy of maternal and fetal beat detection was assessed against reference annotations and quantified using the detection accuracy score F1 [2*PPV*Se / (PPV + Se)], sensitivity (Se), and positive predictive value (PPV). The best performing implementation was assessed on a test dataset of 100 recordings and the agreement between the computed and the reference fetal heart rate (fHR) and fetal RR (fRR) time series quantified. The best performance for detecting maternal beats (F1 99.3%, Se 99.0%, PPV 99.7%) was obtained when using the QRS template method to select the optimal maternal PC and applying wavelet de-noising. The best performance for detecting fetal beats (F1 89.8%, Se 89.3%, PPV 90.5%) was obtained when the optimal fetal PC was selected using the sample entropy method and utilising a fixed-length time window for the cancellation of the maternal beats. The performance on the test dataset was 142.7 beats²/min² for fHR and 19.9 ms for fRR, ranking 14th and 17th respectively (out of 29) when compared to the other algorithms presented at the PhysioNet Challenge 2013. (paper)
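
The detection accuracy score F1 = 2*PPV*Se / (PPV + Se) quoted in the abstract can be computed from matched beat annotations, for example as below. The ±tolerance greedy matching scheme is a common convention for beat scoring, not necessarily the paper's exact procedure.

```python
def detection_scores(reference, detected, tol=0.05):
    """Match detected beat times to reference annotations within +/- tol
    seconds (greedy one-to-one matching on sorted times), then compute
    sensitivity (Se), positive predictive value (PPV) and F1."""
    ref = sorted(reference)
    det = sorted(detected)
    tp, i, j = 0, 0, 0
    while i < len(ref) and j < len(det):
        if abs(ref[i] - det[j]) <= tol:
            tp += 1
            i += 1
            j += 1
        elif det[j] < ref[i]:
            j += 1          # spurious detection, counts against PPV
        else:
            i += 1          # missed reference beat, counts against Se
    se = tp / len(ref) if ref else 0.0
    ppv = tp / len(det) if det else 0.0
    f1 = 2 * ppv * se / (ppv + se) if (ppv + se) else 0.0
    return se, ppv, f1
```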

  20. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.

  1. A reliable computational workflow for the selection of optimal screening libraries.

    Science.gov (United States)

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. Successes of such screening campaigns critically depend on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood brain barrier permeation model was developed and validated (85% and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors which demonstrated the best performance in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software yet due to the usage of generic
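
Step (2) of the workflow applies Lipinski's rule-based filter; a minimal version is sketched below. The at-most-one-violation convention is the usual reading of the rule of five; the function name and argument layout are assumptions for the example.

```python
def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Lipinski rule-of-five filter: a compound is flagged as likely orally
    bioavailable if it violates at most one of the four rules."""
    violations = sum([
        mw > 500,          # molecular weight <= 500 Da
        logp > 5,          # octanol-water partition coefficient (logP) <= 5
        h_donors > 5,      # <= 5 hydrogen-bond donors
        h_acceptors > 10,  # <= 10 hydrogen-bond acceptors
    ])
    return violations <= 1
```

In a library-assessment workflow, the fraction of compounds passing such filters becomes one of the per-library quality scores that the consensus step combines.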

  2. Uncertain and multi-objective programming models for crop planting structure optimization

    Directory of Open Access Journals (Sweden)

    Mo LI,Ping GUO,Liudong ZHANG,Chenglong ZHANG

    2016-03-01

    Full Text Available Crop planting structure optimization is a significant way to increase agricultural economic benefits and improve agricultural water management. The complexities of fluctuating stream conditions, varying economic profits, and uncertainties and errors in estimated modeling parameters, as well as the interplay among economic, social, natural resource and environmental aspects, have led to the necessity of developing crop planting structure optimization models that consider uncertainty and multiple objectives. In this study, three single-objective programming models under uncertainty for crop planting structure optimization were developed: an interval linear programming model, an inexact fuzzy chance-constrained programming (IFCCP) model and an inexact fuzzy linear programming (IFLP) model. Each of the three models takes grayness into account. Moreover, the IFCCP model considers fuzzy uncertainty of parameters/variables and stochastic characteristics of constraints, while the IFLP model takes into account the fuzzy uncertainty of both constraints and objective functions. To support the sustainable development of crop planting structure planning, a fuzzy linear multi-objective programming model based on fuzzy optimization theory was developed, which is capable of reflecting both uncertainties and multiple objectives. In addition, a multi-objective fractional programming model for crop structure optimization was developed to express the multiple objectives quantitatively in one optimization model, with the numerator representing maximum economic benefits and the denominator representing minimum crop planting area allocation. These models better reflect actual situations, considering the uncertainties and multiple objectives of crop planting structure optimization systems. The five models developed were then applied to a real case study in Minqin County, north-west China. The advantages, the applicable conditions and the solution methods
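
The interval linear programming idea can be sketched by solving two deterministic sub-LPs, one at the pessimistic and one at the optimistic bound of the interval coefficients, which brackets the optimal objective value. The toy crop-area numbers below are invented for illustration and are not the Minqin County model.

```python
from scipy.optimize import linprog

def solve(p1, p2, quota):
    """One deterministic sub-LP: maximize profit p1*x1 + p2*x2 over crop
    areas x1, x2 (ha), subject to a water quota and a 100 ha land limit."""
    res = linprog([-p1, -p2],                  # linprog minimizes, so negate
                  A_ub=[[4.0, 6.0],            # water use per hectare per crop
                        [1.0, 1.0]],           # total land
                  b_ub=[quota, 100.0],
                  bounds=[(0, None), (0, None)])
    return -res.fun, res.x

# Profits and the water quota are known only as intervals, e.g.
# p1 in [2, 3], p2 in [3, 4], quota in [360, 480] (all numbers invented):
lo_profit, _ = solve(p1=2.0, p2=3.0, quota=360.0)   # pessimistic bound
hi_profit, _ = solve(p1=3.0, p2=4.0, quota=480.0)   # optimistic bound
```

The decision-maker then knows the achievable profit lies in [lo_profit, hi_profit] and can read off the corresponding planting areas at each bound.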

  3. 76 FR 62312 - Multi-Agency Informational Meeting Concerning Compliance With the Federal Select Agent Program...

    Science.gov (United States)

    2011-10-07

    ... interested individuals to obtain specific regulatory guidance and information on standards concerning biosafety and biosecurity issues related to the Federal Select Agent Program. CDC, APHIS, and CJIS...

  4. A Value-Added Approach to Selecting the Best Master of Business Administration (MBA) Program

    Science.gov (United States)

    Fisher, Dorothy M.; Kiang, Melody; Fisher, Steven A.

    2007-01-01

    Although numerous studies rank master of business administration (MBA) programs, prospective students' selection of the best MBA program is a formidable task. In this study, the authors used a linear-programming-based model called data envelopment analysis (DEA) to evaluate MBA programs. The DEA model connects costs to benefits to evaluate the…

  5. Heuristic Optimization Approach to Selecting a Transport Connection in City Public Transport

    Directory of Open Access Journals (Sweden)

    Kul’ka Jozef

    2017-02-01

    Full Text Available The article presents a heuristic optimization approach to selecting a suitable transport connection within a city public transport system. The methodology was applied to part of the public transport network in Košice, the second largest city in the Slovak Republic, whose public transport network forms a complex system consisting of three different transport modes, namely bus, tram and trolley-bus transport. The solution focused on examining the individual transport services and their interconnection at the relevant interchange points.

  6. Optimization of Sex Ratio in a Selection Plan for Palas Prolificacy Line

    Directory of Open Access Journals (Sweden)

    Răzvan Popa

    2011-05-01

    Full Text Available The aim of this paper is to optimize the sex ratio in a selection plan, according to the model developed by King (1961), to be applied for prolificacy improvement in the Palas Prolific Line. The method used is modeling, as in most scientific papers on animal breeding. After the simulations, we observed that the most convenient variant was the one that foresees the use of 13 rams for reproduction. This variant offers a genetic gain per generation of 0.47497 additive standard deviations.
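
The genetic-gain arithmetic behind such a selection plan can be sketched with standard truncation-selection formulas. This is a simplification for illustration only: the sex ratio determines the proportion of males that must be kept, which sets the male selection intensity. The paper's 0.47497 figure comes from King's full model, which is not reproduced here.

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def selection_intensity(p):
    """Standardized selection intensity i = phi(z)/p when the top
    proportion p of a normal distribution is selected (z is the
    truncation point, phi the standard normal density)."""
    z = NormalDist().inv_cdf(1 - p)
    phi = exp(-z * z / 2) / sqrt(2 * pi)
    return phi / p

def genetic_gain(p_male, p_female, accuracy, sigma_a):
    """Toy expected gain per generation: mean selection intensity of the
    two sexes times selection accuracy times the additive genetic SD."""
    i_bar = (selection_intensity(p_male) + selection_intensity(p_female)) / 2
    return i_bar * accuracy * sigma_a
```

Keeping fewer rams (a smaller p_male) raises the male selection intensity and hence the expected gain, which is the trade-off the simulated variants explore.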

  7. Selective methodology of population dynamics for optimizing a multiobjective environment of job shop production

    Directory of Open Access Journals (Sweden)

    Santiago Ruiz

    2015-01-01

    Full Text Available This paper develops a methodology based on population genetics to improve the performance of two or more variables in job shop production systems. The methodology applies a genetic algorithm with special features in the selection of individuals as they pass from generation to generation. In comparison with the FIFO method, the proposed methodology showed better results for the makespan, idle time and energy cost variables. When compared with NSGA-II, the methodology did not show relevant differences in makespan and idle time; however, better performance was obtained in energy cost and, especially, in the number of iterations required to reach the optimal makespan.

  8. Decision-Making Approach to Selecting Optimal Platform of Service Variants

    Directory of Open Access Journals (Sweden)

    Vladimir Modrak

    2016-01-01

    Full Text Available Nowadays, it is anticipated that service sector companies will be inspired to follow the mass customization trends of the industrial sector. However, services are more abstract than products, and therefore concepts for mass customization in the manufacturing domain cannot be transferred without methodical change. This paper focuses on the development of a methodological framework to support decisions in the selection of an optimal platform of service variants when compatibility problems between service options occur. The approach is based on the mutual relations between waste and constrained design space entropy. For this purpose, software for quantifying constrained and waste design space was developed. The practicability of the methodology is presented on a realistic case.

  9. Selection and optimization of mooring cables on floating platform for special purposes

    Science.gov (United States)

    Ma, Guang-ying; Yao, Yun-long; Zhao, Chen-yao

    2017-08-01

    This paper studied a new type of assembled marine floating platform for special purposes. The selection and optimization of mooring cables on the floating platform are studied. By using ANSYS AQWA software, the hydrodynamic model of the platform was established to calculate the time history response of the platform motion under complex water environments, such as wind, wave, current and mooring. On this basis, motion response and cable tension were calculated with different cable mooring states under the designed environmental load. Finally, the best mooring scheme to meet the cable strength requirements was proposed, which can lower the motion amplitude of the platform effectively.

  10. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    Science.gov (United States)

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. Various combinations of objectives are considered for simultaneous optimization, including power loss, voltage stability and shunt capacitor MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation is converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
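
The max-min (Zimmermann-style) conversion of a fuzzy multi-objective LP into a crisp LP, which underlies approaches like MFLP, can be sketched on a toy two-objective problem. All coefficients below are invented; this is not the paper's OPF formulation.

```python
from scipy.optimize import linprog

# Maximize the smallest membership lambda among two fuzzy objectives
# f1 = x1 + 2*x2 and f2 = 2*x1 + x2, each with a linear membership
# mu_i = f_i / 2 (worst target 0, best target 2), subject to the hard
# constraint x1 + x2 <= 1. Decision vector: [x1, x2, lambda].
c = [0.0, 0.0, -1.0]                        # minimize -lambda
A_ub = [
    [-1.0, -2.0, 2.0],                      # mu1 >= lambda  <=>  f1 >= 2*lambda
    [-2.0, -1.0, 2.0],                      # mu2 >= lambda  <=>  f2 >= 2*lambda
    [1.0, 1.0, 0.0],                        # hard constraint x1 + x2 <= 1
]
b_ub = [0.0, 0.0, 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
lam = -res.fun                              # achieved common satisfaction level
```

In an SLP framework, an LP of this shape would be re-solved around the current operating point at each iteration until the OPF solution converges.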

  11. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2015-07-01

    Full Text Available In order to recycle and dispose of all people’s expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drug-stores should take the responsibility of recycling expired drugs, serving as recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disposal. This paper establishes a tri-level programming model to study how the government can optimize an expired drug recycling logistics network and the appropriate subsidy policies. Furthermore, a Hybrid Genetic Simulated Annealing Algorithm (HGSAA is proposed to search for the optimal solution of the model. An experiment is discussed to illustrate the good quality of the recycling logistics network and government subsidies obtained by the HGSAA. The HGSAA is proven to have the ability to converge on the global optimal solution, and to act as an effective algorithm for solving the optimization problem of expired drug recycling logistics networks and government subsidies.

  12. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming.

    Science.gov (United States)

    Huang, Hui; Li, Yuyu; Huang, Bo; Pi, Xing

    2015-07-09

    In order to recycle and dispose of all people's expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drug-stores should take the responsibility of recycling expired drugs, serving as recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disposal. This paper establishes a tri-level programming model to study how the government can optimize an expired drug recycling logistics network and the appropriate subsidy policies. Furthermore, a Hybrid Genetic Simulated Annealing Algorithm (HGSAA) is proposed to search for the optimal solution of the model. An experiment is discussed to illustrate the good quality of the recycling logistics network and government subsidies obtained by the HGSAA. The HGSAA is proven to have the ability to converge on the global optimal solution, and to act as an effective algorithm for solving the optimization problem of expired drug recycling logistics networks and government subsidies.
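
The hybrid genetic/simulated-annealing idea — evolve a population of candidate station selections, but accept worsening offspring with a temperature-controlled probability — can be sketched on a toy facility-selection instance. All costs below are made up; the paper's tri-level model is far richer.

```python
# Hybrid genetic simulated-annealing sketch for picking recycling stations.
# Toy single-level instance with invented costs, not the paper's model.
import math
import random

random.seed(0)
OPEN = [4.0, 3.0, 5.0, 2.0, 6.0]          # fixed cost of opening station j
SERVE = [[1, 4, 6, 3, 9],                  # cost of serving user i from station j
         [5, 2, 7, 4, 8],
         [6, 3, 2, 5, 4]]

def cost(mask):
    if not any(mask):
        return float("inf")                # at least one station must open
    open_cost = sum(o for o, m in zip(OPEN, mask) if m)
    serve_cost = sum(min(row[j] for j in range(5) if mask[j]) for row in SERVE)
    return open_cost + serve_cost

def mutate(mask):
    child = mask[:]
    j = random.randrange(len(child))
    child[j] = 1 - child[j]                # flip one open/closed decision
    return child

pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(8)]
T = 5.0
for _ in range(200):                       # GA loop with SA-style acceptance
    pop.sort(key=cost)
    parent = pop[random.randrange(4)]      # select among the better half
    child = mutate(parent)
    d = cost(child) - cost(parent)
    if d < 0 or random.random() < math.exp(-d / T):
        pop[-1] = child                    # accept: replace the worst member
    T *= 0.98                              # cooling schedule

best = min(pop, key=cost)
print(best, cost(best))
```

The SA acceptance rule lets the genetic search escape local optima early on, while cooling makes it increasingly greedy.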

  13. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
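
The paper's iterative procedure runs a MaxEnt model between picks; a simplified stand-in for "select the most environmentally dissimilar site each round" is greedy farthest-point selection in normalized environmental space. The candidate data here are random, purely for illustration.

```python
# Greedy dissimilar-site selection: repeatedly pick the candidate farthest
# (in normalized environmental space) from everything chosen so far.
# A simplified stand-in for the paper's iterative MaxEnt procedure; toy data.
import numpy as np

rng = np.random.default_rng(1)
# rows = candidate sites; cols = temperature, precipitation, elevation, veg class
env = rng.random((50, 4))
env = (env - env.mean(axis=0)) / env.std(axis=0)   # normalize each factor

chosen = [0]                                        # seed with an arbitrary site
while len(chosen) < 8:
    d = np.linalg.norm(env[:, None, :] - env[None, chosen, :], axis=2)
    nearest = d.min(axis=1)                         # distance to nearest chosen site
    chosen.append(int(nearest.argmax()))            # most dissimilar candidate wins

print(chosen)   # 8 indices spread across the environmental envelope
```

Each round a chosen site has distance zero to itself, so the argmax always lands on a new, maximally dissimilar candidate — mirroring how the iterative selection captures the environmental envelope faster than random designs.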

  14. Multi-objective optimization of cellular scanning strategy in selective laser melting

    DEFF Research Database (Denmark)

    Ahrari, Ali; Deb, Kalyanmoy; Mohanty, Sankhya

    2017-01-01

    The scanning strategy for selective laser melting - an additive manufacturing process - determines the temperature fields during the manufacturing process, which in turn affect residual stresses and distortions, two of the main sources of process-induced defects. The goal of this study is to […] the problem is a combination of combinatorial and choice optimization, which makes it difficult to solve. On a process simulation domain consisting of 32 cells, our multi-objective evolutionary method is able to find a set of trade-off solutions for the defined conflicting objectives, which cannot […]

  15. Variationally optimal selection of slow coordinates and reaction coordinates in macromolecular systems

    Science.gov (United States)

    Noe, Frank

    To efficiently simulate and generate understanding from simulations of complex macromolecular systems, the concept of slow collective coordinates or reaction coordinates is of fundamental importance. Here we introduce variational approaches to approximate the slow coordinates and the reaction coordinates between selected end-states, given MD simulations of the macromolecular system and a (possibly large) basis set of candidate coordinates. We then discuss how to select physically intuitive order parameters that are good surrogates of this variationally optimal result. These results can be used to construct Markov state models or other models of the stationary and kinetic properties, and to parametrize low-dimensional / coarse-grained models of the dynamics. Deutsche Forschungsgemeinschaft, European Research Council.

  16. AN APPLICATION OF FUZZY PROMETHEE METHOD FOR SELECTING OPTIMAL CAR PROBLEM

    Directory of Open Access Journals (Sweden)

    SERKAN BALLI

    2013-06-01

    Full Text Available Most economic, industrial, financial, or political decision problems are multi-criteria. In such multi-criteria problems, the optimal selection of alternatives is a hard and complex process. Several methods have been developed to solve these problems. Promethee is one of the most efficient and simplest of them, and it solves problems involving quantitative criteria. However, in daily life there are criteria that are expressed linguistically and cannot be modeled numerically. Hence, the Promethee method is incomplete for such imprecise linguistic criteria. To remedy this deficiency, a fuzzy set approach can be used. The Promethee method, extended with fuzzy inputs, is applied to car selection among seven different cars in the same class using the criteria price, fuel, performance, and security. The obtained results are appropriate and consistent.
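
The crisp PROMETHEE II core that the fuzzy variant builds on can be sketched in a few lines: pairwise preferences per criterion, weighted aggregation, and net outranking flows. The car scores and weights below are invented, not the paper's data; the fuzzy extension would replace them with linguistic/fuzzy numbers.

```python
# PROMETHEE II net flows with a simple linear preference function.
# Crisp sketch; the paper extends this with fuzzy (linguistic) inputs.
# Scores and weights are illustrative, not taken from the paper.
cars = ["A", "B", "C"]
# criteria: price (min), fuel (min), performance (max), security (max)
scores = [[18, 6.0, 7, 8],
          [22, 5.0, 9, 7],
          [20, 5.5, 8, 9]]
weights = [0.4, 0.2, 0.2, 0.2]
maximize = [False, False, True, True]

def pref(a, b, k):
    d = scores[a][k] - scores[b][k]
    if not maximize[k]:
        d = -d                      # for cost criteria, smaller is better
    return max(d, 0.0)              # linear preference, no threshold

n = len(cars)
phi = []
for a in range(n):
    flow = 0.0
    for b in range(n):
        if a == b:
            continue
        pi_ab = sum(w * pref(a, b, k) for k, w in enumerate(weights))
        pi_ba = sum(w * pref(b, a, k) for k, w in enumerate(weights))
        flow += (pi_ab - pi_ba) / (n - 1)   # net flow contribution
    phi.append(flow)

ranking = sorted(zip(cars, phi), key=lambda t: -t[1])
print(ranking)                      # car "A" wins on these toy numbers
```

A real application would first normalize criterion scales (here the price differences dominate the raw preferences), which is one reason preference thresholds or fuzzy memberships are used in practice.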

  17. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    Science.gov (United States)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for the detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function (BRDF). The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. Processing of hyperspectral data can be an important tool in real-time detection applications, and optimal band selection is thus important in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all possible combinations of wavelength-dependent contrasts. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance spectra as well as in the contrast spectra in order to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous feature-detection method, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore…

  18. Computer Program for Analysis, Design and Optimization of Propulsion, Dynamics, and Kinematics of Multistage Rockets

    Science.gov (United States)

    Lali, Mehdi

    2009-03-01

    A comprehensive computer program is designed in MATLAB to analyze, design and optimize the propulsion, dynamics, thermodynamics, and kinematics of any serial multi-staging rocket for a set of given data. The program is quite user-friendly. It comprises two main sections: "analysis and design" and "optimization." Each section has a GUI (Graphical User Interface) in which the rocket's data are entered by the user and by which the program is run. The first section analyzes the performance of the rocket that is previously devised by the user. Numerous plots and subplots are provided to display the performance of the rocket. The second section of the program finds the "optimum trajectory" via billions of iterations and computations, which are done through sophisticated algorithms using numerical methods and incremental integrations. Innovative techniques are applied to calculate the optimal parameters for the engine and to design the "optimal pitch program." This computer program is stand-alone in the sense that it calculates almost every design parameter with regard to rocket propulsion and dynamics. It is meant to be used for actual launch operations as well as educational and research purposes.

  19. Review of tri-generation technologies: Design evaluation, optimization, decision-making, and selection approach

    International Nuclear Information System (INIS)

    Al Moussawi, Houssein; Fardoun, Farouk; Louahlia-Gualous, Hasna

    2016-01-01

    Highlights: • Trigeneration technologies classified and reviewed according to prime movers. • Relevant heat recovery equipment discussed with thermal energy storage. • Trigeneration evaluated based on energy, exergy, economy, environment criteria. • Design, optimization, and decision-making methods classified and presented. • System selection suggested according to user preferences. - Abstract: Electricity, heating, and cooling are the three main components constituting the tripod of energy consumption in residential, commercial, and public buildings all around the world. Their separate generation causes higher fuel consumption, at a time where energy demands and fuel costs are continuously rising. Combined cooling, heating, and power (CCHP) or trigeneration could be a solution for such challenge yielding an efficient, reliable, flexible, competitive, and less pollutant alternative. A variety of trigeneration technologies are available and their proper choice is influenced by the employed energy system conditions and preferences. In this paper, different types of trigeneration systems are classified according to the prime mover, size and energy sequence usage. A leveled selection procedure is subsequently listed in the consecutive sections. The first level contains the applied prime mover technologies which are considered to be the heart of any CCHP system. The second level comprises the heat recovery equipment (heating and cooling) of which suitable selection should be compatible with the used prime mover. The third level includes the thermal energy storage system and heat transfer fluid to be employed. For each section of the paper, a survey of conducted studies with CHP/CCHP implementation is presented. A comprehensive table of evaluation criteria for such systems based on energy, exergy, economy, and environment measures is performed, along with a survey of the methods used in their design, optimization, and decision-making. Moreover, a classification

  20. An Improved Particle Swarm Optimization for Solving Bilevel Multiobjective Programming Problem

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2012-01-01

    Full Text Available An improved particle swarm optimization (PSO) algorithm is proposed for solving the bilevel multiobjective programming problem (BLMPP). For such problems, the proposed algorithm directly simulates the decision process of bilevel programming, which is different from most traditional algorithms designed for specific versions or based on specific assumptions. The BLMPP is transformed so as to solve multiobjective optimization problems in the upper level and the lower level interactively by the improved PSO, and a set of approximate Pareto optimal solutions for the BLMPP is obtained using an elite strategy. This interactive procedure is repeated until the accurate Pareto optimal solutions of the original problem are found. Finally, some numerical examples are given to illustrate the feasibility of the proposed algorithm.
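
A minimal single-objective PSO core of the kind such bilevel methods build on is shown below on a stand-in test function (the sphere function). This is illustrative only; the paper nests two interacting swarms (upper and lower level) and maintains an elite archive of Pareto points rather than a single global best.

```python
# Minimal PSO: a swarm of particles minimizing the sphere function.
# Stand-in objective; not the paper's bilevel multiobjective formulation.
import random

random.seed(3)

def f(x):                       # sphere function: global minimum at the origin
    return sum(v * v for v in x)

DIM, N, STEPS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5       # inertia, cognitive and social coefficients

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]     # each particle's best-seen position
gbest = min(pbest, key=f)[:]    # swarm's best-seen position

for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

print(gbest, f(gbest))          # converges near the optimum at the origin
```

In the bilevel setting, evaluating a particle at the upper level requires running a lower-level optimization to (approximate) optimality, which is what makes the interactive procedure expensive.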

  1. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    Science.gov (United States)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main body of power grid operation, county-level power supply enterprises undertake an important mission: guaranteeing the security of power grid operation and safeguarding the social order of power use. The optimization of grid construction projects has been a key issue for the power supply capacity and service level of grid enterprises. According to the actual situation of power grid construction project optimization in county-level power enterprises, and on the basis of qualitative analysis of the projects, this paper builds a bi-level programming model based on quantitative analysis. The upper layer of the model is the target restriction of the optimal portfolio; the lower layer is the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example illustrates the operation of the model and its optimization results. Through qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of power grid enterprises' project selection.

  2. Chapter 1: Assessing pollinator habitat services to optimize conservation programs

    Science.gov (United States)

    Iovanna, Richard; Ando, Amy W.; Swinton, Scott; Hellerstein, Daniel; Kagan, Jimmy; Mushet, David M.; Otto, Clint R.; Rewa, Charles A.

    2017-01-01

    Pollination services have received increased attention over the past several years, and protecting foraging area is beginning to be reflected in conservation policy. This case study considers the prospects for doing so in a more analytically rigorous manner, by quantifying the pollination services for sites being considered for ecological restoration. The specific policy context is the Conservation Reserve Program (CRP), which offers financial and technical assistance to landowners seeking to convert sensitive cropland back to some semblance of the prairie (or, to a lesser extent, forest or wetland) ecosystem that preceded it. Depending on the mix of grasses and wildflowers that are established, CRP enrollments can provide pollinator habitat. Further, depending on their location, they will generate related services, such as biological control of crop pests, recreation, and aesthetics. While offers to enroll in CRP compete based on cost and some anticipated benefits, the eligibility and ranking criteria do not reflect these services to a meaningful degree. Therefore, we develop a conceptual value diagram to identify the sequence of steps and associated models and data necessary to quantify the full range of services, and find that critical data gaps, some of which are artifacts of policy, preclude the application of benefit-relevant indicators (BRIs) or monetization. However, we also find that there is considerable research activity underway to fill these gaps. In addition, a modeling framework has been developed that can estimate field-level effects on services as a function of landscape context. The approach is inherently scalable and not limited in geographic scope, which is essential for a program with a national footprint. The parameters in this framework are sufficiently straightforward that expert judgment could be applied as a stopgap approach until empirically derived estimates are available. While monetization of benefit-relevant indicators of yield

  3. An Optimal Program Initiative Selection Model for USMC Program Objective Memorandum Planning

    Science.gov (United States)

    1993-03-01

    sales volume. The above examples are only three applications. A complete list would be too long to list here. However, there is continued interest in the…

  4. A stochastic programming approach towards optimization of biofuel supply chain

    International Nuclear Information System (INIS)

    Azadeh, Ali; Vafa Arani, Hamed; Dashti, Hossein

    2014-01-01

    Bioenergy has been recognized as an important source of energy that will reduce dependency on petroleum. It would have a positive impact on the economy, environment, and society. Production of bioenergy is expected to increase. As a result, we foresee an increase in the number of biorefineries in the near future. This paper analyzes challenges with supplying biomass to a biorefinery and shipping biofuel to demand centers. A stochastic linear programming model is proposed within a multi-period planning framework to maximize the expected profit. The model deals with a time-staged, multi-commodity, production/distribution system, facility locations and capacities, technologies, and material flows. We illustrate the model outputs and discuss the results through numerical examples considering disruptions in biofuel supply chain. Finally, sensitivity analyses are performed to gain managerial insights on how profit changes due to existing uncertainties. - Highlights: • A robust model of biofuel SC is proposed and a sensitivity analysis implemented. • Demand of products is a function of price and GBM (Geometric Brownian Motion) is used for prices of biofuels. • Uncertainties in SC network are captured through defining probabilistic scenarios. • Both traditional feedstock and lignocellulosic biomass are considered for biofuel production. • Developed model is applicable to any related biofuel supply chain regardless of region

  5. Application of linear programming and perturbation theory in optimization of fuel utilization in a nuclear reactor

    International Nuclear Information System (INIS)

    Zavaljevski, N.

    1985-01-01

    The proposed optimization procedure is fast owing to the application of linear programming. Non-linear constraints, which demand iterative application of linear programming, slow down the calculation. Linearization can be performed by different procedures, ranging from simple empirical rules for in-core fuel management to general perturbation theory with higher-order corrections. A mathematical model was formulated for optimization of the improved fuel cycle. A detailed algorithm for determining the minimum amount of fresh fuel at the beginning of each fuel cycle is shown; the problem is linearized by first-order perturbation theory and optimized by linear programming. A numerical illustration of the proposed method was performed for the experimental reactor, mostly to save computer time.
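
Once the reload problem is linearized, the minimum-fresh-fuel step is an ordinary LP. The sketch below uses hypothetical first-order coefficients for reactivity and power peaking (not the paper's data) to show the structure of such a linearized reload optimization.

```python
# Linearized reload-optimization sketch: minimize the fresh-fuel fraction
# subject to first-order (perturbation-theory style) constraints on cycle
# reactivity and power peaking. All coefficients are hypothetical.
from scipy.optimize import linprog

# Variables: fractions of fresh, once-burned, twice-burned fuel in the core.
c = [1.0, 0.0, 0.0]                      # minimize the fresh-fuel fraction

A_ub = [
    [-1.20, -0.40, -0.10],               # reactivity: 1.2*xf + 0.4*x1 + 0.1*x2 >= 0.55
    [ 1.80,  1.00,  0.70],               # peaking:    1.8*xf + 1.0*x1 + 0.7*x2 <= 1.30
]
b_ub = [-0.55, 1.30]

A_eq = [[1.0, 1.0, 1.0]]                 # batch fractions fill the whole core
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
print(res.x)                             # minimal fresh-fuel loading [0.1875, 0.8125, 0]
```

In the iterative scheme described in the record, the coefficients would be re-derived by perturbation theory around the new loading and the LP re-solved until the linearization is self-consistent.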

  6. CiOpt: a program for optimization of the frequency response of linear circuits

    OpenAIRE

    Miró Sans, Joan Maria; Palà Schönwälder, Pere

    1991-01-01

    An interactive personal-computer program for optimizing the frequency response of linear lumped circuits (CiOpt) is presented. CiOpt has proved to be an efficient tool in improving designs where the inclusion of more accurate device models distorts the desired frequency response, as well as in device modeling. The outputs of CiOpt are the element values which best match the obtained and the desired frequency response. The optimization algorithms used (the Fletcher-Powell and Newton's methods,...

  7. Optimization of hot water transport and distribution networks by analytical method: OPTAL program

    International Nuclear Information System (INIS)

    Barreau, Alain; Caizergues, Robert; Moret-Bailly, Jean

    1977-06-01

    This report presents optimization studies of hot water transport and distribution networks by minimizing operating cost. Analytical optimization is used: Lagrange's method of undetermined multipliers. The optimum diameter of each pipe is calculated for minimum network operating cost. The characteristics of the computer program used for the calculations, OPTAL, are given in this report. An example network is calculated and described: 52 branches and 27 customers. Results are discussed [fr

  8. Optimization Models for Reaction Networks: Information Divergence, Quadratic Programming and Kirchhoff’s Laws

    Directory of Open Access Journals (Sweden)

    Julio Michael Stern

    2014-03-01

    Full Text Available This article presents a simple derivation of optimization models for reaction networks leading to a generalized form of the mass-action law, and compares the formal structure of Minimum Information Divergence, Quadratic Programming and Kirchhoff type network models. These optimization models are used in related articles to develop and illustrate the operation of ontology alignment algorithms and to discuss closely connected issues concerning the epistemological and statistical significance of sharp or precise hypotheses in empirical science.

  9. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

    Full Text Available Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2, and NOX and the CO2 emission intensity are selected as the target parameters of the environment subsystem. Fixed-asset investment in the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm including fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for its optimization.

  10. An ant colony optimization based feature selection for web page classification.

    Science.gov (United States)

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the addition of a huge amount of information to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, so as to improve the runtime and accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then applied the well-known C4.5, naive Bayes, and k-nearest-neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and the runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.
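
The ACO feature-selection loop — ants sample feature subsets with probability shaped by pheromone, and good subsets reinforce the pheromone on their features — can be sketched on synthetic data. The scoring function below is a hypothetical stand-in for cross-validated classifier accuracy; nothing here uses the WebKB data or the paper's exact update rules.

```python
# Tiny ant-colony feature-selection sketch: ants build feature subsets with
# probability shaped by pheromone; the iteration's best subset reinforces it.
# score() is a hypothetical stand-in for classifier accuracy, not real data.
import random

random.seed(5)
N_FEAT, ANTS, ITERS = 8, 10, 30
relevant = {0, 2, 5}                       # hypothetical truly useful features

def score(subset):                         # stand-in for CV accuracy
    hits = len(subset & relevant)
    return hits - 0.1 * len(subset)        # reward relevance, penalize size

tau = [1.0] * N_FEAT                       # pheromone trail per feature
for _ in range(ITERS):
    best_sub, best_sc = set(), float("-inf")
    for _ in range(ANTS):
        total = sum(tau)
        sub = {f for f in range(N_FEAT) if random.random() < tau[f] / total * 3}
        sc = score(sub)
        if sc > best_sc:
            best_sub, best_sc = sub, sc
    for f in range(N_FEAT):                # evaporate, then reinforce the winner
        tau[f] *= 0.9
        if f in best_sub:
            tau[f] += 0.5

chosen = sorted(f for f in range(N_FEAT) if tau[f] > 1.0)
print(chosen)
```

After a few dozen iterations the pheromone concentrates on the informative features, so the classifier downstream sees a much smaller input set — the runtime/accuracy benefit the record reports.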

  11. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    Science.gov (United States)

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
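
The quantity the GPe-STN network is proposed to compute is the normalization term in Bayes' rule over available actions. A minimal worked example (illustrative numbers, not from the paper):

```python
# Posterior probability of each available action via Bayes' rule.
# The sum z below is the normalization term the GPe-STN loop is proposed
# to compute; priors and likelihoods are invented for illustration.
priors = {"reach": 0.5, "withhold": 0.3, "groom": 0.2}
likelihood = {"reach": 0.9, "withhold": 0.2, "groom": 0.1}  # P(evidence | action)

unnorm = {a: priors[a] * likelihood[a] for a in priors}
z = sum(unnorm.values())                 # normalization term in Bayes' equation
posterior = {a: v / z for a, v in unnorm.items()}
print(posterior)                         # "reach" dominates given the evidence
```

The modeling question in the record is how this division by z could be implemented by feedback from two GPe cell types with realistic f-I curves, rather than by any single neuron computing the full function.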

  12. Selection of optimal multispectral imaging system parameters for small joint arthritis detection

    Science.gov (United States)

    Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija

    2018-02-01

    Early detection and treatment of arthritis is essential for a successful outcome of the treatment, but it has proven to be very challenging with existing diagnostic methods. Novel methods based on optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging the small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented. The purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with realistic geometry and tissue distributions, based on an MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. A uniform illumination of the finger within the 400-1100 nm spectral range was simulated, and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified, and the optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
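
The weighted-photon idea behind such Monte Carlo light-transport codes — photons carry a weight that is attenuated by absorption at each interaction instead of being killed outright — can be shown in a deliberately reduced 1-D slab version. Coefficients and geometry are invented; the paper's code is fully 3-D, voxelized, and GPU-accelerated.

```python
# Minimal weighted-photon Monte Carlo in a 1-D slab: each photon's weight is
# reduced by the absorbed fraction at every scattering event (implicit capture).
# Toy stand-in for the 3-D CUDA code in the record; parameters are invented.
import math
import random

random.seed(7)
MU_A, MU_S = 0.1, 10.0           # absorption / scattering coefficients (1/mm)
MU_T = MU_A + MU_S
THICKNESS = 1.0                  # slab thickness (mm)

transmitted = 0.0
N = 20000
for _ in range(N):
    z, direction, w = 0.0, 1.0, 1.0
    while True:
        z += direction * (-math.log(1.0 - random.random()) / MU_T)
        if z >= THICKNESS:
            transmitted += w     # photon escapes the far side
            break
        if z <= 0.0:
            break                # escapes the near side (diffusely reflected)
        w *= MU_S / MU_T         # implicit capture: shed the absorbed fraction
        direction = random.choice((-1.0, 1.0))   # isotropic 1-D scattering
        if w < 1e-4:
            break                # weight cutoff (crude stand-in for roulette)

print(transmitted / N)           # fraction of incident light transmitted
```

Recording the exit direction as well as the weight is what would let a 3-D version tally reflectance and transmittance images per acceptance angle, as the study does.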

  13. Anaerobic digester systems (ADS) for multiple dairy farms: A GIS analysis for optimal site selection

    International Nuclear Information System (INIS)

    Thompson, Ethan; Wang, Qingbin; Li, Minghao

    2013-01-01

    While anaerobic digester systems (ADS) have been increasingly adopted by large dairy farms to generate marketable energy products, like electricity, from animal manure, there is a growing need for assessing the feasibility of regional ADS for multiple farms that are not large enough to capitalize their own ADS. Using geographical information system (GIS) software, this study first identifies potential sites in a dairy region in Vermont, based on geographical conditions, current land use types, and energy distribution infrastructure criteria, and then selects the optimal sites for a given number of ADS, based on the number of dairy farms to be served, the primary energy input to output (PEIO) ratio of ADS, and the existing transportation network. This study suggests that GIS software is a valid technical tool for identifying the potential and optimal sites for ADS. The empirical findings provide useful information for assessing the returns of alternative numbers of ADS in this region, and the research procedures can be modified easily to incorporate any changes in the criteria for this region and can be applied in other regions with different conditions and criteria. - Highlights: • This study examines the feasibility of regional ADS for multiple dairy farms. • GIS is used to identify candidate sites and optimal locations for ADS in a dairy region. • Model includes environmental, social, infrastructure, and energy return criteria. • Empirical analysis provides scenario results on 1–15 ADS in the study region. • Method could be applied to other regions with different conditions and criteria

  14. Optimal complex exponentials BEM and channel estimation in doubly selective channel

    International Nuclear Information System (INIS)

    Song, Lijun; Lei, Xia; Yu, Feng; Jin, Maozhu

    2016-01-01

    Over a doubly selective channel, an optimal complex exponentials basis expansion model (CE-BEM) is required to characterize the transmission in the transform domain, in order to reduce the huge number of parameters estimated when the impulse response is estimated directly in the time domain. This paper proposes an improved CE-BEM that alleviates the high-frequency sampling error caused by the conventional CE-BEM. On the one hand, with the improved CE-BEM the sampling points lie within the Doppler spread spectrum and the maximum sampling frequency equals the maximum Doppler shift. On the other hand, we optimize the basis function and the basis dimension of the CE-BEM, and obtain a closed-form solution for the EM-based channel estimation differential operator by exploiting the optimal BEM. Finally, numerical results and theoretical analysis show that the basis dimension depends mainly on the maximum Doppler shift and the signal-to-noise ratio (SNR). With a fixed number of pilot symbols, a higher basis dimension yields a smaller modeling error but reduces the accuracy of parameter estimation, implying a tradeoff between modeling error and estimation accuracy; once the basis dimension is fixed, the basis function determines how accurately the Doppler spread spectrum is described.

  15. Where and how to manage: Optimal selection of conservation actions for multiple species.

    Directory of Open Access Journals (Sweden)

    Astrid van Teeffelen

    2008-01-01

    Full Text Available Multiple alternative options are frequently available for the protection, maintenance or restoration of conservation areas. The choice of a particular management action can have large effects on the species occurring in the area, because different actions have different effects on different species. Together with the fact that conservation funds are limited and particular management actions are costly, it would be desirable to be able to identify where, and what kind of management should be applied to maximize conservation benefits. Currently available site-selection algorithms can identify the optimal set of sites for a reserve network. However, these algorithms have not been designed to answer what kind of action would be most beneficial at these sites when multiple alternative actions are available. We describe an algorithm capable of solving multi-species planning problems with multiple management options per site. The algorithm is based on benefit functions, which translate the effect of a management action on species representation levels into a value, in order to identify the most beneficial option. We test the performance of this algorithm with simulated data for different types of benefit functions and show that the algorithm’s solutions are optimal, or very near globally optimal, partially depending on the type of benefit function used. The good performance of the proposed algorithm suggests that it could be profitably used for large multi-action multi-species conservation planning problems.
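    A minimal greedy sketch of the multi-action planning idea described above: each site offers alternative management actions with different costs and per-species contributions, and a concave (diminishing-returns) benefit function of total representation rewards spreading protection across species. The site data, costs, and square-root benefit function are illustrative assumptions, not the paper's algorithm or benefit functions.

```python
import math

def total_benefit(rep):
    # Diminishing returns: sqrt benefit per species, summed over species.
    return sum(math.sqrt(r) for r in rep.values())

def greedy_plan(sites, budget):
    """Repeatedly pick the (site, action) pair with the best marginal
    benefit per unit cost, at most one action per site, within budget."""
    species = {s for acts in sites.values() for _, gains in acts.values() for s in gains}
    chosen, rep, spent = {}, {s: 0.0 for s in species}, 0.0
    while True:
        best = None
        for site, actions in sites.items():
            if site in chosen:
                continue  # at most one action per site
            for action, (cost, gains) in actions.items():
                if spent + cost > budget:
                    continue
                trial = {s: rep[s] + gains.get(s, 0.0) for s in rep}
                rate = (total_benefit(trial) - total_benefit(rep)) / cost
                if best is None or rate > best[0]:
                    best = (rate, site, action, cost, trial)
        if best is None:
            return chosen
        _, site, action, cost, rep = best
        chosen[site] = action
        spent += cost

sites = {
    "s1": {"protect": (2.0, {"sp1": 4.0}),
           "restore": (3.0, {"sp1": 1.0, "sp2": 4.0})},
    "s2": {"protect": (2.0, {"sp2": 4.0})},
}
plan = greedy_plan(sites, budget=5.0)
```

    The concave benefit function is what makes "where and how to manage" interact: once one species is well represented, the marginal value of further actions for it drops, steering later choices toward other species.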

  16. Mechanical Properties of Optimized Diamond Lattice Structure for Bone Scaffolds Fabricated via Selective Laser Melting

    Science.gov (United States)

    Zhang, David Z.; Zhang, Peng; Zhao, Miao; Jafar, Salman

    2018-01-01

    Developments in selective laser melting (SLM) have enabled the fabrication of periodic cellular lattice structures characterized by properties that match bone tissue well and by fluid permeability arising from their interconnected structure. These multifunctional performances are significantly affected by the cell topology and the constitutive properties of the applied materials. In this respect, a diamond unit cell was designed at volume fractions corresponding to the host bone tissue and optimized with a smooth surface at the nodes, leading to fewer stress concentrations. Thirty-three porous titanium samples with different volume fractions, from 1.28 to 18.6%, were manufactured using SLM. All of the samples were tested under compressive load to determine the deformation and failure mechanisms, accompanied by an in-situ approach using digital image correlation (DIC) to reveal the stress–strain evolution. The results showed that lattice structures manufactured by SLM exhibited properties comparable to those of trabecular bone, avoiding stress-shielding effects and increasing the longevity of implants. The curvature of the optimized surface can play a role in regulating the relationship between density and mechanical properties. Owing to the release of stress concentration by the optimized surface, the failure mechanism of the porous titanium changed from bottom-up collapse layer by layer (or cell row by cell row) to a diagonal (45°) shear band, resulting in a significant enhancement of the structural strength. PMID:29510492

  17. Mechanical Properties of Optimized Diamond Lattice Structure for Bone Scaffolds Fabricated via Selective Laser Melting.

    Science.gov (United States)

    Liu, Fei; Zhang, David Z; Zhang, Peng; Zhao, Miao; Jafar, Salman

    2018-03-03

    Developments in selective laser melting (SLM) have enabled the fabrication of periodic cellular lattice structures characterized by properties that match bone tissue well and by fluid permeability arising from their interconnected structure. These multifunctional performances are significantly affected by the cell topology and the constitutive properties of the applied materials. In this respect, a diamond unit cell was designed at volume fractions corresponding to the host bone tissue and optimized with a smooth surface at the nodes, leading to fewer stress concentrations. Thirty-three porous titanium samples with different volume fractions, from 1.28 to 18.6%, were manufactured using SLM. All of the samples were tested under compressive load to determine the deformation and failure mechanisms, accompanied by an in-situ approach using digital image correlation (DIC) to reveal the stress-strain evolution. The results showed that lattice structures manufactured by SLM exhibited properties comparable to those of trabecular bone, avoiding stress-shielding effects and increasing the longevity of implants. The curvature of the optimized surface can play a role in regulating the relationship between density and mechanical properties. Owing to the release of stress concentration by the optimized surface, the failure mechanism of the porous titanium changed from bottom-up collapse layer by layer (or cell row by cell row) to a diagonal (45°) shear band, resulting in a significant enhancement of the structural strength.

  18. Optimization of ISSR-PCR reaction system and selection of primers in Bryum argenteum

    Directory of Open Access Journals (Sweden)

    Ma Xiaoying

    2017-02-01

    Full Text Available In order to determine the optimum ISSR-PCR reaction system for the moss Bryum argenteum, the concentrations of template DNA, primers, dNTPs, Mg2+ and Taq DNA polymerase were each optimized at four levels using an orthogonal PCR experimental design. Suitable primers were screened from 100 candidates by temperature-gradient PCR, and the optimal annealing temperature of each screened primer was determined. The results showed that the optimized 20 μL ISSR-PCR reaction system was as follows: template DNA 20 ng/20 μL, primers 0.45 μmol/L, Mg2+ 2.65 mmol/L, Taq DNA polymerase 0.4 U/20 μL, dNTPs 0.45 mmol/L. Using this system, 50 primers with clear bands, good repeatability and high polymorphism were selected from the 100 candidates. The established system, the screened primers and the annealing temperatures provide a theoretical basis for further research on the genetic diversity of bryophytes using ISSR molecular markers.

  19. Seed selection by dark-eyed juncos (Junco hyemalis): optimal foraging with nutrient constraints?

    Science.gov (United States)

    Thompson, D B; Tomback, D F; Cunningham, M A; Baker, M C

    1987-11-01

    Observations of the foraging behavior of six captive dark-eyed juncos (Junco hyemalis) are used to test the assumptions and predictions of optimal diet choice models (Pyke et al. 1977) that include nutrients (Pulliam 1975). The birds sequentially encountered single seeds of niger thistle (Guizotia abyssinica) and of canary grass (Phalaris canariensis) on an artificial substrate in the laboratory. Niger thistle seeds were preferred by all birds, although their profitability in terms of energy intake (J/s) was less than that of canary grass seeds. Of the four nutritional components used to calculate profitabilities (mg/s), lipid content was the only characteristic that could explain the juncos' seed preference. As predicted by optimal diet theory, the probability of consuming niger thistle seeds was independent of seed abundance. However, the consumption of 71-84% rather than 100% of the seeds encountered is not consistent with the prediction of all-or-nothing selection. Canary grass seeds were consumed at a constant rate (no./s) independent of the number of seeds encountered. This consumption pattern invalidates a model that assumes strict maximization. However, it is consistent with the assumption that canary grass seeds contain a nutrient that is required in minimum amounts to meet physiological demands (Pulliam 1975). These experiments emphasize the importance of incorporating nutrients into optimal foraging models and of combining seed preference studies with studies of the metabolic requirements of consumers.
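    The profitability comparison at the heart of the study is simply reward per unit handling time, computed in two currencies: energy (J/s) and a single nutrient (lipid, mg/s). The seed values below are made up for illustration; they are chosen only to reproduce the qualitative pattern the abstract reports (canary grass higher in energy profitability, niger thistle higher in lipid profitability).

```python
# Illustrative seed data (hypothetical numbers, not measurements).
seeds = {
    "niger_thistle": {"energy_J": 20.0, "lipid_mg": 1.5, "handling_s": 4.0},
    "canary_grass":  {"energy_J": 45.0, "lipid_mg": 0.8, "handling_s": 6.0},
}

def profitability(seed, currency):
    # Profitability = reward per unit handling time.
    return seed[currency] / seed["handling_s"]

energy_rank = max(seeds, key=lambda s: profitability(seeds[s], "energy_J"))
lipid_rank = max(seeds, key=lambda s: profitability(seeds[s], "lipid_mg"))
```

    Under a pure energy-maximization model the birds should prefer the energy-ranked seed; the observed preference for the lipid-ranked seed is what motivates adding nutrient constraints to the model.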

  20. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    Directory of Open Access Journals (Sweden)

    Rolph Houben

    2015-04-01

    Full Text Available Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if the available materials could be further optimized by increasing the function's steepness. The objective is to show whether the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric functions (≥9.7%/dB) were first selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists, so the measurement procedure was changed to select sentences at random during testing. Random selection may interfere with a representative occurrence of phonemes; in our material, however, the median phonemic occurrence remained close to that of the original test, indicating that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function.
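    The subset-selection step can be illustrated with simulated data: each sentence has its own psychometric slope (%/dB), and keeping only sentences whose slope is at or above a threshold raises the average slope of the retained material. The slope distribution below is random, seeded for reproducibility; only the 311-sentence count and the 9.7%/dB threshold are taken from the abstract.

```python
import random

random.seed(0)
# Simulated per-sentence slopes (%/dB); distribution parameters are arbitrary.
slopes = [random.gauss(10.2, 2.0) for _ in range(311)]

threshold = 9.7  # %/dB, the selection criterion used in the abstract
steep_subset = [s for s in slopes if s >= threshold]

mean_all = sum(slopes) / len(slopes)
mean_subset = sum(steep_subset) / len(steep_subset)
```

    Note that the study validated the gain on a held-out half of the data, which this sketch omits; evaluating the selected subset on the same data used to select it would overstate the improvement.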