WorldWideScience

Sample records for extremal optimization methods

  1. Extremal Optimization: Methods Derived from Co-Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
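
    The abstract fully specifies the update rule, so it can be illustrated directly. Below is a minimal Python sketch of tau-EO for graph bipartitioning; the power-law rank selection with the single parameter tau follows the abstract, while the fitness definition (fraction of same-side neighbours) and the balanced-swap move are common choices assumed here, not necessarily those of the paper.

        import random
        from collections import defaultdict

        def tau_eo_bipartition(edges, n, tau=1.4, steps=20000, seed=0):
            """tau-EO sketch for graph bipartitioning (after Boettcher & Percus).

            edges: list of (u, v) pairs over nodes 0..n-1, with n even.
            Returns the best balanced partition found and its cut size.
            """
            rng = random.Random(seed)
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            side = [i % 2 for i in range(n)]             # balanced initial partition
            rng.shuffle(side)

            def cut():
                return sum(1 for u, v in edges if side[u] != side[v])

            def fitness(i):                              # fraction of same-side neighbours
                if not adj[i]:
                    return 1.0
                return sum(1 for j in adj[i] if side[j] == side[i]) / len(adj[i])

            best_side, best_cut = side[:], cut()
            ranks = [(k + 1) ** -tau for k in range(n)]  # P(rank k) ~ k^-tau
            for _ in range(steps):
                ranked = sorted(range(n), key=fitness)   # worst component first
                i = rng.choices(ranked, weights=ranks)[0]
                j = rng.choice([v for v in range(n) if side[v] != side[i]])
                side[i], side[j] = side[j], side[i]      # swap keeps the balance
                c = cut()
                if c < best_cut:
                    best_cut, best_side = c, side[:]
            return best_side, best_cut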

  2. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    We show how composites with extremal or unusual thermal expansion coefficients can be designed using a numerical topology optimization method. The composites are composed of two different material phases and void. The optimization method is illustrated by designing materials having maximum thermal...

  3. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    Composites with extremal or unusual thermal expansion coefficients are designed using a three-phase topology optimization method. The composites are made of two different material phases and a void phase. The topology optimization method consists in finding the distribution of material phases that optimizes an objective function (e.g. thermoelastic properties) subject to certain constraints, such as elastic symmetry or volume fractions of the constituent phases, within a periodic base cell. The effective properties of the material structures are found using the numerical homogenization method based... microstructures that realize the bounds. For three phases, the optimal microstructures are also compared with new rigorous bounds and again it is shown that the method yields designed materials with thermoelastic properties that are close to the bounds. The three-phase design method is illustrated by designing...

  4. Aero Engine Fault Diagnosis Using an Optimized Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xinyi Yang

    2016-01-01

    Full Text Available A new extreme learning machine optimized by quantum-behaved particle swarm optimization (QPSO) is developed in this paper. It uses QPSO to select the optimal network parameters, including the number of hidden-layer neurons, according to both the root mean square error on the validation data set and the norm of the output weights. The proposed Q-ELM was applied to real-world classification applications and a gas turbine fan engine diagnostic problem, and was compared with two other optimized ELM methods as well as the original ELM, SVM, and BP methods. Results show that the proposed Q-ELM is a more reliable and suitable method than conventional neural networks and other ELM methods for defect diagnosis of the gas turbine engine.
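
    As an illustration of the selection objective described above (validation RMSE plus the norm of the output weights), here is a hedged Python sketch of a basic ELM; plain search over the hidden-layer size stands in for QPSO, whose update equations the abstract does not give.

        import numpy as np

        def elm_train(X, y, n_hidden, rng):
            """Train a basic ELM: random hidden layer + least-squares output weights."""
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        def select_elm(X_tr, y_tr, X_val, y_val, candidates=range(5, 101, 5),
                       alpha=1e-3, seed=0):
            """Score each hidden-layer size by validation RMSE plus a penalty on
            the norm of the output weights (the paper's two criteria); exhaustive
            search over candidates stands in for QPSO here."""
            rng = np.random.default_rng(seed)
            best = None
            for n_hidden in candidates:
                W, b, beta = elm_train(X_tr, y_tr, n_hidden, rng)
                err = elm_predict(X_val, W, b, beta) - y_val
                score = np.sqrt(np.mean(err ** 2)) + alpha * np.linalg.norm(beta)
                if best is None or score < best[0]:
                    best = (score, n_hidden, W, b, beta)
            return best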

  5. Neighboring extremals of dynamic optimization problems with path equality constraints

    Science.gov (United States)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  6. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  7. Improved extremal optimization for the asymmetric traveling salesman problem

    Science.gov (United States)

    Chen, Yu-Wang; Zhu, Yao-Jia; Yang, Gen-Ke; Lu, Yong-Zai

    2011-11-01

    This paper presents an improved extremal optimization (IEO) algorithm for solving the asymmetric traveling salesman problem (ATSP). At each update step, the IEO algorithm proceeds through two main steps: extremal dynamics and cooperative optimization. As an improvement of extremal optimization (EO), the IEO provides a general combinatorial optimization framework by emphasizing the step of cooperative optimization. In the paper, an effective cooperative optimization strategy with combination of greedy search and random walk is designed in terms of the microscopic characteristics of the ATSP solutions. Simulation results on a set of benchmark ATSP instances show that the proposed IEO algorithm provides satisfactory performance on computational effectiveness and efficiency.

  8. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  9. Extraction method of extreme rainfall data

    Science.gov (United States)

    Zakaria, Roslinazairimah; Radi, Noor Fadhilah Ahmad; Zanariah Satari, Siti

    2017-09-01

    This study aims to describe a step-by-step procedure for extracting extreme rainfall data series. Basically, the extraction of extreme rainfall data can be achieved using two methods: the block maxima (BM) and the peak over threshold (POT) methods. The BM method extracts the extreme rainfall recorded each year during a specific duration, while the POT method extracts all extreme rainfall data above a predefined threshold. Using the BM method, regional pooling of 1-, 3-, 5- and 10-day durations is used, and the maximum rainfall is chosen among the pooled durations within each year. For the POT method, two approaches are presented: Method 1 determines the threshold as the 95th percentile, while Method 2 determines the threshold graphically using the mean residual life plot and the threshold stability plot. Based on the selection of the threshold value, a simulation study is conducted to identify the range of appropriate quantile estimates for a proper selection of the threshold value. To illustrate the methodology, daily rainfall data from the rainfall station at Klinik Chalok Barat, Terengganu are chosen. Both methods are able to identify the extreme rainfall series. This study is important as it helps in identifying a good set of extreme rainfall series for further use, such as in extreme rainfall modelling.
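
    A minimal Python sketch of the two extraction schemes, assuming a gap-free daily rainfall series indexed by date; the 95th-percentile threshold corresponds to Method 1 of the POT approach.

        import pandas as pd

        def block_maxima(daily: pd.Series, window_days: int = 1) -> pd.Series:
            """Annual maxima of rolling w-day rainfall totals (BM method).
            Assumes a complete daily series with a DatetimeIndex."""
            totals = daily.rolling(window_days).sum() if window_days > 1 else daily
            return totals.groupby(totals.index.year).max()

        def peaks_over_threshold(daily: pd.Series, q: float = 0.95) -> pd.Series:
            """All wet-day rainfalls above the q-quantile threshold (POT, Method 1)."""
            wet = daily[daily > 0]
            threshold = wet.quantile(q)
            return wet[wet > threshold]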

  10. Practical methods of optimization

    CERN Document Server

    Fletcher, R

    2013-01-01

    Fully describes optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also provides theoretical background offering insights into how methods are derived. This edition offers rev

  11. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    Full Text Available The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined in most cases on a finite set. While in combinatorial optimization the point is in developing efficient algorithms and heuristics for solving specified types of problems, the extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  12. Optimizing MRI of small joints and extremities.

    Science.gov (United States)

    Thomas, M S; Greenwood, R; Nolan, C; Malcolm, P N; Toms, A P

    2014-10-01

    Obtaining optimal images of small joints using magnetic resonance imaging (MRI) can be technically challenging. The aim of this review is to outline the practical aspects of MRI of small joints, with reference to the underlying physical principles. Although the most important contribution to successful imaging of small joints comes from the magnet field strength and design of the receiver coil, there are a number of factors to balance including the signal-to-noise ratio, image resolution, and acquisition times. We discuss strategies to minimize artefacts from movement, inhomogeneity, chemical shift, and fat suppression. As with all MRI, each strategy comes at a price, but the benefits and costs of each approach can be fine-tuned to each combination of joint, receiver coil, and MRI machine. Copyright © 2014 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  13. Homotopy optimization methods for global optimization.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; O'Leary, Dianne P. (University of Maryland, College Park, MD)

    2005-12-01

    We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
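
    The description above translates almost directly into code. A minimal sketch, assuming a convex blend homotopy h(x; lam) = (1 - lam) * g(x) + lam * f(x) between an easy function g with a known minimizer and the target f; HOPE's ensemble of perturbed points is omitted.

        import numpy as np
        from scipy.optimize import minimize

        def hom(f_target, g_easy, x0, n_steps=20):
            """Homotopy Optimization Method (sketch): minimize the blend
            h(x; lam) = (1 - lam) * g(x) + lam * f(x) for increasing lam,
            warm-starting each solve at the previous minimizer."""
            x = np.asarray(x0, dtype=float)
            for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
                res = minimize(lambda z, l=lam: (1 - l) * g_easy(z) + l * f_target(z), x)
                x = res.x
            return x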

  14. Optimal regionalization of extreme value distributions for flood estimation

    Science.gov (United States)

    Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.

    2018-01-01

    Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.

  15. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2008-01-01

    Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.

  16. Extremely Randomized Machine Learning Methods for Compound Activity Prediction.

    Science.gov (United States)

    Czarnecki, Wojciech M; Podlewska, Sabina; Bojarski, Andrzej J

    2015-11-09

    Speed, a relatively low requirement for computational resources and high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data also in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called 'extremely randomized methods' (Extreme Entropy Machine and Extremely Randomized Trees) for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their 'non-extreme' competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but they were also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.

  17. Inverse-Free Extreme Learning Machine With Optimal Information Updating.

    Science.gov (United States)

    Li, Shuai; You, Zhu-Hong; Guo, Hongliang; Luo, Xin; Zhao, Zhong-Qiu

    2016-05-01

    The extreme learning machine (ELM) has drawn intensive research attention due to its effectiveness in solving many machine learning problems. However, the matrix inversion operation involved in the algorithm is computationally prohibitive and limits the wide application of ELM in many scenarios. To overcome this problem, in this paper we propose an inverse-free ELM to incrementally increase the number of hidden nodes, and update the connection weights progressively and optimally. Theoretical analysis proves the monotonic decrease of the training error with the proposed updating procedure and also proves the optimality of every updating step. Extensive numerical experiments show the effectiveness and accuracy of the proposed algorithm.

  18. Analytical methods of optimization

    CERN Document Server

    Lawden, D F

    2006-01-01

    Suitable for advanced undergraduates and graduate students, this text surveys the classical theory of the calculus of variations. It takes the approach most appropriate for applications to problems of optimizing the behavior of engineering systems. Two of these problem areas have strongly influenced this presentation: the design of the control systems and the choice of rocket trajectories to be followed by terrestrial and extraterrestrial vehicles.Topics include static systems, control systems, additional constraints, the Hamilton-Jacobi equation, and the accessory optimization problem. Prereq

  19. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data also in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’—Extreme Entropy Machine and Extremely Randomized Trees—for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were not only found to improve the efficiency of the classification of bioactive compounds, but they were also proved to be less computationally complex, requiring fewer steps to perform an optimization procedure.

  20. Optimality Criterion Methods in Structural Optimization.

    Science.gov (United States)

    1982-10-01


  1. Particle Swarm Optimization Based Selective Ensemble of Online Sequential Extreme Learning Machine

    OpenAIRE

    Yang Liu; Bo He; Diya Dong; Yue Shen,; Tianhong Yan; Rui Nian; Amaury Lendasse

    2015-01-01

    A novel particle swarm optimization based selective ensemble (PSOSEN) of online sequential extreme learning machine (OS-ELM) is proposed. It is based on the original OS-ELM with an adaptive selective ensemble framework. Two novel insights are proposed in this paper. First, a novel selective ensemble algorithm referred to as particle swarm optimization selective ensemble is proposed, noting that PSOSEN is a general selective ensemble method which is applicable to any learning algorithms, inclu...

  2. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

    Merging logic and mathematics in deductive inference: an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks. . . it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in

  3. Polynomial Optimization Methods

    NARCIS (Netherlands)

    P. van Eeghen (Piet)

    2013-01-01

    This thesis is an exposition of ideas and methods that help understanding the problem of minimizing a polynomial over a basic closed semi-algebraic set. After the introduction of some theory on mathematical tools such as sums of squares, nonnegative polynomials and moment matrices,

  4. Optimization methods in structural design

    CERN Document Server

    Rothwell, Alan

    2017-01-01

    This book offers an introduction to numerical optimization methods in structural design. Employing a readily accessible and compact format, the book presents an overview of optimization methods, and equips readers to properly set up optimization problems and interpret the results. A ‘how-to-do-it’ approach is followed throughout, with less emphasis at this stage on mathematical derivations. The book features spreadsheet programs provided in Microsoft Excel, which allow readers to experience optimization ‘hands-on.’ Examples covered include truss structures, columns, beams, reinforced shell structures, stiffened panels and composite laminates. For the last three, a review of relevant analysis methods is included. Exercises, with solutions where appropriate, are also included with each chapter. The book offers a valuable resource for engineering students at the upper undergraduate and postgraduate level, as well as others in the industry and elsewhere who are new to these highly practical techniques.Whi...

  5. Population Set based Optimization Method

    Science.gov (United States)

    Manekar, Y.; Verma, H. K.

    2013-09-01

    In this paper a population set based optimization method is proposed for solving some benchmark functions and also the optimal power flow problem, namely the 'combined economic and emission dispatch (CEED)' problem with multiple objective functions. The algorithm takes into consideration all the equality and inequality constraints. The improvement in system performance is based on reduction in the cost of power generation and active power loss. The proposed algorithm has been compared with other methods such as GA and PSO reported in the literature. The results are impressive and encouraging. The study shows that the proposed method yields better solutions for CEED problems.

  6. Optimized Extreme Learning Machine for Power System Transient Stability Prediction Using Synchrophasors

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2015-01-01

    Full Text Available A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model. Finally, the parameters of the model are optimized using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Based on test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.

  7. Reduced basis method for source mask optimization

    CERN Document Server

    Pomplun, J; Burger, S; Schmidt, F; Tyminski, J; Flagello, D; Toshiharu, N; 10.1117/12.866101

    2010-01-01

    Image modeling and simulation are critical to extending the limits of leading edge lithography technologies used for IC making. Simultaneous source mask optimization (SMO) has become an important objective in the field of computational lithography. SMO is considered essential to extending immersion lithography beyond the 45nm node. However, SMO is computationally extremely challenging and time-consuming. The key challenges are due to run time vs. accuracy tradeoffs of the imaging models used for the computational lithography. We present a new technique to be incorporated in the SMO flow. This new approach is based on the reduced basis method (RBM) applied to the simulation of light transmission through the lithography masks. It provides a rigorous approximation to the exact lithographical problem, based on fully vectorial Maxwell's equations. Using the reduced basis method, the optimization process is divided into an offline step and an online step. In the offline step, an RBM model with variable geometrical param...

  8. Optimization of Medical Teaching Methods

    Directory of Open Access Journals (Sweden)

    Wang Fei

    2015-12-01

    Full Text Available In order to achieve the goals of medical education and adapt to changes in the way doctors work, medical teaching methods must be reformed in step with the rapid development of modern science and technology. Based on the current status of teaching in medical colleges, this paper analyzes the formation, development and characteristics of medical teaching methods, and provides a theoretical basis for teachers and administrators in medical education to change their teaching ideas and concepts comprehensively and thoroughly and to achieve optimal medical teaching methods.

  9. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    Science.gov (United States)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
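
    A sketch of the kernel-ELM tuning loop described above, using the standard kernel formulation alpha = (I/C + K)^-1 y; random search over (C, sigma) stands in for the firefly algorithm, whose details the abstract does not specify, and binary labels in {-1, +1} are assumed.

        import numpy as np

        def rbf_kernel(A, B, sigma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def kelm_fit(X, y, C, sigma):
            """Kernel ELM: solve (I/C + K) alpha = y for the dual weights."""
            K = rbf_kernel(X, X, sigma)
            return np.linalg.solve(np.eye(len(X)) / C + K, y)

        def kelm_predict(X_new, X, alpha, sigma):
            return rbf_kernel(X_new, X, sigma) @ alpha

        def tune_kelm(X_tr, y_tr, X_val, y_val, n_trials=50, seed=0):
            """Random search over (C, sigma) on validation accuracy; this
            stands in for the firefly algorithm used in the paper."""
            rng = np.random.default_rng(seed)
            best = None
            for _ in range(n_trials):
                C = 10 ** rng.uniform(-2, 4)
                sigma = 10 ** rng.uniform(-2, 2)
                alpha = kelm_fit(X_tr, y_tr, C, sigma)
                pred = kelm_predict(X_val, X_tr, alpha, sigma)
                acc = np.mean(np.sign(pred) == y_val)   # labels in {-1, +1}
                if best is None or acc > best[0]:
                    best = (acc, C, sigma)
            return best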

  10. Discretization methods for extremely anisotropic diffusion

    NARCIS (Netherlands)

    B. van Es (Bram); B. Koren (Barry); H.J. de Blank

    2013-01-01

    textabstractIn fusion plasmas there is extreme anisotropy due to the high temperature and large magnetic field strength. This causes diffusive processes, heat diffusion and energy/momentum loss due to viscous friction, to effectively be aligned with the magnetic field lines. This alignment leads

  11. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the

  12. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  13. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis.

    Science.gov (United States)

    Li, Qiang; Chen, Huiling; Huang, Hui; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts.

  14. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Directory of Open Access Journals (Sweden)

    Xiguang Li

    2017-01-01

    Full Text Available Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population will be divided into two subpopulations, and different subpopulations will undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm seems much superior to other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, and the effect is considerable. At last, we use different fusion methods to form different fusion classifiers, and the fusion classifiers can achieve higher accuracy and better stability to some extent.

  15. Prediction Interval Construction for Byproduct Gas Flow Forecasting Using Optimized Twin Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xueying Sun

    2017-01-01

    Full Text Available Prediction of byproduct gas flow is of great significance to gas system scheduling in iron and steel plants. To quantify the associated prediction uncertainty, a two-step approach based on optimized twin extreme learning machine (ELM) is proposed to construct prediction intervals (PIs). In the first step, the connection weights of the twin ELM are pretrained using a pair of symmetric weighted objective functions. In the second step, output weights of the twin ELM are further optimized by particle swarm optimization (PSO). The objective function is designed to comprehensively evaluate PIs based on their coverage probability, width, and deviation. The capability of the proposed method is validated using four benchmark datasets and two real-world byproduct gas datasets. The results demonstrate that the proposed approach constructs higher quality prediction intervals than the other three conventional methods.
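
    The PSO objective described above scores candidate intervals by coverage, width and deviation. A sketch of those three ingredients follows; the under-coverage penalty and any weighting between the terms are assumptions, since the abstract does not give the exact formula.

        import numpy as np

        def pi_quality(y, lower, upper, alpha=0.1):
            """Standard PI scores: coverage probability (PICP), normalized
            average width (PINAW), and a coverage-deviation penalty, the three
            ingredients of the paper's PSO objective."""
            y, lower, upper = map(np.asarray, (y, lower, upper))
            covered = (y >= lower) & (y <= upper)
            picp = covered.mean()
            pinaw = (upper - lower).mean() / (y.max() - y.min())
            deviation = max(0.0, (1 - alpha) - picp)   # penalize under-coverage only
            return picp, pinaw, deviation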

  16. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Science.gov (United States)

    Li, Xiguang; Zhao, Liang; Gong, Changqing; Liu, Xiaojing

    2017-01-01

    Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population will be divided into two subpopulations, and different subpopulations will undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm seems much superior to other algorithms. At the same time, the proposed algorithm can be applied to optimize extreme learning machine (ELM) for biomedical classification problems, and the effect is considerable. At last, we use different fusion methods to form different fusion classifiers, and the fusion classifiers can achieve higher accuracy and better stability to some extent. PMID:29085425

  17. DESIGN OPTIMIZATION METHOD USED IN MECHANICAL ENGINEERING

    National Research Council Canada - National Science Library

    SCURTU Iacob Liviu; BODEA Sanda Mariana; JURCO Ancuta Nadia

    2016-01-01

    This paper presents an optimization study in mechanical engineering. The first part of the research describes the structural optimization method used, followed by the presentation of several optimization studies conducted in recent years...

  18. Genetic algorithm optimization of grating coupled near-field interference lithography systems at extreme numerical apertures

    Science.gov (United States)

    Bourke, Levi; Blaikie, Richard J.

    2017-09-01

    Grating coupled near-field interference lithography has the ability to produce deep-subwavelength interference patterns. Simulation of these systems is very computationally intensive. An inverse design procedure employing a genetic algorithm is utilized here to massively reduce the computational load and allow for the design of systems capable of interference at extremely high numerical apertures. This method is used to optimize systems with an interference pattern with a half pitch of λ/40, corresponding to a numerical aperture of 20. It is also used to demonstrate interference of higher |m| diffraction orders.

  19. Aero Engine Component Fault Diagnosis Using Multi-Hidden-Layer Extreme Learning Machine with Optimized Structure

    Directory of Open Access Journals (Sweden)

    Shan Pang

    2016-01-01

    Full Text Available A new aero gas turbine engine gas path component fault diagnosis method based on a multi-hidden-layer extreme learning machine with optimized structure (OM-ELM) is proposed. OM-ELM employs quantum-behaved particle swarm optimization to automatically obtain the optimal network structure according to both the root mean square error on the training data set and the norm of the output weights. The proposed method is applied to a handwritten recognition data set and a gas turbine engine diagnostic application, and is compared with the basic ELM, the multi-hidden-layer ELM, and two state-of-the-art deep learning algorithms: the deep belief network and the stacked denoising autoencoder. Results show that, with an optimized network structure, OM-ELM obtains better test accuracy in both applications and is more robust to sensor noise. Meanwhile, it controls the model complexity and needs far fewer hidden nodes than the multi-hidden-layer ELM, thus saving computer memory and making it more efficient to implement. All these advantages make our method an effective and reliable tool for engine component fault diagnosis.

  20. Improved Extreme-Scenario Extraction Method For The Economic Dispatch Of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    Optimization techniques with good characterization of the uncertainties in the modern power system enable the system operators to trade off between security and sustainability. This paper proposes an extreme-scenario extraction based robust optimization method for the economic dispatch (ED) of active distribution networks with renewables. The extreme scenarios are selected from the historical data using the improved minimum volume enclosing ellipsoid (MVEE) algorithm to guarantee the security of system operation while avoiding frequent switching of the transformer tap. It is theoretically proved...

  1. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For coal slurry pipeline blockage prediction problem, through the analysis of actual scene, it is determined that the pressure prediction from each measuring point is the premise of pipeline blockage prediction. Kernel function of support vector machine is introduced into extreme learning machine, the parameters are optimized by particle swarm algorithm, and blockage prediction method based on particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. The actual test data from HuangLing coal gangue power plant are used for simulation experiments and compared with support vector machine prediction model optimized by particle swarm algorithm (PSOSVM) and kernel function extreme learning machine prediction model (KELM). The results prove that mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to prediction model based on PSOSVM in speed and accuracy and superior to KELM prediction model in accuracy.

  2. A Method of Selecting the Block Size of BMM for Estimating Extreme Loads in Engineering Vehicles

    Directory of Open Access Journals (Sweden)

    Jixin Wang

    2016-01-01

    Full Text Available Extreme loads have a significant effect on the fatigue damage of components. The block maximum method (BMM) is widely used to estimate extreme values in various fields. Selecting a reasonable block size for BMM is crucial to ensure that proper extreme values are extracted to form the extreme sample. Aiming at this issue, this study proposed a comprehensive evaluation approach based on the multiple-criteria decision making (MCDM) method to select a proper block size. A wheel loader with six sections in one operating cycle was used as an example. First, the spading sections of each operating cycle were extracted and connected, as extreme loads often occur in that section. Then the extreme sample was obtained by BMM for fitting the generalized extreme value (GEV) distribution. The Kolmogorov-Smirnov (K-S) test, Pearson's Chi-Square (χ2) test, and the average deviation in the Probability Distribution Function (PDF) were selected as the fitting tests. The comprehensive weights are calculated by the maximum entropy principle. Finally, the optimal block size corresponding to the minimum comprehensive evaluation indicator is obtained, and the result exhibited a good fitting effect. The proposed method can also be flexibly used in various situations to select a block size.
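
    A sketch of the block-size scoring step in Python, using scipy's GEV fit and the Kolmogorov-Smirnov statistic; the paper additionally combines the chi-square test and the PDF deviation with entropy-derived weights, which are omitted here.

        import numpy as np
        from scipy import stats

        def score_block_size(signal, block_len):
            """Fit a GEV to the block maxima of the given block length and
            return the K-S statistic (one of the paper's fitting tests)."""
            signal = np.asarray(signal)
            n_blocks = len(signal) // block_len
            maxima = signal[:n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
            shape, loc, scale = stats.genextreme.fit(maxima)
            return stats.kstest(maxima, 'genextreme', args=(shape, loc, scale)).statistic

        def best_block_size(signal, candidates=(50, 100, 200, 500)):
            """Choose the candidate block size with the smallest K-S statistic."""
            return min(candidates, key=lambda b: score_block_size(signal, b))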

  3. A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training.

    Science.gov (United States)

    Yang, Zhiyong; Zhang, Taohong; Zhang, Dezheng

    2016-02-01

    Extreme learning machine (ELM) is a novel and fast learning method to train single-layer feed-forward networks. However, due to the demand for a larger number of hidden neurons, the prediction speed of ELM is not fast enough. An evolutionary ELM with differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative and exploitative power and reach better performance. The design and implementation of the DECRO algorithm are discussed in detail. DE, CRO and DECRO are applied to ELM training respectively. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtain better performance for training ELM than both DE and CRO.
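
    A sketch of the DE component (rand/1/bin), which in the paper is hybridized with coral reef optimization; the CRO layer and the ELM-specific encoding are omitted. The loss argument would map a flattened hidden-layer weight vector to the validation error of the resulting ELM.

        import numpy as np

        def de_optimize(loss, dim, pop=30, gens=100, F=0.5, CR=0.9, seed=0):
            """Plain differential evolution, rand/1/bin scheme: mutate with a
            scaled difference of two random members, binomial crossover, and
            greedy replacement."""
            rng = np.random.default_rng(seed)
            X = rng.uniform(-1, 1, size=(pop, dim))
            fit = np.array([loss(x) for x in X])
            for _ in range(gens):
                for i in range(pop):
                    idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
                    a, b, c = X[idx]
                    mutant = a + F * (b - c)
                    cross = rng.random(dim) < CR
                    trial = np.where(cross, mutant, X[i])
                    f = loss(trial)
                    if f < fit[i]:                      # keep the better of the two
                        X[i], fit[i] = trial, f
            return X[fit.argmin()], fit.min()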

  4. A comparative assessment of statistical methods for extreme weather analysis

    Science.gov (United States)

    Schlögl, Matthias; Laaha, Gregor

    2017-04-01

    Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption of the threshold excess approach (employing partial duration series, PDS) being superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas an opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may outperform the possible gain of information from including additional extreme events by far. This effect was neither visible from the square-root criterion, nor from standardly used graphical diagnosis (mean residual life plot), but from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing AMS and PDS approaches simultaneously in order to select the best suited approach. This will make the analyses more robust, in cases where threshold selection and dependency introduces biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend conditional performance measures that focus

  5. OPTIMIZATION METHODS AND SEO TOOLS

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2014-06-01

    Full Text Available SEO is the activity of optimizing Web pages or whole sites in order to make them more search engine friendly, thus getting higher positions in search results. Search engine optimization (SEO) involves designing, writing, and coding a website in a way that helps to improve the volume and quality of traffic to your website from people using search engines. While Search Engine Optimization is the focus of this booklet, keep in mind that it is one of many marketing techniques. A brief overview of other marketing techniques is provided at the end of this booklet.

  6. Methods to Optimize for Energy Efficiency

    Science.gov (United States)

    2011-05-01

    Prototype Representation & Design Exploration Methods 7 EXERGY -BASED METHODS Feature Presentation Historically: • Energy always an implicit...thermal components Exergy -Based Design Methods: Specify all vehicle design requirements as work potential ( exergy destruction, entropy...PS, ECS, and AFS-A Optimal Vehicles Predicted for Four Optimization Metrics Traditional: • Minimize Gross Takeoff Weight Exergy Methods

  7. A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    Science.gov (United States)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2014-10-01

    The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.

  8. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    Science.gov (United States)

    Li, Qiang; Zhao, Xuehua; Cai, ZhenNao; Tong, Changfei; Liu, Wenbin; Tian, Xin

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts. PMID:28246543

  9. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2017-01-01

    Full Text Available In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update the current positions of population in the discrete searching space, thus getting the optimal feature subset for the better classification purpose based on KELM. The proposed approach is compared against the original GA and GWO on the two common disease diagnosis problems in terms of a set of performance metrics, including classification accuracy, sensitivity, specificity, precision, G-mean, F-measure, and the size of selected features. The simulation results have proven the superiority of the proposed method over the other two competitive counterparts.

  10. Optimal adaptation to extreme rainfalls in current and future climate

    DEFF Research Database (Denmark)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. The optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.

  11. Optimal adaptation to extreme rainfalls under climate change

    Science.gov (United States)

    Rosbjerg, Dan

    2017-04-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.

  12. Optimal adaptation to extreme rainfalls in current and future climate

    Science.gov (United States)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
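
    The economic optimum described in this record reduces to a one-dimensional minimization once the two log-linear cost relations are fixed. A sketch in Python with illustrative coefficients (A, beta, B and gamma are assumptions, not values from the paper):

        from scipy.optimize import minimize_scalar

        def optimal_design_return_period(A=80.0, beta=0.8, B=1.0, gamma=0.6):
            """Log-linear cost relations imply expected flood damage ~ A*T**-beta
            and annual adaptation cost ~ B*T**gamma; the optimal design return
            period T minimizes their sum. A climate factor > 1 would scale the
            damage term upward over time."""
            total = lambda T: A * T ** -beta + B * T ** gamma
            res = minimize_scalar(total, bounds=(1.0, 1000.0), method='bounded')
            return res.x, total(res.x)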

  13. A topological derivative method for topology optimization

    DEFF Research Database (Denmark)

    Norato, J.; Bendsøe, Martin P.; Haber, RB

    2007-01-01

    We propose a fictitious domain method for topology optimization in which a level set of the topological derivative field for the cost function identifies the boundary of the optimal design. We describe a fixed-point iteration scheme that implements this optimality criterion subject to a volumetric...

  14. An Enhanced Grey Wolf Optimization Based Feature Selection Wrapped Kernel Extreme Learning Machine for Medical Diagnosis

    OpenAIRE

    Qiang Li; Huiling Chen; Hui Huang; Xuehua Zhao; ZhenNao Cai; Changfei Tong; Wenbin Liu; Xin Tian

    2017-01-01

    In this study, a new predictive framework is proposed by integrating an improved grey wolf optimization (IGWO) and kernel extreme learning machine (KELM), termed as IGWO-KELM, for medical diagnosis. The proposed IGWO feature selection approach is used for the purpose of finding the optimal feature subset for medical data. In the proposed approach, genetic algorithm (GA) was firstly adopted to generate the diversified initial positions, and then grey wolf optimization (GWO) was used to update ...

  15. Biologically inspired optimization methods an introduction

    CERN Document Server

    Wahde, M

    2008-01-01

    The advent of rapid, reliable and cheap computing power over the last decades has transformed many, if not most, fields of science and engineering. The multidisciplinary field of optimization is no exception. First of all, with fast computers, researchers and engineers can apply classical optimization methods to problems of larger and larger size. In addition, however, researchers have developed a host of new optimization algorithms that operate in a rather different way than the classical ones, and that allow practitioners to attack optimization problems where the classical methods are either not applicable or simply too costly (in terms of time and other resources) to apply.This book is intended as a course book for introductory courses in stochastic optimization algorithms (in this book, the terms optimization method and optimization algorithm will be used interchangeably), and it has grown from a set of lectures notes used in courses, taught by the author, at the international master programme Complex Ada...

  16. Particle Swarm Optimization Based Selective Ensemble of Online Sequential Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available A novel particle swarm optimization based selective ensemble (PSOSEN) of online sequential extreme learning machine (OS-ELM) is proposed. It is based on the original OS-ELM with an adaptive selective ensemble framework. Two novel insights are proposed in this paper. First, a novel selective ensemble algorithm referred to as particle swarm optimization selective ensemble is proposed, noting that PSOSEN is a general selective ensemble method which is applicable to any learning algorithms, including batch learning and online learning. Second, an adaptive selective ensemble framework for online learning is designed to balance the accuracy and speed of the algorithm. Experiments for both regression and classification problems with UCI data sets are carried out. Comparisons between OS-ELM, simple ensemble OS-ELM (EOS-ELM), genetic algorithm based selective ensemble (GASEN) of OS-ELM, and the proposed particle swarm optimization based selective ensemble of OS-ELM empirically show that the proposed algorithm achieves good generalization performance and fast learning speed.

  17. The selective dynamical downscaling method for extreme-wind atlases

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    A selective dynamical downscaling method is developed to obtain extreme-wind atlases for large areas. The method is general, efficient and flexible. The method consists of three steps: (i) identifying storm episodes for a particular area, (ii) downscaling of the storms using mesoscale modelling and (iii) post-processing. The post-processing generalizes the winds from the mesoscale modelling to standard conditions, i.e. 10-m height over a homogeneous surface with roughness length of 5 cm. The generalized winds are then used to calculate the 50-year wind using the annual maximum method for each mesoscale grid point. The generalization of the mesoscale winds through the post-processing provides a framework for data validation and for applying further the mesoscale extreme winds at specific places using microscale modelling. The results are compared with measurements from two areas with different...

  18. Game theory and extremal optimization for community detection in complex dynamic networks.

    Directory of Open Access Journals (Sweden)

    Rodica Ioana Lung

    Full Text Available The detection of evolving communities in dynamic complex networks is a challenging problem that has recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic community detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.

  19. Game theory and extremal optimization for community detection in complex dynamic networks.

    Science.gov (United States)

    Lung, Rodica Ioana; Chira, Camelia; Andreica, Anca

    2014-01-01

    The detection of evolving communities in dynamic complex networks is a challenging problem that has recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic community detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.
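
    The game-theoretic idea can be illustrated with a deliberately simplified best-response loop: each node repeatedly moves to the community where most of its neighbours sit, a crude stand-in for the profit/fitness function of the paper, and the loop stops at a Nash-like state where no node wants to move. The extremal-optimization component of the actual method is omitted here.

        import random
        from collections import defaultdict

        def best_response_communities(adj, n_rounds=20, seed=0):
            """Toy best-response dynamics for community detection.

            adj: dict mapping each node to the set of its neighbours.
            """
            rng = random.Random(seed)
            label = {v: v for v in adj}                # start from singleton communities
            nodes = list(adj)
            for _ in range(n_rounds):
                rng.shuffle(nodes)
                changed = False
                for v in nodes:
                    counts = defaultdict(int)
                    for u in adj[v]:
                        counts[label[u]] += 1          # community popularity among neighbours
                    best = max(counts, key=counts.get, default=label[v])
                    if counts and counts[best] > counts[label[v]]:
                        label[v] = best                # node's best response
                        changed = True
                if not changed:                        # equilibrium: no profitable deviation
                    break
            return label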

  20. Tax optimization methods of international companies

    OpenAIRE

    Černá, Kateřina

    2015-01-01

    This thesis focuses on methods of tax optimization used by international companies. These international concerns endeavor to minimize tax. The disparity of tax systems gives these companies the possibility of shifting profits and tax bases. First, the thesis compares the differences between tax optimization, aggressive tax planning and tax evasion. Among the areas of optimization methods described in this thesis belong tax residency, dividends, royalty payments, tra...

  1. Closed Loop Optimal Control of a Stewart Platform Using an Optimal Feedback Linearization Method

    Directory of Open Access Journals (Sweden)

    Hami Tourajizadeh

    2016-06-01

    Full Text Available Optimal control of a Stewart robot is performed in this paper using a sequential optimal feedback linearization method considering the jack dynamics. One of the most important applications of a Stewart platform is tracking a machine along a specific path or from a defined point to another. However, the control of these robots is more challenging than that of serial robots since their dynamics are extremely complicated and non-linear. In addition, saving energy, together with achieving the desired accuracy, is one of the most desirable objectives. In this paper, a proper non-linear optimal control is employed to gain the maximum accuracy by applying the minimum force distribution to the jacks. The dynamics of the jacks are included to achieve more accurate results. Optimal control is performed for a six-DOF hexapod robot and its accuracy is increased using a sequential feedback linearization method, while its energy consumption is optimized using the LQR method for the linearized system. The efficiency of the proposed optimal control is verified by simulating a six-DOF hexapod robot in MATLAB; the actual position of the end-effector, its velocity, the initial and final forces of the jacks, and the length and velocity of the jacks are obtained and then compared with open-loop and non-optimized systems. Analytical comparisons show the efficiency of the proposed methods.
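
    The LQR step on the linearized system reduces to solving a continuous algebraic Riccati equation. The sketch below does this with SciPy for a toy double integrator that stands in for one linearized jack axis; the A, B, Q, R matrices are illustrative choices, not the paper's six-DOF model.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            """State-feedback gain K for u = -K x, minimizing the LQR cost."""
            P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
            return np.linalg.solve(R, B.T @ P)

        # Toy double integrator: position/velocity of a single linearized jack axis.
        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])   # weight tracking error more than velocity
        R = np.array([[0.1]])      # penalize actuator effort (jack force)
        K = lqr_gain(A, B, Q, R)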

  2. Clinical application of lower extremity CTA and lower extremity perfusion CT as a method of diagnostic for lower extremity atherosclerotic obliterans

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Il Bong; Dong, Kyung Rae [Dept. Radiological Technology, Gwangju Health University, Gwangju (Korea, Republic of); Goo, Eun Hoe [Dept. Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2016-11-15

    The purpose of this study was to assess the clinical application of lower extremity CTA and lower extremity perfusion CT as diagnostic methods for lower extremity atherosclerotic obliterans. From January to July 2016, 30 patients (mean age, 68) were studied with lower extremity CTA and lower extremity perfusion CT. 128-channel multi-detector row CT scans were acquired with a CT scanner (SOMATOM Definition Flash, Siemens Medical Solutions, Germany) for both lower extremity perfusion CT and lower extremity CTA. Acquired images were reconstructed with a 3D workstation (Leonardo, Siemens, Germany). The sites of lower extremity arterial occlusive and stenotic lesions detected were the superficial femoral artery (36.6%), popliteal artery (23.4%), external iliac artery (16.7%), common femoral artery (13.3%), and peroneal artery (10%). The mean total DLP of lower extremity perfusion CT and lower extremity CTA was 650 mGy·cm and 675 mGy·cm, respectively. Lower extremity perfusion CT and lower extremity CTA were found never to be two examinations revealing exactly the same lesions. Future development of lower extremity perfusion CT software programs suggests possible clinical applications. (author)

  3. Optimized temperature control system integrated into a micro direct methanol fuel cell for extreme environments

    Science.gov (United States)

    Zhang, Qian; Wang, Xiaohong; Zhu, Yiming; Zhou, Yan'an; Qiu, Xinping; Liu, Litian

    This paper reports a micro direct methanol fuel cell (μDMFC) integrated with a heater and a temperature sensor to realize temperature control. A thermal model for the μDMFC is set up based on heat transfer and emission mechanisms. Several patterns of the heater are designed and simulated to produce a more uniform temperature profile. The μDMFC with the optimized temperature control system, which has a better temperature distribution, is fabricated using MEMS technologies, assembled with polydimethylsiloxane (PDMS) material and polymethylmethacrylate (PMMA) holders, and characterized in two ways: one with different currents applied and the other with different methanol velocities. A μDMFC integrated with a heater of a different pattern, and another with aluminum holders, are also assembled and tested to verify the heating effect and the temperature-maintaining ability of the packaging material. This work would make it possible for a μDMFC to enhance its performance by adjusting to an optimal temperature and to be employed in extreme environments, such as severe winter, polar regions, outer space, deserts and deep sea areas.

  4. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    OpenAIRE

    Fu Yu; Mu Jiong; Duan Xu Liang

    2016-01-01

    By means of a model of extreme learning machine based upon DE optimization, this article centers on the optimization thinking behind such a model as well as its application in the field of listed companies' financial position classification. It shows that the improved extreme learning machine algorithm based upon DE optimization outperforms the traditional extreme learning machine algorithm in comparison. Meanwhile, this article also introduces certain research...

  5. Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier

    Directory of Open Access Journals (Sweden)

    C. V. Subbulakshmi

    2015-01-01

    Full Text Available Medical data classification is a prime data mining problem that has been discussed for over a decade and has attracted several researchers around the world. Most classifiers are designed to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm, which integrates the successful exploration mechanism called the self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is experimented on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.

  6. A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.

    Science.gov (United States)

    Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong

    2017-01-01

    It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images generate so much information that manual analysis is time-consuming and tedious. Among various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain, and it is now widely applied in hospitals, because it is helpful in diagnosis, prognosis, and pre-surgical and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employs the 2D discrete wavelet transform (DWT) and calculates the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10x10-fold cross validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection.
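
    The feature-extraction stage is easy to prototype. The sketch below computes Shannon entropies of the 2D DWT subbands with PyWavelets; the histogram-based entropy and the Haar wavelet are illustrative choices, not necessarily those of the paper.

        import numpy as np
        import pywt  # PyWavelets

        def dwt_entropy_features(image, wavelet="haar", level=2):
            """Shannon entropies of 2D DWT subbands as a compact feature vector."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
            feats = []
            for band in subbands:
                hist, _ = np.histogram(np.abs(band), bins=64)
                p = hist / max(hist.sum(), 1)
                p = p[p > 0]
                feats.append(float(-np.sum(p * np.log2(p))))  # entropy in bits
            return np.asarray(feats)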

  7. Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier.

    Science.gov (United States)

    Subbulakshmi, C V; Deepa, S N

    2015-01-01

    Medical data classification is a prime data mining problem that has been discussed for over a decade and has attracted several researchers around the world. Most classifiers are designed to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm, which integrates the successful exploration mechanism called the self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is experimented on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.

  8. The optimal homotopy asymptotic method engineering applications

    CERN Document Server

    Marinca, Vasile

    2015-01-01

    This book emphasizes in detail the applicability of the Optimal Homotopy Asymptotic Method to various engineering problems. It is a continuation of the book “Nonlinear Dynamical Systems in Engineering: Some Approximate Approaches”, published by Springer in 2011, and it contains a large number of practical models from various fields of engineering such as classical and fluid mechanics, thermodynamics, nonlinear oscillations, electrical machines, and so on. The main structure of the book consists of 5 chapters. The first chapter is introductory, while the second chapter is devoted to a short history of the development of homotopy methods, including the basic ideas of the Optimal Homotopy Asymptotic Method. The last three chapters, from Chapter 3 to Chapter 5, introduce three distinct alternatives of the Optimal Homotopy Asymptotic Method with illustrative applications to nonlinear dynamical systems. The third chapter deals with the first alternative of our approach with two iterations. Five application...

  9. A method optimization study for atomic absorption ...

    African Journals Online (AJOL)

    A sensitive, reliable and relatively fast method has been developed for the determination of total zinc in insulin by atomic absorption spectrophotometry. This study was designed to optimize the procedures of the existing methods. Spectrograms of both standard and sample solutions of zinc were recorded by measuring ...

  10. Optimizing How We Teach Research Methods

    Science.gov (United States)

    Cvancara, Kristen E.

    2017-01-01

    Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize students' ability to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…

  11. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove to be overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed differs from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, the CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families, allowing all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts, since by focusing on only one airfoil family they inherently limited the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not a local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs, as well as to including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

  12. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM, see, e.g. [2]) in order to develop methods for topology design for applications where conservation laws are critical such that element-wise conservation in the discretized models has a high priority. This encompasses problems involving for example mass and heat transport. The work described ... the well known Reuss lower bound. [1] Bendsøe, M.P.; Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag. [2] Versteeg, H. K.; W. Malalasekera 1995: An introduction to Computational Fluid Dynamics: the Finite Volume Method. London: Longman.

  13. Extreme learning machine based optimal embedding location finder for image steganography.

    Directory of Open Access Journals (Sweden)

    Hayfaa Abdulzahra Atee

    Full Text Available In image steganography, determining the optimum location for embedding the secret message precisely with minimum distortion of the host medium remains a challenging issue. Yet, an effective approach for the selection of the best embedding location with least deformation is far from being achieved. To attain this goal, we propose a novel approach for image steganography with high performance, where the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. This ELM is first trained on a part of an image or any host medium before being tested in the regression mode. This allowed us to choose the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting while training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, confusion matrices, and mean square error (MSE). The modified ELM is found to outperform the existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to the existing state-of-the-art methods.

  14. Extreme learning machine based optimal embedding location finder for image steganography.

    Science.gov (United States)

    Atee, Hayfaa Abdulzahra; Ahmad, Robiah; Noor, Norliza Mohd; Rahma, Abdul Monem S; Aljeroudi, Yazan

    2017-01-01

    In image steganography, determining the optimum location for embedding the secret message precisely with minimum distortion of the host medium remains a challenging issue. Yet, an effective approach for the selection of the best embedding location with least deformation is far from being achieved. To attain this goal, we propose a novel approach for image steganography with high performance, where the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. This ELM is first trained on a part of an image or any host medium before being tested in the regression mode. This allowed us to choose the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting while training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, confusion matrices, and mean square error (MSE). The modified ELM is found to outperform the existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to the existing state-of-the-art methods.

  15. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants as well as their drawbacks. The authors show how the weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche
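
    The core loop of harmony search is short enough to show in full. The following Python sketch minimizes a function over box bounds using the three standard operators (memory consideration, pitch adjustment, random selection); the parameter values are common defaults, not prescriptions from the book.

        import numpy as np

        def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iters=5000, seed=0):
            """Minimal harmony search minimizing f over box bounds [(lo, hi), ...]."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            hm = rng.uniform(lo, hi, size=(hms, len(bounds)))        # harmony memory
            cost = np.apply_along_axis(f, 1, hm)
            for _ in range(iters):
                new = np.empty(len(bounds))
                for j in range(len(bounds)):
                    if rng.random() < hmcr:                          # memory consideration
                        new[j] = hm[rng.integers(hms), j]
                        if rng.random() < par:                       # pitch adjustment
                            new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                    else:                                            # random selection
                        new[j] = rng.uniform(lo[j], hi[j])
                new = np.clip(new, lo, hi)
                c = f(new)
                worst = np.argmax(cost)
                if c < cost[worst]:                                  # replace worst harmony
                    hm[worst], cost[worst] = new, c
            return hm[np.argmin(cost)], cost.min()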

  16. State space Newton's method for topology optimization

    DEFF Research Database (Denmark)

    Evgrafov, Anton

    2014-01-01

    We introduce a new algorithm for solving certain classes of topology optimization problems, which enjoys fast local convergence normally achieved by the full space methods while working in a smaller reduced space. The computational complexity of Newton’s direction finding subproblem in the algori...

  17. Optimal boarding method for airline passengers

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab

    2008-02-01

    Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimal. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
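
    A generic version of such a search is sketched below: Metropolis-style moves over boarding orders, accepting occasional uphill swaps to escape local minima. The `cost` function is a placeholder for a boarding-time simulation like the luggage-dominated model described above; this is not Steffen's exact implementation.

        import math
        import random

        def metropolis_order_search(cost, n_passengers, iters=20000, temp=1.0, seed=0):
            """Markov chain Monte Carlo search over boarding orders (permutations).

            cost: callable mapping a boarding order to a simulated boarding time.
            """
            rng = random.Random(seed)
            order = list(range(n_passengers))
            rng.shuffle(order)
            c = cost(order)
            best, best_c = order[:], c
            for _ in range(iters):
                i, j = rng.randrange(n_passengers), rng.randrange(n_passengers)
                order[i], order[j] = order[j], order[i]       # propose swapping two passengers
                c_new = cost(order)
                if c_new <= c or rng.random() < math.exp((c - c_new) / temp):
                    c = c_new                                 # accept the proposal
                    if c < best_c:
                        best, best_c = order[:], c
                else:
                    order[i], order[j] = order[j], order[i]   # reject: undo the swap
            return best, best_c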

  18. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  19. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole

    Computational procedures for topology optimization of continuum problems using a material distribution method are typically based on the application of the finite element method (FEM) (see, e.g. [1]). In the present work we study a computational framework based on the finite volume method (FVM, see, e.g. [2]) in order to develop methods for topology design for applications where conservation laws are critical such that element-wise conservation in the discretized models has a high priority. This encompasses problems involving for example mass and heat transport. The work described ... the arithmetic and harmonic average with the latter being the well known Reuss lower bound. [1] Bendsøe, MP and Sigmund, O 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag. [2] Versteeg, HK and Malalasekera, W 1995: An introduction to Computational Fluid Dynamics: the Finite Volume Method. London: Longman.

  20. On data-based analysis of extreme events in multidimensional non-stationary meteorological systems: Based on advanced time series analysis methods and general extreme value theory

    Science.gov (United States)

    Kaiser, O.; Horenko, I.

    2012-04-01

    Given an observed series of extreme events, we are interested in capturing the significant trend in the underlying dynamics. Since the character of such systems is strongly non-linear and non-stationary, the detection of significant characteristics and their attribution is a complex task. A standard tool in statistics to describe the probability distribution of extreme events is general extreme value (GEV) theory. While the univariate stationary GEV distribution is well studied, and fitting the data to the model parameters using likelihood techniques and Bayesian methods is standard (Coles, '01; Davison, Ramesh, '00), analysis of non-stationary extremes is based on a priori assumptions about the trend behavior (e.g. a linear combination of external factors/polynomials (Coles, '01)). Additionally, analysis of multivariate, non-stationary extreme events still remains a strong challenge, since analysis without strong a priori assumptions is limited to low-dimensional cases (Nychka, Cooley, '09). We introduce the FEM-GEV approach, which is based on GEV and advanced finite element time series analysis methods (FEM) (Horenko, '10-11). The main idea of the FEM framework is to interpolate adaptively the corresponding non-stationary model parameters by a linear convex combination of K local stationary models and a switching process between them. To apply the FEM framework to a time series of extremes, we extend FEM by defining the model parameters with respect to the GEV distribution; as external factors we consider global atmospheric patterns. The optimal number of local models K and the best combination of external factors are estimated using the Akaike information criterion. The FEM-GEV approach allows us to study the non-stationary dynamics of the GEV parameters without a priori assumptions on the trend behavior and also captures the non-linear, non-stationary dependence on external factors. The series of extremes has by definition no connection to a real time scale; for this reason the results of FEM-GEV can be only

  1. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro

    2012-01-16

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity.

  2. Optimization methods for finding minimum energy paths

    Science.gov (United States)

    Sheppard, Daniel; Terrell, Rye; Henkelman, Graeme

    2008-04-01

    A comparison of chain-of-states based methods for finding minimum energy pathways (MEPs) is presented. In each method, a set of images along an initial pathway between two local minima is relaxed to find a MEP. We compare the nudged elastic band (NEB), doubly nudged elastic band, string, and simplified string methods, each with a set of commonly used optimizers. Our results show that the NEB and string methods are essentially equivalent and the most efficient methods for finding MEPs when coupled with a suitable optimizer. The most efficient optimizer was found to be a form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno method in which the approximate inverse Hessian is constructed globally for all images along the path. The use of a climbing-image allows for finding the saddle point while representing the MEP with as few images as possible. If a highly accurate MEP is desired, it is found to be more efficient to descend from the saddle to the minima than to use a chain-of-states method with many images. Our results are based on a pairwise Morse potential to model rearrangements of a heptamer island on Pt(111), and plane-wave based density functional theory to model a rollover diffusion mechanism of a Pd tetramer on MgO(100) and dissociative adsorption and diffusion of oxygen on Au(111).
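
    For orientation, a single relaxation step of a plain NEB (central-difference tangent, no climbing image) looks as follows; this is the textbook variant, simplified relative to the improved-tangent and doubly nudged schemes compared in the paper.

        import numpy as np

        def neb_step(images, grad, k=1.0, step=0.01):
            """One relaxation step of a plain nudged elastic band.

            images: (n, d) array of configurations; the endpoints are fixed minima.
            grad:   callable returning the true potential gradient at a configuration.
            """
            new = images.copy()
            for i in range(1, len(images) - 1):
                tau = images[i + 1] - images[i - 1]
                tau /= np.linalg.norm(tau)                   # simple tangent estimate
                g = grad(images[i])
                g_perp = g - np.dot(g, tau) * tau            # true force, nudged
                f_spring = k * (np.linalg.norm(images[i + 1] - images[i])
                                - np.linalg.norm(images[i] - images[i - 1]))
                new[i] = images[i] - step * g_perp + step * f_spring * tau
            return new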

  3. A Gradient Taguchi Method for Engineering Optimization

    Science.gov (United States)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at less computational cost. Therefore, the proposed algorithm has good robustness and fast convergence speed compared to some hybrid genetic algorithms.

  4. Dead time optimization method for power converter

    Science.gov (United States)

    Deselaers, C.; Bergmann, U.; Gronwald, F.

    2013-07-01

    This paper introduces a method for dead time optimization in variable speed motor drive systems. The aim of the method is to reduce the conduction time of the freewheeling diode to a minimum without generating cross conduction. This results in lower losses, improved EMC, and less overshooting of the phase voltage. The principle of the method is to detect incipient cross currents without adding additional components, such as resistors or inductances, to the half bridge. Only the wave shape of the phase voltage needs to be monitored during switching. This is illustrated by an application of the method to a real power converter.

  5. Fast numerical methods for robust optimal design

    Science.gov (United States)

    Xiu, Dongbin

    2008-06-01

    A fast numerical approach for robust design optimization is presented. The core of the method is based on the state-of-the-art fast numerical methods for stochastic computations with parametric uncertainty. These methods employ generalized polynomial chaos (gPC) as a high-order representation for random quantities and a stochastic Galerkin (SG) or stochastic collocation (SC) approach to transform the original stochastic governing equations to a set of deterministic equations. The gPC-based SG and SC algorithms are able to produce highly accurate stochastic solutions with (much) reduced computational cost. It is demonstrated that they can serve as efficient forward problem solvers in robust design problems. Possible alternative definitions for robustness are also discussed. Traditional robust optimization seeks to minimize the variance (or standard deviation) of the response function while optimizing its mean. It can be shown that although variance can be used as a measure of uncertainty, it is a weak measure and may not fully reflect the output variability. Subsequently a strong measure in terms of the sensitivity derivatives of the response function is proposed as an alternative robust optimization definition. Numerical examples are provided to demonstrate the efficiency of the gPC-based algorithms, in both the traditional weak measure and the newly proposed strong measure.

  6. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
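
    The hierarchical transformation can be illustrated with mpi4py: the flat communicator is split into groups, the broadcast runs first among group leaders and then inside each group, so every underlying broadcast involves far fewer processes. The sketch below assumes the root rank is a group leader (e.g. rank 0) and is a simplification of the paper's technique, not its exact algorithm.

        from mpi4py import MPI

        def hierarchical_bcast(data, comm, group_size=64, root=0):
            """Two-level broadcast: among group leaders first, then within each group.

            Assumes root is a group leader (e.g. root=0).
            """
            rank = comm.Get_rank()
            color = rank // group_size                  # group index of this rank
            group = comm.Split(color=color, key=rank)   # intra-group communicator
            is_leader = group.Get_rank() == 0
            # Leaders of all groups form their own communicator; others get COMM_NULL.
            leaders = comm.Split(color=0 if is_leader else MPI.UNDEFINED, key=rank)
            if is_leader:
                data = leaders.bcast(data, root=root // group_size)  # level 1: leaders
            data = group.bcast(data, root=0)                         # level 2: in-group
            return data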

  7. A new computer method for temperature measurement based on an optimal control problem

    NARCIS (Netherlands)

    Damean, N.; Houkes, Z.; Regtien, Paulus P.L.

    1996-01-01

    A new computer method to measure extreme temperatures is presented. The method reduces the measurement of the unknown temperature to solving an optimal control problem, using a numerical computer. Based on this method, a new device for temperature measurement is built. It consists of a

  8. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity.

  9. A Photometric Method for Discovering Extremely Metal Poor Stars

    Science.gov (United States)

    Miller, Adam

    2015-01-01

    I present a new non-parametric machine-learning method for predicting stellar metallicity ([Fe/H]) based on photometric colors from the Sloan Digital Sky Survey (SDSS). The method is trained using a large sample of ~150k stars with SDSS spectra and atmospheric parameter estimates (Teff, log g, and [Fe/H]) from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g 2, corresponding to the stars for which the SSPP estimates are most reliable, the method is capable of predicting [Fe/H] with a typical scatter of ~0.16 dex. This scatter is smaller than the typical uncertainty associated with [Fe/H] measurements from a low-resolution spectrum. The method is suitable for the discovery of extremely metal poor (EMP) stars ([Fe/H] 50%), but low efficiency (E ~ 10%), samples of EMP star candidates can be generated from the sources with the lowest predicted [Fe/H]. To improve the efficiency of EMP star discovery, an alternative machine-learning model is constructed where the number of non-EMP stars is down-sampled in the training set, and a new regression model is fit. This alternate model improves the efficiency of EMP candidate selection by a factor of ~2. To test the efficacy of the model, I have obtained low-resolution spectra of 56 candidate EMP stars. I measure [Fe/H] for these stars using the well calibrated Ca II K line method, and compare our spectroscopic measurements to those from the machine learning model. Once applied to wide-field surveys, such as SDSS, Pan-STARRS, and LSST, the model will identify thousands of previously unknown EMP stars.

  10. Extremity exams optimization for computed radiography; Otimizacao de exames de extremidade para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana Luiza M.; Alves, Allan Felipe F.; Velo, Alexandre F.; Miranda, Jose Ricardo A., E-mail: analuiza@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2013-08-15

    Computed radiography (CR) has become the most used device for image acquisition since its introduction in the 80s. The detection and early diagnosis obtained through CR examinations are important for the successful treatment of diseases of the hand. However, the norms used for the optimization of these images are based on international protocols. Therefore, it is necessary to determine charts of radiographic techniques for the CR system which provide a safe medical diagnosis with doses as low as reasonably achievable. The objective of this work is to develop a homogeneous extremity phantom to be used in the calibration process of radiographic techniques. In the construction of the simulator, a tissue-quantification algorithm was developed using Matlab®. In this process, the average thicknesses of bone and soft tissue in the hand region of an anthropomorphic simulator were quantified, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite), using a technique of mask application and removal of the Gaussian histogram corresponding to the tissues of interest. The homogeneous phantom was used to calibrate the x-ray beam. The techniques were implemented in a calibrated anthropomorphic hand phantom. The images were evaluated by specialists in radiology by the VGA method. Skin entrance surface doses (SED) corresponding to each technique were estimated with their respective tube charges. The thicknesses of the simulator materials that constitute the homogeneous phantom determined in this study were 19.01 mm of acrylic and 0.81 mm of aluminum. A better image quality with doses as low as reasonably achievable was obtained, with dose and tube charge decreased by around 53.35% and 37.78%, respectively, compared with those normally used in the clinical routine of diagnostic radiology at HCFMB-UNESP. (author)

  11. Layout optimization with algebraic multigrid methods

    Science.gov (United States)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.

  12. Energy methods for hypersonic trajectory optimization

    Science.gov (United States)

    Chou, Han-Chang

    A family of near-optimal guidance laws for the ascent and descent trajectories between earth surface and earth orbit of fully reusable single-stage-to-orbit launch vehicles is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. The method is based on selecting propulsion system modes and flight-path parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. For ascent trajectories of vehicles employing hydrogen fuel, because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is then a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit. Both airbreathing/rocket and all rocket propulsion systems are considered. For airbreathing/rocket vehicles, the optimal propulsion switching Mach numbers are determined and the use of liquid oxygen augmentation is investigated. For the vehicles with all rocket power, the desirability of tripropellant systems is investigated. In addition, time and heat load is minimized as well. For descent trajectories, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated, as well as minimum time and minimum temperature paths. The results show that the optimization methodology can be used to derive a wide variety of near-optimal launch vehicle trajectories.

  13. Computational Methods for Design, Control and Optimization

    Science.gov (United States)

    2007-10-01

    Sensitivity Computations, 49 (2005), pp. 1889-1903. 8. Y. Cao, T. L. Herdman and Y. Xu, A Hybrid Collocation Method for Volterra Integral Equations ... scale Riccati equations that arise in a variety of control and estimation problems. The results imply that, even when the Riccati equations are used for ... and optimization of hybrid systems governed by partial differential equations that are typical in aerospace systems. The focus of the research is on non

  14. Layout optimization using the homogenization method

    Science.gov (United States)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

  15. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is decision making about selecting appropriate stocks for investment and choosing an optimal portfolio. This process is done through assessment of risk and expected return. On the other hand, in the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as risk measures. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from a normal distribution. This paper, introducing conditional value at risk (CVaR) as a measure of risk in a nonparametric framework, offers the optimal portfolio for a given expected return, and this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected from the top 50 companies on the Tehran Stock Exchange during the winter of 1392, considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
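
    A nonparametric (scenario-based) CVaR portfolio can be written as the Rockafellar-Uryasev linear program over historical returns; a compact SciPy version is sketched below. The confidence level and target return are illustrative parameters, and the formulation is the standard one rather than the exact setup of the paper.

        import numpy as np
        from scipy.optimize import linprog

        def min_cvar_portfolio(returns, beta=0.95, target=0.0):
            """Minimum-CVaR portfolio from historical scenarios.

            returns: (m, n) matrix of m observed return scenarios for n assets.
            Solves  min  t + 1/((1-beta) m) * sum(u)
                    s.t. u_j >= -r_j.w - t, u >= 0, sum(w) = 1, w >= 0,
                         mean return >= target.
            """
            m, n = returns.shape
            c = np.concatenate([np.zeros(n), [1.0],
                                np.full(m, 1.0 / ((1.0 - beta) * m))])
            # -r_j.w - t - u_j <= 0 for every scenario j
            A_ub = np.hstack([-returns, -np.ones((m, 1)), -np.eye(m)])
            b_ub = np.zeros(m)
            # mean return constraint: -mu.w <= -target
            mu = returns.mean(axis=0)
            A_ub = np.vstack([A_ub, np.concatenate([-mu, [0.0], np.zeros(m)])])
            b_ub = np.append(b_ub, -target)
            A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)])[None, :]
            bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                          bounds=bounds)
            return res.x[:n], res.fun   # portfolio weights and optimal CVaR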

  16. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Hai Shen

    2014-01-01

    Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though an individual organism dies, the species will not perish; furthermore, the species will have a stronger ability to adapt to the environment and achieve better evolution. LSO simulates the biological lifecycle process through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases to demonstrate optimal performance and stability, and the mechanical design problem was used to test the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  17. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time. Therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, a Monte Carlo simulation was performed to compare the performance of these four tests with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performance of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models having nonstationary location and/or scale parameters. Simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
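
    The four criteria are cheap to compute once each candidate GEV model has been fitted by maximum likelihood; a minimal Python version is given below. The LRT applies to nested pairs, e.g. a stationary GEV versus a GEV with a linear trend in the location parameter.

        import numpy as np
        from scipy import stats

        def information_criteria(loglik, k, n):
            """AIC, AICc and BIC from a fitted model's maximized log-likelihood.

            loglik: maximized log-likelihood; k: number of parameters; n: sample size.
            """
            aic = 2 * k - 2 * loglik
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = k * np.log(n) - 2 * loglik
            return aic, aicc, bic

        def likelihood_ratio_test(loglik_simple, loglik_complex, extra_params):
            """LRT p-value for nested models (chi-squared with df = extra_params)."""
            lr = 2 * (loglik_complex - loglik_simple)
            return stats.chi2.sf(lr, df=extra_params)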

  18. Swarm Optimization Methods in Microwave Imaging

    Directory of Open Access Journals (Sweden)

    Andrea Randazzo

    2012-01-01

    Full Text Available Swarm intelligence denotes a class of new stochastic algorithms inspired by the collective social behavior of natural entities (e.g., birds, ants, etc.). Such approaches have been proven to be quite effective in several application fields, ranging from intelligent routing to image processing. In recent years, they have also been successfully applied in electromagnetics, especially for antenna synthesis, component design, and microwave imaging. In this paper, the application of swarm optimization methods to microwave imaging is discussed, and some recent imaging approaches based on such methods are critically reviewed.

  19. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.
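
    The essence of the auxiliary-function bound can be stated compactly; the LaTeX fragment below restates it in the notation suggested by the abstract (dynamics x' = f(x) on a compact set X, observable Phi, auxiliary function V), as a sketch of the framework rather than the paper's full result.

        % For \dot{x} = f(x) on a compact set X, the term f \cdot \nabla V = dV/dt
        % time-averages to zero along bounded trajectories, so a pointwise
        % inequality yields a bound on every long-time average:
        \[
          \overline{\Phi}
          := \limsup_{T \to \infty} \frac{1}{T} \int_0^T \Phi\bigl(x(t)\bigr)\, dt
          \;\le\; \sup_{x \in X} \Bigl[ \Phi(x) + f(x) \cdot \nabla V(x) \Bigr]
          \quad \text{for any } V \in C^1 .
        \]
        % Minimizing the right-hand side over V is the convex problem mentioned in
        % the abstract; strong duality makes the resulting bounds arbitrarily sharp.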

  20. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    Free Material Optimization (FMO) is a powerful approach for structural optimization in which the design parametrization allows the entire elastic stiffness tensor to vary freely at each point of the design domain. The only requirements imposed on the stiffness tensor are mild necessary conditions for physical attainability, namely that it be symmetric and positive semidefinite. FMO problems have been studied for the last two decades in many articles that led to the development of a wide range of models, methods, and theories. As the design variables in FMO are the local ... of the formulations in most of the studies is indeed limited to FMO models for two- and three-dimensional structures. To the best of our knowledge, such models have not been proposed for general laminated shell structures, which nowadays have extensive industrial applications. This thesis has two main goals. The first goal

  1. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    ... example, this research proved the sustainability of the proposed integrated optimization parameters of transport systems. This approach could be applied not only to MTS, but also to other transport systems. Originality. The bases of the complex optimization of transport presented here are a new system of universal scientific methods and approaches that ensure high accuracy and authenticity of calculations in the simulation of transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases for conducting the complex optimization of transport makes it possible to create a scientific tool that enables automated simulation and calculation of the technical and economic structure and technology of the work of different transport objects, including infrastructure.

  2. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, herein focus is set on how optimal energy management can be achieved in a decentralised distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual ... can be described as a transportation problem. As a basis for usage in energy management systems, methods and scenarios for solving non-linear transportation problems in multi-agent systems are introduced and evaluated. On this premise a method is presented to solve a generation units dispatching ... in a decentralised distributed system. This requires extensive communication between neighbouring nodes. A layered multi-agent system is introduced to provide low-latency communication based on a software-bus system in order to efficiently solve optimisation problems.

  3. An Efficient Method for Traffic Sign Recognition Based on Extreme Learning Machine.

    Science.gov (United States)

    Huang, Zhiyong; Yu, Yuanlong; Gu, Jason; Liu, Huaping

    2017-04-01

    This paper proposes a computationally efficient method for traffic sign recognition (TSR). This proposed method consists of two modules: 1) extraction of histogram of oriented gradient variant (HOGv) feature and 2) a single classifier trained by extreme learning machine (ELM) algorithm. The presented HOGv feature keeps a good balance between redundancy and local details such that it can represent distinctive shapes better. The classifier is a single-hidden-layer feedforward network. Based on ELM algorithm, the connection between input and hidden layers realizes the random feature mapping while only the weights between hidden and output layers are trained. As a result, layer-by-layer tuning is not required. Meanwhile, the norm of output weights is included in the cost function. Therefore, the ELM-based classifier can achieve an optimal and generalized solution for multiclass TSR. Furthermore, it can balance the recognition accuracy and computational cost. Three datasets, including the German TSR benchmark dataset, the Belgium traffic sign classification dataset and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) dataset, are used to evaluate this proposed method. Experimental results have shown that this proposed method obtains not only high recognition accuracy but also extremely high computational efficiency in both training and recognition processes in these three datasets.
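
    The classifier itself is compact enough to sketch in NumPy: a fixed random hidden layer followed by a ridge-regularized least-squares solve for the output weights, which is where the norm-of-output-weights term in the cost enters. The layer size, activation and regularization constant below are illustrative defaults, not the paper's settings.

        import numpy as np

        class ELMClassifier:
            """Minimal extreme learning machine: random feature mapping plus
            ridge-regularized least-squares output weights."""

            def __init__(self, n_hidden=500, C=1.0, seed=0):
                self.n_hidden, self.C = n_hidden, C
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                """X: (n_samples, n_features); y: integer labels 0..K-1."""
                n_classes = int(y.max()) + 1
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed
                self.b = self.rng.normal(size=self.n_hidden)                # fixed
                H = np.tanh(X @ self.W + self.b)       # random feature mapping
                T = np.eye(n_classes)[y]               # one-hot targets
                # beta = (H'H + I/C)^{-1} H'T : only the output layer is trained
                self.beta = np.linalg.solve(
                    H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ T)
                return self

            def predict(self, X):
                return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)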

  4. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that DNA and protein sequence repositories are being bombarded with new sequence information. Databases continue to report a Moore's-law-like growth trajectory in their sizes, roughly doubling every 18 months. In what seems to be a paradigm shift, individual projects are now capable of generating billions of raw sequences that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequence homology detection, are becoming the mainstay of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications, including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or "homologous") on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment at large scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed-memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores.
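
    The "optimal alignment algorithms" referred to above are dynamic-programming methods such as Smith-Waterman local alignment; a compact (and deliberately unoptimized) reference version is sketched below with an assumed simple scoring scheme:

    ```python
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
        """Optimal local alignment score (Smith-Waterman), linear gap penalty."""
        H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        best = 0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                s = match if a[i - 1] == b[j - 1] else mismatch
                # best of: start fresh, diagonal extension, or a gap
                H[i][j] = max(0, H[i-1][j-1] + s, H[i-1][j] + gap, H[i][j-1] + gap)
                best = max(best, H[i][j])
        return best

    # e.g. smith_waterman("ACACACTA", "AGCACACA") -> positive similarity score
    ```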

  5. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.

  6. Hybrid intelligent optimization methods for engineering problems

    Science.gov (United States)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under given conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct general patterns. Moreover, mathematical modeling of natural phenomena is almost always based on differential equations, which are constructed from relative increments among the factors related to the yield. The gradients of these increments are therefore essential for searching the yield space. However, the yield landscape is rarely simple and is mostly multi-modal. Another issue is differentiability: engineering design problems are usually nonlinear, and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms with multiple initiation points. A significant difference from gradient-based methods is the nature of the search: randomness is essential to the search in GA and PSO, hence they are also called stochastic optimization methods. These algorithms are simple and robust and have high fidelity. However, they suffer from similar defects, such as premature convergence, reduced accuracy, or long computational times. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms (see the sketch below). Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After a close scrutiny of the diversity concept based on qualification and
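
    As an illustration of the diversity discussion above, a Gaussian mutation operator and a simple diversity measure for a real-coded population can be sketched as follows; the adaptive-rate rule is one plausible heuristic, not the author's:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def diversity(pop):
        """Mean distance of individuals from the population centroid."""
        return np.mean(np.linalg.norm(pop - pop.mean(axis=0), axis=1))

    def mutate(pop, rate=0.05, sigma=0.5):
        """Gaussian mutation: each gene is perturbed with probability `rate`."""
        mask = rng.random(pop.shape) < rate
        return pop + mask * rng.normal(0.0, sigma, pop.shape)

    def adaptive_rate(pop, base=0.05, target=1.0):
        """One plausible heuristic: raise the mutation rate as diversity drops."""
        return base * max(1.0, target / max(diversity(pop), 1e-12))
    ```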

  7. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime-algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
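
    A hedged sketch of two of the PSO additions described above — an exponentially decaying inertia weight and an extra attractive-force term in the velocity update — could read as follows; all coefficient names and values are illustrative assumptions rather than the paper's tuned settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def pso_velocity(v, x, p_best, g_best, t, w0=0.9, tau=50.0,
                     c1=2.0, c2=2.0, c3=0.5):
        """One velocity update with an exponentially decaying inertia weight
        and an extra attractive-force term pulling toward the global best."""
        w = w0 * np.exp(-t / tau)                 # exponential velocity decay
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        attract = c3 * (g_best - x)               # attractive force component
        return (w * v + c1 * r1 * (p_best - x)
                + c2 * r2 * (g_best - x) + attract)
    ```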

  8. Computational methods applied to wind tunnel optimization

    Science.gov (United States)

    Lindsay, David

    This report describes computational methods developed for optimizing the nozzle of a three-dimensional subsonic wind tunnel. This requires determining a shape that delivers flow to the test section, typically with a speed increase of 7 or more and a velocity uniformity of 0.25% or better, in a compact length, without introducing boundary layer separation. The need for high precision, smooth solutions, and three-dimensional modeling required the development of special computational techniques. These include: (1) alternative formulations to Neumann and Dirichlet boundary conditions, to deal with overspecified, ill-posed, or cyclic problems, and to reduce the discrepancy between numerical solutions and boundary conditions; (2) modification of the Finite Element Method to obtain solutions with numerically exact conservation properties; (3) a Matlab implementation of general-degree Finite Element solvers for various element designs in two and three dimensions, exploiting vector indexing to obtain optimal efficiency; (4) derivation of optimal quadrature formulas for integration over simplexes in two and three dimensions, and development of a program for semi-automated generation of formulas for any degree and dimension; (5) a modification of a two-dimensional boundary layer formulation to provide accurate flow conservation in three dimensions, and modification of the algorithm to improve stability; (6) development of multi-dimensional spline functions to achieve smoother solutions in three dimensions by post-processing, new three-dimensional elements for C1 basis functions, and a program to assist in the design of elements with higher continuity; and (7) a development of ellipsoidal harmonics and Lamé's equation, with generalization to any dimension and a demonstration that Cartesian, cylindrical, spherical, spheroidal, and sphero-conical harmonics are all limiting cases. The report includes a description of the Finite Difference, Finite Volume, and domain remapping

  9. Circular SAR Optimization Imaging Method of Buildings

    Directory of Open Access Journals (Sweden)

    Wang Jian-feng

    2015-12-01

    Full Text Available Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets owing to its 360° observation capability. In this study, an optimal-orientation CSAR imaging algorithm for buildings is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results, and from this comparison the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle sub-aperture echoes and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are combined into an all-around image using incoherent fusion techniques. A polarimetric decomposition method is used to decompose the all-around image and successfully retrieve the edge information of buildings. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.

  10. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students: by joining them, students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight of each activity based on the three chosen criteria: soft skills, interesting activities, and performances. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. The ZOGP model was then analyzed using LINGO software version 15.0. Two priorities were considered. The first priority, minimizing the budget for the activities, is achieved, since the total budget can be reduced by RM233.00; the total budget to implement the selected activities is therefore RM11,195.00. The second priority, selecting the co-curricular activities, is also achieved: 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activities selection problems.
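
    The AHP weighting step described above reduces to extracting the principal eigenvector of a pairwise comparison matrix; a small sketch with an invented 3x3 matrix for the three criteria:

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for (soft skills, interesting
    # activities, performances); the entries are illustrative, not survey data.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # AHP priority weights (sum to 1)
    n = A.shape[0]
    CI = (eigvals.real.max() - n) / (n - 1)  # consistency index; small is good
    ```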

  11. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  12. Special finite-difference methods for extremely anisotropic diffusion

    NARCIS (Netherlands)

    B. van Es (Bram); B. Koren (Barry); H.J. de Blank

    2012-01-01

    textabstractIn fusion plasmas there is extreme anisotropy due to the high temperature and large magnetic field strength. This causes diffusive processes, heat diffusion and energy/momentum loss due to viscous friction, to effectively be aligned with the magnetic field lines. This alignment leads

  13. Quality control methods in accelerometer data processing: identifying extreme counts.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available Accelerometers are designed to measure plausible human activity; however, extremely high count values (EHCV) have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous physical activity (VPA). We estimated quantiles to analyse the distribution of all positive accelerometer count values obtained from 9005 seven-year-old children participating in the UK Millennium Cohort Study (MCS). A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis was conducted to compare daily VPA estimates using three approaches to accounting for EHCV. Using our proposed threshold of ≥ 11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised < 1% of total non-zero counts. Only 11 MCS children (0.12% of the sample) returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥ 19,442 counts/minute), were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p < 0.0001). Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.
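
    The threshold derivation described above (differentiating the estimated quantile function of positive counts) can be sketched roughly as follows; the quantile step and slope criterion are assumptions, not the authors' exact procedure:

    ```python
    import numpy as np

    def ehcv_threshold(counts, q_step=1e-4, slope_factor=50.0):
        """Illustrative sketch: estimate the quantile function of non-zero
        counts, differentiate it numerically, and flag the point where its
        slope explodes relative to the median slope."""
        counts = np.sort(counts[counts > 0])
        q = np.arange(q_step, 1.0, q_step)
        Q = np.quantile(counts, q)              # empirical quantile function
        dQ = np.gradient(Q, q)                  # numerical derivative
        jump = np.argmax(dQ > slope_factor * np.median(dQ))
        return Q[jump] if dQ[jump] > slope_factor * np.median(dQ) else Q[-1]
    ```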

  14. Annual Rainfall Maxima: Large-Deviation Alternative to Extreme-Value and Extreme-Excess Methods

    Science.gov (United States)

    Veneziano, D.; Langousis, A.; Lepore, C.

    2009-04-01

    Contrary to common belief, Gumbel's extreme value (EV) and Pickands' extreme excess (EE) theories do not generally apply to rainfall maxima at the annual level. This is true not just for long averaging durations d, as one would expect, but also in the high-resolution limit as d → 0. We reach these conclusions by studying the annual maxima of scale-invariant rainfall models with a multiplicative structure. We find that for d → 0 the annual maximum rainfall intensity in d, Iyear(d), has a generalized extreme value (GEV) distribution with a shape parameter k that is significantly higher than that predicted by Gumbel's theory and is always in the EV2 range. Under the same conditions, the excess above levels close to the annual maximum has a generalized Pareto (GP) distribution with a parameter k that is always higher than that predicted by Pickands' theory. The proper tool to obtain these results is large deviation (LD) theory, a branch of probability that has been largely ignored in stochastic hydrology. In the classic EV and EE settings one considers a single random variable X and studies either the distribution of the maximum of n independent copies of X as n → ∞, or the distribution of the excess Xu = (X - u | X ≥ u) as the threshold u → ∞. A well-known result is that, if under renormalization these distributions approach non-degenerate limits, then the distribution of the maximum is GEV(k), the distribution of the excess above u is GP(k), and the common shape parameter k depends on the tail behavior of X. When applied to rainfall extremes, X is typically taken to be I(d), the rainfall intensity in a generic d interval. The problem with the EV approach is that the number of d intervals in one year, n(d) = 1 yr/d, may be too small for convergence of Iyear(d) to the asymptotic GEV distribution. Likewise, in the EE approach, thresholds u on the order of the annual maximum may be too low for convergence of the excess to the asymptotic GP

  15. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    This thesis investigates new optimization methods for structural topology optimization problems. The aim of topology optimization is to find the optimal design of a structure; the physical problem is modelled as a nonlinear optimization problem. This powerful tool was initially developed for the classical minimum compliance problem. Two state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem: a Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP), both developed to exploit the specific mathematical...

  16. On Best Practice Optimization Methods in R

    Directory of Open Access Journals (Sweden)

    John C. Nash

    2014-09-01

    Full Text Available R (R Core Team 2014) provides a powerful and flexible system for statistical computations. It has a default-install set of functionality that can be expanded by the use of several thousand add-in packages as well as user-written scripts. While R is itself a programming language, it has proven relatively easy to incorporate programs in other languages, particularly Fortran and C. Success, however, can lead to its own costs:
    • Users face a confusion of choice when trying to select packages in approaching a problem.
    • A need to maintain workable examples using early methods may mean some tools offered as a default may be dated.
    • In an open-source project like R, how to decide what tools offer "best practice" choices, and how to implement such a policy, present a serious challenge.
    We discuss these issues with reference to the tools in R for nonlinear parameter estimation (NLPE) and optimization, though for the present article `optimization` will be limited to function minimization of essentially smooth functions with at most bounds constraints on the parameters. We will abbreviate this class of problems as NLPE. We believe that the concepts proposed are transferable to other classes of problems seen by R users.

  17. Method for optimizing harvesting of crops

    DEFF Research Database (Denmark)

    2010-01-01

    In order, e.g., to optimize the harvesting of crops of the kind that may be self-dried on a field prior to a harvesting step (116, 118), there is disclosed a method of providing a mobile unit (102) for working (114, 116, 118) the field with crops, equipping the mobile unit (102) with crop biomass measuring means (108) and with crop moisture content measurement means (106), measuring crop biomass (107a, 107b) and crop moisture content (109a, 109b) of the crop, providing a spatial crop biomass and crop moisture content characteristics map of the field based on the biomass data (107a, 107b) provided from moving the mobile unit on the field and the moisture content (109a, 109b), and determining an optimised drying time (104a, 104b) prior to the following harvesting step (116, 118) in response to the spatial crop biomass and crop moisture content characteristics map and in response to a weather forecast...

  18. HRSG design method optimizes power plant efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, V. (ABCO (US))

    1991-05-01

    Heat recovery steam generators (HRSGs) are widely used in cogeneration and combined-cycle power plants. Simulating the performance of the HRSG system at design and off-design conditions helps the designer optimize the overall plant efficiency; it also helps in the selection of major auxiliary equipment. Conventional simulation of HRSG design and off-design performance is a tedious task, since several variables are involved. However, with the simplified approach presented in this article, the engineer can acquire information on the performance of the HRSG without actually doing the mechanical design: the engineer does not need to size the tubes or determine the fin configuration. The method can also be used for heat balance studies and in the preparation of the HRSG specification.

  19. Adaptive extraction method for trend term of machinery signal based on extreme-point symmetric mode decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Yong; Jiang, Wan-lu; Kong, Xiang-dong [Yanshan University, Hebei (China)

    2017-02-15

    In mechanical fault diagnosis and condition monitoring, extracting and eliminating the trend term of a machinery signal is necessary. In this paper, an adaptive extraction method for the trend term of a machinery signal, based on Extreme-point symmetric mode decomposition (ESMD), is proposed. The method fully utilizes ESMD, including its self-adaptive decomposition feature and optimal fitting strategy. Its effectiveness and practicability are tested through simulation analysis and validation on measured data. Results indicate that the method can adaptively extract various trend terms hidden in machinery signals and has commendable self-adaptability. Moreover, the extraction results are better than those of empirical mode decomposition.

  20. A note on: A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    Science.gov (United States)

    Trunfio, Roberto

    2015-06-01

    In a recent article, Guo, Cheng and Wang proposed a randomized search algorithm, called modified generalized extremal optimization (MGEO), to solve the quay crane scheduling problem for container groups under the assumption that schedules are unidirectional. The authors claim that the proposed algorithm is capable of finding new best solutions with respect to a well-known set of benchmark instances taken from the literature. However, as shown in this note, there are some errors in their work that can be detected by analysing the Gantt charts of two solutions provided by MGEO. In addition, some comments on the method used to evaluate the schedule corresponding to a task-to-quay crane assignment and on the search scheme of the proposed algorithm are provided. Finally, to assess the effectiveness of the proposed algorithm, the computational experiments are repeated and additional computational experiments are provided.

  1. Integrating Software-Architecture-Centric Methods into Extreme Programming (XP)

    National Research Council Canada - National Science Library

    Nord, Robert L; Tomayko, James E; Wojcik, Rob

    2004-01-01

    ...). These methods include the Architecture Tradeoff Analysis Method (Registered Tradename), the SEI Quality Attribute Workshop, the SEI Attribute-Driven Design method, the SEI Cost Benefit Analysis Method, and SEI Active Reviews for Intermediate Design...

  2. Optimization of Binder Jetting Using Taguchi Method

    Science.gov (United States)

    Shrestha, Sanjay; Manogharan, Guha

    2017-03-01

    Among several additive manufacturing (AM) methods, binder jetting has recently advanced in its ability to process metal powders through selective deposition of binder on a powder bed, followed by curing, sintering, and infiltration. This study analyzes the impact of various binder jetting process parameters on the mechanical properties of sintered AM metal parts. The Taguchi optimization method has been employed to determine the optimum AM parameters for improving transverse rupture strength (TRS), specifically: binder saturation, layer thickness, roll speed, and feed-to-powder ratio. The effects of the selected process parameters on the TRS performance of sintered SS 316L samples are studied with the ASTM (American Society for Testing and Materials) standard test method. It was found that binder saturation and feed-to-powder ratio were the most critical parameters, which reflects the strong influence of the binder-powder interaction and the density of the powder bed on the resulting mechanical properties. This article serves as an aid in understanding the optimum process parameters for binder jetting of SS 316L.
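
    As a sketch of the Taguchi analysis described above, the larger-is-better signal-to-noise ratio and a per-factor main-effects summary can be computed as follows (the two-factor design and TRS values are invented for illustration):

    ```python
    import numpy as np

    def sn_larger_is_better(y):
        """Taguchi S/N ratio for a larger-is-better response such as TRS."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    # Hypothetical two-level experiment over (binder saturation, layer
    # thickness); each run has two replicate TRS measurements (invented).
    runs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    trs  = [[410, 395], [380, 372], [455, 448], [430, 426]]
    sn   = [sn_larger_is_better(y) for y in trs]

    for f in range(2):  # main effect of each factor: mean S/N at each level
        effects = [np.mean([s for lv, s in zip(runs, sn) if lv[f] == level])
                   for level in (0, 1)]
        print(f"factor {f}: mean S/N per level = {effects}")
    ```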

  3. Normalized modularity optimization method for community identification with degree adjustment.

    Science.gov (United States)

    Zhang, Shuqin; Zhao, Hongyu

    2013-11-01

    As a fundamental problem in network studies, community identification has attracted much attention from different fields. Representing a seminal work in this area, the modularity optimization method has been widely applied and studied. However, this method suffers from the resolution limit and extreme degeneracy, and may not perform well for networks with unbalanced structures. Although several methods have been proposed to overcome these limitations, they are all based on the original idea of defining modularity by comparing the total number of edges within the putative communities in the observed network with that in an equivalent randomly generated network. In this paper, we show that this modularity definition is not suitable for analyzing some networks, such as those with unbalanced structures. Instead, we propose to define modularity through the average degree within the communities, and formulate modularity as comparing the sum of average degrees within the communities of the observed network to that of an equivalent randomly generated network. In addition, we propose a degree-adjusted approach for further improvement when there are unbalanced structures, and we analyze the theoretical properties of this degree-adjusted method. Numerical experiments on both artificial and real networks demonstrate that average degree plays an important role in network community identification, and that our proposed methods perform better than existing ones.
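
    The proposed average-degree statistic can be sketched directly from an adjacency matrix; comparing this score against its value on a degree-preserving randomization of the network would then give the proposed modularity (a sketch of the idea, not the paper's estimator):

    ```python
    import numpy as np

    def avg_degree_score(adj, communities):
        """Sum over communities of the average within-community degree.
        adj: (n, n) symmetric 0/1 adjacency matrix;
        communities: list of node-index lists."""
        score = 0.0
        for nodes in communities:
            idx = np.asarray(nodes)
            sub = adj[np.ix_(idx, idx)]
            score += sub.sum() / len(idx)  # = 2 * internal_edges / |community|
        return score

    # Compare avg_degree_score(adj, partition) with the same score on an
    # equivalent randomly generated network to obtain the modularity value.
    ```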

  4. Optimization of the Helmintex method for schistosomiasis diagnosis.

    Science.gov (United States)

    Favero, Vivian; Frasca Candido, Renata Russo; De Marco Verissimo, Carolina; Jones, Malcolm K; St Pierre, Timothy G; Lindholz, Catieli Gobetti; Da Silva, Vinicius Duval; Morassutti, Alessandra Loureiro; Graeff-Teixeira, Carlos

    2017-06-01

    A diagnostic test that is reliable, sensitive, and applicable in the field is extremely important in epidemiological surveys, during medical treatment for schistosomiasis, and for the control and elimination of the disease. The Helmintex (HTX) method is based on the use of magnetic beads to trap eggs in a magnetic field. The technique is highly sensitive, but screening fecal samples is very time-consuming, delaying results, especially in field studies. The objective of this work was to determine the effects of incorporating the detergent Tween 20 into the method, in an attempt to decrease the final pellet volume produced by the HTX method, as well as of using ninhydrin to stain the Schistosoma mansoni eggs. We showed that these modifications reduced the final volume of the fecal sediment produced in the last step of the HTX method by up to 69% and decreased the screening time to an average of 10.1 min per sample. The use of Tween 20 and ninhydrin led to a high percentage of egg recovery (27.2%). The data obtained herein demonstrate that the addition of detergent and the use of ninhydrin in the HTX process can optimize the screening step and also improve egg recovery, justifying the insertion of these steps into the HTX method.

  5. Inter-comparison of statistical downscaling methods for projection of extreme flow indices across Europe

    DEFF Research Database (Denmark)

    Hundecha, Yeshewatesfa; Sunyer Pinya, Maria Antonia; Lawrence, Deborah

    2016-01-01

    ... catchments to simulate daily runoff. A set of flood indices was derived from daily flows, and their changes have been evaluated by comparing their values derived from simulations corresponding to the current and future climate. Most of the implemented downscaling methods project an increase in the extreme flow indices in most of the catchments. The catchments where the extremes are expected to increase have a rainfall-dominated flood regime; in these catchments, the downscaling methods also project an increase in the extreme precipitation in the seasons when the extreme flows occur. In catchments where the flooding is mainly caused by spring/summer snowmelt, the downscaling methods project a decrease in the extreme flows in three of the four catchments considered. A major portion of the variability in the projected changes in the extreme flow indices is attributable to the variability of the climate model...

  6. Illumination correction of dyed fabrics approach using Bagging-based ensemble particle swarm optimization-extreme learning machine

    Science.gov (United States)

    Zhou, Zhiyu; Xu, Rui; Wu, Dichong; Zhu, Zefei; Wang, Haiyan

    2016-09-01

    Changes in illumination will result in serious color difference evaluation errors during the dyeing process. A Bagging-based ensemble extreme learning machine (ELM) mechanism hybridized with particle swarm optimization (PSO), namely Bagging-PSO-ELM, is proposed to develop an accurate illumination correction model for dyed fabrics. The model adopts PSO algorithm to optimize the input weights and hidden biases for the ELM neural network called PSO-ELM, which enhances the performance of ELM. Meanwhile, to further increase the prediction accuracy, a Bagging ensemble scheme is used to construct an independent PSO-ELM learning machine by taking bootstrap replicates of the training set. Then, the obtained multiple different PSO-ELM learners are aggregated to establish the prediction model. The proposed prediction model is evaluated with real dyed fabric images and discussed in comparison with several related methods. Experimental results show that the ensemble color constancy method is able to generate a more robust illuminant estimation model with better generalization performance.

  7. Numerical methods and optimization a consumer guide

    CERN Document Server

    Walter, Éric

    2014-01-01

    Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. Numerical Methods and Optimization – A Consumer Guide presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to
    • discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems;
    • understand the principles behind recognized algorithms used in state-of-the-art numerical software;
    • learn the advantag...

  8. Extremely Efficient Design of Organic Thin Film Solar Cells via Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Mine Kaya

    2017-11-01

    Full Text Available The design of efficient thin-film photovoltaic (PV) cells requires the optical power absorption to be computed inside a nano-scale structure of photovoltaic, dielectric and plasmonic materials. Calculating power absorption requires solving Maxwell's electromagnetic equations with numerical methods such as the finite-difference time-domain (FDTD) method. The computational cost of thin-film PV cell design and optimization is therefore high, due to the successive FDTD simulations required. This cost can be reduced using a surrogate-based optimization procedure. In this study, we deploy neural networks (NNs) to model optical absorption in organic PV structures. We use the corresponding surrogate-based optimization procedure to maximize light trapping inside thin-film organic cells infused with metallic particles. Metallic particles are known to induce plasmonic effects at the metal-semiconductor interface, thus increasing absorption. However, a rigorous design procedure is required to achieve the best performance within known design guidelines. As a result of using NNs to model thin-film solar absorption, the time required to complete the optimization is decreased by more than a factor of five. The obtained NN model is found to be very reliable. The optimization procedure results in an absorption enhancement greater than 200%. Furthermore, we demonstrate that once a reliable surrogate model such as the developed NN is available, it can be used for alternative analyses of the proposed design, such as uncertainty analysis (e.g., of fabrication error).
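
    The surrogate loop described above — sample the expensive solver, fit a NN, then optimize the cheap model — can be sketched as follows, with a toy stand-in for the FDTD solve and invented design variables:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.neural_network import MLPRegressor

    def expensive_absorption(x):                  # placeholder for an FDTD run
        return -np.exp(-np.sum((x - 0.4) ** 2))   # toy objective (minimized)

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, (60, 2))                # sampled candidate designs
    y = np.array([expensive_absorption(x) for x in X])

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(X, y)

    # Optimize the cheap surrogate instead of re-running the solver
    res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
                   x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)],
                   method="L-BFGS-B")
    print(res.x, expensive_absorption(res.x))     # verify with one true solve
    ```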

  9. Structural Optimization Design of Horizontal-Axis Wind Turbine Blades Using a Particle Swarm Optimization Algorithm and Finite Element Method

    Directory of Open Access Journals (Sweden)

    Pan Pan

    2012-11-01

    Full Text Available This paper presents an optimization method for the structural design of horizontal-axis wind turbine (HAWT) blades based on the particle swarm optimization (PSO) algorithm combined with the finite element method (FEM). The main goal is to create an optimization tool and to demonstrate the potential improvements that could be brought to the structural design of HAWT blades. A multi-criteria constrained optimization model that minimizes the mass of the blade is developed. The number and location of layers in the spar cap and the positions of the shear webs are employed as the design variables, while the strain limit, the blade/tower clearance limit and the vibration limit are taken into account as constraint conditions. The optimization of the design of a commercial 1.5 MW HAWT blade is carried out by combining the above method and design model under ultimate (extreme) flap-wise load conditions. The optimization results are described and compared with the original design, showing that the method used in this study is efficient and produces improved designs.

  10. OP-Triplet-ELM: Identification of real and pseudo microRNA precursors using extreme learning machine with optimal features.

    Science.gov (United States)

    Pian, Cong; Zhang, Jin; Chen, Yuan-Yuan; Chen, Zhi; Li, Qin; Li, Qiang; Zhang, Liang-Yun

    2016-02-01

    MicroRNAs (miRNAs) are a set of short (21-24 nt) non-coding RNAs that play significant regulatory roles in cells. Triplet-SVM-classifier and MiPred (random forest, RF) can identify real pre-miRNAs from other hairpin sequences with similar stem-loops (pseudo pre-miRNAs). However, the 32-dimensional local contiguous structure-sequence representation can introduce considerable information redundancy. Therefore, it is essential to develop a method to reduce the dimension of the feature space. In this paper, we propose optimal features of local contiguous structure-sequences (OP-Triplet). These features avoid the information redundancy effectively and decrease the dimension of the feature vector from 32 to 8. Meanwhile, a hybrid feature can be formed by combining the minimum free energy (MFE) and structural diversity. We also introduce a neural network algorithm called the extreme learning machine (ELM). The results show that the specificity and sensitivity of our method are 92.4% and 91.0%, respectively. Compared with Triplet-SVM-classifier, the total accuracy (ACC) of our ELM method increases by 5%. Compared with MiPred (RF) and miRANN, the total accuracy (ACC) of our ELM method increases by nearly 2%. Moreover, our method considerably reduces the dimension of the feature space as well as the training time.

  11. Method for the protection of extreme ultraviolet lithography optics

    Science.gov (United States)

    Grunow, Philip A.; Clift, Wayne M.; Klebanoff, Leonard E.

    2010-06-22

    A coating for the protection of optical surfaces exposed to a high-energy erosive plasma. A gas that can be decomposed by the high-energy plasma, such as the xenon plasma used for extreme ultraviolet lithography (EUVL), is injected into the EUVL machine. The decomposition products coat the optical surfaces with a protective coating maintained at less than about 100 Å thick by periodic injections of the gas. Gases that can be used include hydrocarbon gases, particularly methane, PH3 and H2S. The use of PH3 and H2S is particularly advantageous since films of the plasma-induced decomposition products S and P cannot grow to greater than 10 Å thick in a vacuum atmosphere such as that found in an EUVL machine.

  12. Method for optimizing harvesting of crops

    DEFF Research Database (Denmark)

    2008-01-01

    In order, e.g., to optimize the harvesting of crops of the kind that may be self-dried on a field prior to a harvesting step (116, 118), there is disclosed a method of providing a mobile unit (102) for working (114, 116, 118) the field with crops, equipping the mobile unit (102) with crop biomass measuring means (108) and with crop moisture content measurement means (106), measuring crop biomass (107a, 107b) and crop moisture content (109a, 109b) of the crop, providing a spatial crop biomass and crop moisture content characteristics map of the field based on the biomass data (107a, 107b) provided from moving the mobile unit on the field and the moisture content (109a, 109b), and determining an optimised drying time (104a, 104b) prior to the following harvesting step (116, 118) in response to the spatial crop biomass and crop moisture content characteristics map and in response to a weather...

  13. Optimized first-order methods for smooth convex minimization.

    Science.gov (United States)

    Kim, Donghwan; Fessler, Jeffrey A

    2016-09-01

    We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle [5] recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods [15], and Nesterov's fast gradient methods [10,12]. However, the numerical method in [5] is computationally expensive for large N, and the corresponding numerically optimized first-order algorithm in [5] requires impractical memory and computation for large-scale optimization problems. In this paper, we propose optimized first-order algorithms that achieve a convergence bound that is two times smaller than for Nesterov's fast gradient methods; our bound is found analytically and refines the numerical bound in [5]. Furthermore, the proposed optimized first-order methods have efficient forms that are remarkably similar to Nesterov's fast gradient methods.
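
    For concreteness, one common statement of the optimized gradient method (OGM) iteration takes the following form; this is a sketch for an L-smooth convex objective, assumed here rather than copied from the paper's pseudocode:

    ```python
    import numpy as np

    def ogm(grad, x0, L, N):
        """Optimized gradient method sketch: grad is the gradient oracle,
        L the Lipschitz constant of grad, N the number of iterations."""
        x, y, theta = np.asarray(x0, float), np.asarray(x0, float), 1.0
        for i in range(N):
            y_new = x - grad(x) / L                           # gradient step
            if i < N - 1:
                theta_new = (1 + np.sqrt(1 + 4 * theta**2)) / 2
            else:
                theta_new = (1 + np.sqrt(1 + 8 * theta**2)) / 2  # final step
            # momentum uses both the (y_new - y) and (y_new - x) directions
            x_next = (y_new + (theta - 1) / theta_new * (y_new - y)
                      + theta / theta_new * (y_new - x))
            x, y, theta = x_next, y_new, theta_new
        return x

    # e.g. ogm(lambda v: 2 * v, np.array([5.0, -3.0]), L=2.0, N=50) -> near 0
    ```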

  14. Bellman–Ford Method for Solving the Optimal Route Problem

    Directory of Open Access Journals (Sweden)

    Laima Greičiūnė

    2014-12-01

    Full Text Available The article aims to adapt the dynamic programming method for optimal route determination using real-time data from ITS equipment. For this purpose, VBA code has been applied to solve the Bellman–Ford method for an optimal route, considering optimality criteria of time, distance, and the amount of emissions.
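
    The Bellman–Ford relaxation scheme itself is standard; a compact version is sketched below, where each edge weight w could be a weighted combination of the article's three criteria (time, distance, emissions):

    ```python
    def bellman_ford(n, edges, source):
        """n: node count; edges: list of (u, v, w); returns (dist, pred).
        Handles negative weights and detects negative cycles."""
        INF = float("inf")
        dist, pred = [INF] * n, [None] * n
        dist[source] = 0.0
        for _ in range(n - 1):                  # at most n-1 relaxation passes
            changed = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v], pred[v] = dist[u] + w, u
                    changed = True
            if not changed:
                break
        for u, v, w in edges:                   # one extra pass: cycle check
            if dist[u] + w < dist[v]:
                raise ValueError("negative cycle reachable from source")
        return dist, pred

    # e.g. bellman_ford(4, [(0,1,2.5), (1,2,1.0), (0,2,4.0), (2,3,1.5)], 0)
    ```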

  15. A Review of Design Optimization Methods for Electrical Machines

    Directory of Open Access Journals (Sweden)

    Gang Lei

    2017-11-01

    Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model based and multi-level optimization methods. In addition, two promising and challenging topics in both academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design for six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including smart design optimization method for future intelligent design and production of electrical machines.

  16. Topology Optimization Methods for Acoustic-Mechanical Coupling Problems

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Dilgen, Cetin Batur; Dilgen, Sümer Bartug

    2017-01-01

    A comparative overview of methods for topology optimization of acoustic-mechanical coupling problems is provided. The goal is to pave the road for developing efficient optimization schemes for the design of complex acoustic devices such as hearing aids.

  17. 3D stereophotogrammetry in upper-extremity lymphedema: An accurate diagnostic method

    NARCIS (Netherlands)

    Hameeteman, M.; Verhulst, A.C.; Vreeken, R.D.; Maal, T.J.; Ulrich, D.J.

    2016-01-01

    BACKGROUND: Upper-extremity lymphedema is a frequent complication in patients treated for breast cancer. Current diagnostic methods for upper-extremity volume measurement are cumbersome or time-consuming. The purpose of this study was to assess the validity and reliability of three-dimensional

  18. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

    Power systems are vulnerable to extreme contingencies (such as the outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages occur within minutes of a major contingency and may not be captured using dynamic simulation of the power system alone. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes is to mimic operator corrective actions that mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF)-based formulation for selecting the corrective actions that utility operators can take during a major contingency, and thus automates the hybrid dynamic/steady-state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.

  19. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue, M/M/1→M/D/1, to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, namely the number of pump stations, the drainage pipe diameter, the rainstorm precipitation intensity, and the confidence levels, are also presented to provide guidance for designing urban drainage systems.
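
    The two-stage M/M/1→M/D/1 structure has closed-form mean sojourn times (Burke's theorem justifies feeding the second stage with the same Poisson rate); a small sketch with invented arrival and service rates:

    ```python
    def mm1_sojourn(lam, mu):
        """Mean sojourn time in an M/M/1 queue (requires lam < mu)."""
        return 1.0 / (mu - lam)

    def md1_sojourn(lam, mu):
        """Mean sojourn in an M/D/1 queue via Pollaczek-Khinchine:
        waiting time rho / (2*mu*(1-rho)) plus the service time 1/mu."""
        rho = lam / mu
        return rho / (2.0 * mu * (1.0 - rho)) + 1.0 / mu

    # Tandem sketch: stormwater first queues in the pipe network (M/M/1),
    # then at a pump station with a deterministic drain rate (M/D/1).
    lam, mu1, mu2 = 0.6, 1.0, 0.9          # illustrative rates, not data
    total_sojourn = mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
    ```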

  20. A simple method to optimize HMC performance

    CERN Document Server

    Bussone, Andrea; Drach, Vincent; Hansen, Martin; Hietanen, Ari; Rantaharju, Jarno; Pica, Claudio

    2016-01-01

    We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.

  1. A method for aggregating external operating conditions in multi-generation system optimization models

    DEFF Research Database (Denmark)

    Lythcke-Jørgensen, Christoffer Ernst; Münster, Marie; Ensinas, Adriano Viana

    2016-01-01

    This paper presents a novel, simple method for reducing the external operating condition datasets used in multi-generation system optimization models. The method, called the Characteristic Operating Pattern (CHOP) method, is a visually based aggregation method that clusters reference data based on parameter values rather than time of occurrence, thereby preserving important information on the short-term relations between the relevant operating parameters. This is opposed to commonly used methods in which data are averaged over chronological periods (months or years) and extreme conditions are hidden in the averaged values. The CHOP method is tested in a case study where the operation of a fictive Danish combined heat and power plant is optimized over a historical 5-year period. The optimization model is solved using the full external operating condition dataset and a reduced dataset obtained using the CHOP...

  2. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on Harmony search (HS), a metaheuristic method, was proposed and applied to static stiffness topology optimization problems. To apply HS to topology optimization, the variables in HS were mapped to those in topology optimization. Compliance was used as the objective function, and the harmony memory was defined as the set of optimized topologies. A parametric study of the harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and bandwidth (BW) was performed to find appropriate ranges for topology optimization. Various techniques were employed, such as a filtering scheme, a simple averaging scheme, and a harmony rate; to provide a robust optimized topology, the concept of a harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of HS by comparing its optimal layouts with those of Bidirectional evolutionary structural optimization (BESO) and the Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) the proposed scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate; (2) the suggested method provides a symmetric optimized topology despite the fact that HS is a stochastic method like the ABCA; (3) the proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology; (4) the suggested method appears to be very effective for large-scale problems like topology optimization.
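
    A basic harmony search loop, with the HMCR, PAR, and BW parameters discussed in the parametric study above, can be sketched as follows for a generic continuous objective (all parameter values are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def harmony_search(f, lo, hi, dim, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                       iters=2000):
        """Basic HS: HMCR picks values from memory, PAR perturbs them within
        bandwidth BW, and otherwise values are drawn uniformly at random."""
        hm = rng.uniform(lo, hi, (hms, dim))        # harmony memory
        cost = np.array([f(x) for x in hm])
        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:             # memory consideration
                    new[j] = hm[rng.integers(hms), j]
                    if rng.random() < par:          # pitch adjustment
                        new[j] += bw * (hi - lo) * rng.uniform(-1, 1)
                else:                               # random selection
                    new[j] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            new_cost = f(new)
            worst = np.argmax(cost)
            if new_cost < cost[worst]:              # replace worst harmony
                hm[worst], cost[worst] = new, new_cost
        return hm[np.argmin(cost)]
    ```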

  3. Temporal and spatial characteristics of extreme precipitation events in the Midwest of Jilin Province based on multifractal detrended fluctuation analysis method and copula functions

    Science.gov (United States)

    Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu

    2017-10-01

    Environmental changes have brought significant changes and challenges to water resources and their management around the world, including increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events, all of which greatly affect water resources and socio-economic development. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example, using daily precipitation data from 1960-2014. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and the Kolmogorov-Smirnov (K-S) test is employed to determine the optimal probability distribution function of each indicator. On this basis, the copula nonparametric estimation method and the Akaike Information Criterion (AIC) are adopted to determine the bivariate copula function. Finally, we analyze the characteristics of the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far less than that in subhumid areas. The extreme precipitation frequency shows a significant decline, while the extreme precipitation intensity shows a growing trend; there are significant spatiotemporal differences in extreme precipitation events. The joint return period becomes shorter from west to east, whereas the spatial distribution of the co-occurrence return period shows the opposite pattern and is longer than the joint return period.

  4. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization-based investigation of complex systems rely on developing and updating mathematical models of the systems by solving appropriate inverse problems. The input data required for a solution are obtained from the analysis of experimentally determined characteristics of a system or process. The sought causal characteristics include the coefficients of the equations of the object's mathematical model, boundary conditions, etc. The optimization approach is one of the main approaches to solving inverse problems; in the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnosis, as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligent technologies. The increasingly complicated systems to be optimized, observed over the last decades, lead to more complicated mathematical models, thereby making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions can restrict modeling; as a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating derivatives is difficult and unreliable, which motivates optimization methods that do not require derivative computations. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extreme problem; when the number of variables is large, stochastic global optimization algorithms are used. However, as stochastic algorithms yield expensive solutions, this drawback restricts their application. This motivates hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search
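
    The Hooke–Jeeves local search named in the title alternates exploratory axis moves with pattern moves; a compact derivative-free sketch:

    ```python
    def hooke_jeeves(f, x0, step=0.5, eps=1e-6, shrink=0.5):
        """Derivative-free Hooke-Jeeves pattern search (sketch)."""
        def explore(x, s):
            x = list(x)
            for i in range(len(x)):
                for d in (s, -s):                 # exploratory move on axis i
                    trial = x[:]
                    trial[i] += d
                    if f(trial) < f(x):
                        x = trial
                        break
            return x

        base = list(x0)
        while step > eps:
            new = explore(base, step)
            if f(new) < f(base):
                # pattern move: jump through the improvement and re-explore
                cand = explore([2 * n - b for n, b in zip(new, base)], step)
                base = cand if f(cand) < f(new) else new
            else:
                step *= shrink                    # no progress: refine mesh
        return base

    # e.g. hooke_jeeves(lambda v: (v[0]-1)**2 + (v[1]+2)**2, [0.0, 0.0])
    ```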

  5. An optimization method for metamorphic mechanisms based on multidisciplinary design optimization

    Directory of Open Access Journals (Sweden)

    Zhang Wuxiang

    2014-12-01

    Full Text Available The optimization of metamorphic mechanisms differs from that of conventional mechanisms because of their multi-configuration characteristics: complex coupled design variables and constraints exist across the multiple configuration optimization models. To achieve compatible optimized results for these coupled design variables, an optimization method for metamorphic mechanisms is developed in this paper based on the principles of multidisciplinary design optimization (MDO). Firstly, the optimization characteristics of metamorphic mechanisms are summarized by classifying the design variables and constraints and the coupling interactions among the different configuration optimization models. The collaborative optimization technique used in MDO is then adopted to achieve overall optimization performance, and the whole optimization process is constructed as a two-level hierarchical scheme with global-optimizer and configuration-optimizer loops. The method is demonstrated by optimizing a planar five-bar metamorphic mechanism with two configurations, and the results show that it achieves coordinated optimization results for the same parameters across the different configuration optimization models.

  6. Fatigue testing of materials under extremal conditions by acoustic method

    NARCIS (Netherlands)

    Baranov, VM; Bibilashvili, YK; Karasevich, VA; Sarychev, GA

    2004-01-01

    Increasing fuel cycle time requires fatigue testing of the fuel clad materials for nuclear reactors. The standard high-temperature fatigue tests are complicated and tedious. Solving this task is facilitated by the proposed acoustic method, which ensures observation of the material damage dynamics,

  7. Review of design optimization methods for turbomachinery aerodynamics

    Science.gov (United States)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and "greener" but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization to solve real-world aerodynamic problems, especially for compressors and turbines. The review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods; (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods; (3) gradient-based optimization methods for compressors and turbines; and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future of turbomachinery design optimization.

  8. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a solution with a high performance-cost ratio for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the nonlinear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability, and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the working temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM), and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
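    For readers unfamiliar with the model family being tuned here, a minimal KELM regressor can be sketched in a few lines. The RBF kernel, the fixed values of the regularization parameter C and kernel width gamma (the two parameters the CSA/simplex search would optimize), and the toy data are assumptions for illustration:

```python
# Minimal sketch of a kernel extreme learning machine (KELM) regressor with
# an RBF kernel. Training reduces to one linear solve for the output weights:
# beta = (I/C + K)^{-1} T, where K is the kernel matrix over the training set.
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=0.5):
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)   # output weights beta

def kelm_predict(Xnew, X, beta, gamma=0.5):
    return rbf_kernel(Xnew, X, gamma) @ beta

# Toy compensation data: map (temperature, static pressure) to a correction.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
T = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
beta = kelm_fit(X, T)
print(kelm_predict(X[:5], X, beta))   # should be close to T[:5]
```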

  9. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.

  10. Methods of dilatometric investigations under extreme conditions and the case of spin-ice compounds

    Science.gov (United States)

    Doerr, M.; Granovsky, S.; Rotter, M.; Stöter, T.; Wang, Z.-S.; Zherlitsyn, S.; Wosnitza, J.

    2017-10-01

    We give an overview of how dilatometric methods have developed over the last decade. The concept of capacitive dilatometry was successfully adapted to dilution refrigerators with a resolution of 10⁻⁹. Miniaturized dilatometers with an overall diameter of 18 mm or less are optimally suited for measuring the longitudinal and transversal components of the striction tensor. Going to another extreme, the highest (pulsed) fields, optical methods such as FBG technology were developed for investigations up to 100 T. As examples of utilizing dilatometry at low temperatures, we show results for the spin-ice materials Dy2Ti2O7 and Ho2Ti2O7. To characterise the magneto-elastic coupling in these materials, we investigated the thermal expansion and magnetostriction between 80 mK and 15 K in magnetic fields aligned along the [111] direction and found field-induced phases and strong correlations below 500 mK. Our data demonstrate that the formation of the field-induced phase is strongly influenced by lattice distortions: any change in interatomic distances will result in a variation of the exchange couplings.

  11. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    Energy Technology Data Exchange (ETDEWEB)

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H. [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Carestream Health, Rochester, New York 14615 (United States); The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, Maryland 21287 (United States); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)

    2011-08-15

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ≈55 cm source-to-detector distance; 1.3 magnification; a

  12. Using methods of stretching lower extremities amongst patients with vertebrogenic algic syndrome

    OpenAIRE

    Kocsis, Ágnes

    2016-01-01

    Name: Ágnes Kocsis Supervisor: Bc. Monika Tichá Title: Using methods of stretching lower extremities amongst patients with vertebrogenic algic syndrome Abstract: The subject of this work is to evaluate the effect of stretching lower extremities amongst patients with back pain. The theoretical part deals with different kinds of stretching, guidelines and new methods of stretching. I focused on the stretching of the fascial structures and influencing shortened muscles using yoga exercises. I su...

  13. METHOD FOR OPTIMIZING THE ENERGY OF PUMPS

    NARCIS (Netherlands)

    Skovmose Kallesøe, Carsten; De Persis, Claudio

    2013-01-01

    The device for energy optimization in the operation of several variable-speed centrifugal pumps in a hydraulic installation begins by determining which pumps, as pilot pumps, are assigned directly to a consumer and which pumps are hydraulically connected in series upstream of

  14. Statistical experimental methods for optimizing the cultivating ...

    African Journals Online (AJOL)

    Central composite experimental design and response surface analysis were adopted to derive a statistical model for optimizing the culture conditions. From the obtained results, it can be concluded that the optimum parameters were: temperature, 15.3°C; pH, 5.56; inoculum size, 4%; liquid volume, 70 ml in 250 ml flask; ...

  15. Parallel optimization methods for agile manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Meza, J.C.; Moen, C.D.; Plantenga, T.D.; Spence, P.A.; Tong, C.H. [Sandia National Labs., Livermore, CA (United States); Hendrickson, B.A.; Leland, R.W.; Reese, G.M. [Sandia National Labs., Albuquerque, NM (United States)

    1997-08-01

    The rapid and optimal design of new goods is essential for meeting national objectives in advanced manufacturing. Currently almost all manufacturing procedures involve the determination of some optimal design parameters. This process is iterative in nature and because it is usually done manually it can be expensive and time consuming. This report describes the results of an LDRD, the goal of which was to develop optimization algorithms and software tools that will enable automated design thereby allowing for agile manufacturing. Although the design processes vary across industries, many of the mathematical characteristics of the problems are the same, including large-scale, noisy, and non-differentiable functions with nonlinear constraints. This report describes the development of a common set of optimization tools using object-oriented programming techniques that can be applied to these types of problems. The authors give examples of several applications that are representative of design problems including an inverse scattering problem, a vibration isolation problem, a system identification problem for the correlation of finite element models with test data and the control of a chemical vapor deposition reactor furnace. Because the function evaluations are computationally expensive, they emphasize algorithms that can be adapted to parallel computers.

  16. Augmented Lagrangian Method For Discretized Optimal Control ...

    African Journals Online (AJOL)

    In this paper, we are concerned with a one-dimensional, time-invariant optimal control problem whose objective function is quadratic and whose dynamical system is a differential equation with an initial condition. Since most real-life problems are nonlinear and their analytical solutions are not readily available, we resolve to ...

  17. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Y.; Borland, Michael

    2017-06-25

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  18. Optimization of bioethanol production from carbohydrate rich wastes by extreme thermophilic microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Tomas, A.F.

    2013-05-15

    Second-generation bioethanol is produced from residual biomass such as industrial and municipal waste or agricultural and forestry residues. However, Saccharomyces cerevisiae, the microorganism currently used in industrial first-generation bioethanol production, is not capable of converting all of the carbohydrates present in these complex substrates into ethanol. This is particularly true for pentose sugars such as xylose, generally the second major sugar present in lignocellulosic biomass. The transition of second-generation bioethanol production from pilot to industrial scale is hindered by the recalcitrance of the lignocellulosic biomass and by the lack of a microorganism capable of converting this feedstock to bioethanol with high yield, efficiency, and productivity. In this study, a new extreme thermophilic ethanologenic bacterium was isolated from household waste. When assessed for ethanol production from xylose, an ethanol yield of 1.39 mol mol⁻¹ xylose was obtained. This represents 83% of the theoretical ethanol yield from xylose and is to date the highest reported value for a native, not genetically modified microorganism. The bacterium was identified as a new member of the genus Thermoanaerobacter, named Thermoanaerobacter pentosaceus, and was subsequently used to investigate some of the factors that influence second-generation bioethanol production, such as initial substrate concentration and sensitivity to inhibitors. Furthermore, T. pentosaceus was used to develop and optimize bioethanol production from lignocellulosic biomass using a range of different approaches, including combination with other microorganisms and immobilization of the cells. T. pentosaceus could produce ethanol from a wide range of substrates without the addition of nutrients such as yeast extract and vitamins to the medium. It was initially sensitive to concentrations of 10 g l⁻¹ of xylose and 1% (v/v) ethanol. However, long term repeated batch cultivation showed that the strain

  19. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  20. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    Science.gov (United States)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and the modelling of such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation, and the simulated data are utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator which can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is done by a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM), as they were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. The total cost obtained by the ELM-PSO approach is held to a minimum
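    A minimal sketch of the basic ELM used as such a proxy simulator may help: a random, untrained hidden layer followed by a least-squares output layer, so training reduces to a single pseudo-inverse. The toy input-output mapping below merely stands in for the BIOPLUME III response:

```python
# Minimal ELM regressor sketch: random input weights and biases, tanh hidden
# layer, and output weights fitted in closed form with a pseudo-inverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_in = X.shape[1]
        self.W = self.rng.normal(size=(n_in, self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)           # random biases
        H = np.tanh(X @ self.W + self.b)                       # hidden activations
        self.beta = np.linalg.pinv(H) @ y                      # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (300, 3))                  # e.g. injection rates at 3 wells (toy)
y = X @ np.array([1.0, -2.0, 0.5]) + np.sin(3 * X[:, 0])   # toy simulator response
model = ELM(n_hidden=80).fit(X, y)
print(np.abs(model.predict(X) - y).max())        # small training error
```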

  1. Identification of optimal control compartments for serial near-infrared spectroscopy assessment of lower extremity compartmental perfusion.

    Science.gov (United States)

    Jackson, Keith; Cole, Ashley; Potter, Benjamin K; Shuler, Michael; Kinsey, Tracy; Freedman, Brett

    2013-01-01

    Near-infrared spectroscopy (NIRS) has shown promise in detecting ischemic changes in acute compartment syndrome. The objectives of this study were to 1) assess the correlation in NIRS values between upper and lower extremity control sites for bilateral lower extremity trauma and 2) investigate the effect of skin pigmentation on NIRS values. Forty-four volunteers (14 male, 30 female) were monitored over separate 1-hour sessions. NIRS leads were placed over leg and upper extremity compartments. Colorimeters were used to document skin pigmentation. NIRS values between corresponding contralateral compartments were extremely well correlated (r = 0.76-0.90). Upper extremity NIRS values were correlated to leg values in the following order: volar (r = 0.65-0.71), dorsal (r = 0.36-0.60), and deltoid (r = 0.42-0.51). A negative correlation was observed between melanin and NIRS values. Analogous leg compartments are the optimal site of control for each other. The volar forearm may be the best upper extremity control. Skin pigmentation may affect absolute NIRS values.

  2. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each of the algorithms described in the theoretical section, which should give easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion of the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  3. Genomic Methods and Microbiological Technologies for Profiling Novel and Extreme Environments for the Extreme Microbiome Project (XMP).

    Science.gov (United States)

    Tighe, Scott; Afshinnekoo, Ebrahim; Rock, Tara M; McGrath, Ken; Alexander, Noah; McIntyre, Alexa; Ahsanuddin, Sofia; Bezdan, Daniela; Green, Stefan J; Joye, Samantha; Stewart Johnson, Sarah; Baldwin, Don A; Bivens, Nathan; Ajami, Nadim; Carmical, Joseph R; Herriott, Ian Charold; Colwell, Rita; Donia, Mohamed; Foox, Jonathan; Greenfield, Nick; Hunter, Tim; Hoffman, Jessica; Hyman, Joshua; Jorgensen, Ellen; Krawczyk, Diana; Lee, Jodie; Levy, Shawn; Garcia-Reyero, Natàlia; Settles, Matthew; Thomas, Kelley; Gómez, Felipe; Schriml, Lynn; Kyrpides, Nikos; Zaikova, Elena; Penterman, Jon; Mason, Christopher E

    2017-04-01

    The Extreme Microbiome Project (XMP) is a project launched by the Association of Biomolecular Resource Facilities Metagenomics Research Group (ABRF MGRG) that focuses on whole genome shotgun sequencing of extreme and unique environments using a wide variety of biomolecular techniques. The goals are multifaceted, including development and refinement of new techniques for the following: 1) the detection and characterization of novel microbes, 2) the evaluation of nucleic acid techniques for extremophilic samples, and 3) the identification and implementation of the appropriate bioinformatics pipelines. Here, we highlight the different ongoing projects that we have been working on, as well as details on the various methods we use to characterize the microbiome and metagenome of these complex samples. In particular, we present data of a novel multienzyme extraction protocol that we developed, called Polyzyme or MetaPolyZyme. Presently, the XMP is characterizing sample sites around the world with the intent of discovering new species, genes, and gene clusters. Once a project site is complete, the resulting data will be publicly available. Sites include Lake Hillier in Western Australia, the "Door to Hell" crater in Turkmenistan, deep ocean brine lakes of the Gulf of Mexico, deep ocean sediments from Greenland, permafrost tunnels in Alaska, ancient microbial biofilms from Antarctica, Blue Lagoon Iceland, Ethiopian toxic hot springs, and the acidic hypersaline ponds in Western Australia.

  4. Optimization and control methods in industrial engineering and construction

    CERN Document Server

    Wang, Xiangyu

    2014-01-01

    This book presents recent advances in optimization and control methods with applications to industrial engineering and construction management. It consists of 15 chapters authored by recognized experts in a variety of fields including control and operation research, industrial engineering, and project management. Topics include numerical methods in unconstrained optimization, robust optimal control problems, set splitting problems, optimum confidence interval analysis, a monitoring networks optimization survey, distributed fault detection, nonferrous industrial optimization approaches, neural networks in traffic flows, economic scheduling of CCHP systems, a project scheduling optimization survey, lean and agile construction project management, practical construction projects in Hong Kong, dynamic project management, production control in PC4P, and target contracts optimization.   The book offers a valuable reference work for scientists, engineers, researchers and practitioners in industrial engineering and c...

  5. Full-step interior-point methods for symmetric optimization

    NARCIS (Netherlands)

    Gu, G.

    2009-01-01

    In [SIAM J. Optim., 16(4):1110--1136 (electronic), 2006] Roos proposed a full-Newton step Infeasible Interior-Point Method (IIPM) for Linear Optimization (LO). It is a primal-dual homotopy method; it differs from the classical IIPMs in that it uses only full steps. This means that no line searches

  6. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradient (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. To speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method, but they may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM
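    The Hessian-times-vector idea mentioned above admits a compact generic sketch: when only a gradient routine is available (analytic here; adjoint-based in the thesis), Hv can be approximated by a forward difference of gradients and handed to a conjugate-gradient solver for the Newton step. The test function is an assumption, not a reservoir model:

```python
# Generic sketch of a matrix-free Newton step: approximate H @ v by a forward
# difference of gradients, wrap it in a LinearOperator, and solve H s = -g
# with conjugate gradients. The gradient below is analytic; in the thesis it
# would come from an adjoint solve.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def grad(x):                         # stand-in for an adjoint-based gradient
    return np.array([4 * x[0] ** 3 + x[1], x[0] + 2 * x[1]])

x = np.array([1.0, -1.0])
g = grad(x)
eps = 1e-6

def hess_vec(v):                     # H @ v ~ (grad(x + eps*v) - grad(x)) / eps
    return (grad(x + eps * v) - g) / eps

H = LinearOperator((2, 2), matvec=hess_vec)
step, info = cg(H, -g)               # Newton step; info == 0 means CG converged
print(step, info)
```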

  7. Acceleration Methods for Classic Convex Optimization Algorithms

    OpenAIRE

    Torres Barrán, Alberto

    2017-01-01

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 12-09-2017. Most Machine Learning models are defined in terms of a convex optimization problem. Thus, developing algorithms to quickly solve such problems is of great interest to the field. We focus in this thesis on two of the most widely used models, the Lasso and Support Vector Machines. The former belongs to the family of r...

  8. A new simulation method for turbines in wake - Applied to extreme response during operation

    DEFF Research Database (Denmark)

    Thomsen, K.; Aagaard Madsen, H.

    2005-01-01

    be suitable for fatigue load simulation. For extreme response during operation the success of this simplified approach depends significantly on the physical mechanism causing the extremes. If the physical mechanism creating increased loads in wake operation is different from an increased turbulence intensity...... and load response characteristics for these loads in wake conditions in good agreement with measurements. The results are compared with the traditionally used simplified method, and this approach seems conservative for some loads, e.g. the extreme blade moments, and non-conservative for others, e...

  9. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  10. Present-day Problems and Methods of Optimization in Mechatronics

    Directory of Open Access Journals (Sweden)

    Tarnowski Wojciech

    2017-06-01

    Full Text Available It is justified that design is an inverse problem and that optimization is a paradigm for it. Classes of design problems are proposed and typical obstacles are recognized. Peculiarities of mechatronic design are specified as evidence of the particular importance of optimization in mechatronic design. Two main obstacles to optimization are discussed: the complexity of mathematical models and the uncertainty of the value system in a concrete case. Then a set of non-standard approaches and methods is presented and discussed, illustrated by examples: a fuzzy description, constraint-based iterative optimization, the AHP ranking method, and a few MADM functions in Matlab.

  11. Optimization of an on-board imaging system for extremely rapid radiation therapy

    Science.gov (United States)

    Cherry Kemmerling, Erica M.; Wu, Meng; Yang, He; Maxim, Peter G.; Loo, Billy W.; Fahrig, Rebecca

    2015-01-01

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  12. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  13. Structural Optimization Using the Newton Modified Barrier Method

    Science.gov (United States)

    Khot, N. S.; Polyak, R. A.; Schneur, R.; Berke, L.

    1995-01-01

    The Newton Modified Barrier Method (NMBM) is applied to structural optimization problems with a large number of design variables and constraints. This nonlinear mathematical programming algorithm is based on the Modified Barrier Function (MBF) theory and the Newton method for unconstrained optimization. The distinctive feature of the NMBM is its rate of convergence, which is due to the fact that the design remains in the Newton area after each Lagrange multiplier update. This convergence characteristic is illustrated by application to structural problems with a varying number of design variables and constraints. The results are compared with those obtained by optimality criteria (OC) methods and by the ASTROS program.

  14. Augmented Lagrangian Method For Discretized Optimal Control ...

    African Journals Online (AJOL)

    With the aid of Augmented Lagrangian method, a quadratic function with a control operator (penalized matrix) amenable to conjugate gradient method is generated. Numerical experiments verify the efficiency of the proposed technique which compares much more favourably to the existing scheme. Keywords: Trapezoidal ...
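    A minimal sketch of the augmented Lagrangian idea for an equality-constrained quadratic program may clarify the approach; the inner minimization is a linear solve here (standing in for the conjugate gradient inner solver), and the problem data are invented:

```python
# Augmented Lagrangian sketch for: minimize 0.5 x'Qx - c'x  subject to Ax = b.
# Each outer iteration minimizes the quadratic augmented Lagrangian (a linear
# solve for this problem class), then updates the multipliers.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

lam, rho = np.zeros(1), 10.0
x = np.zeros(2)
for _ in range(20):
    # Inner problem: min_x 0.5 x'Qx - c'x + lam'(Ax - b) + (rho/2)||Ax - b||^2
    # Stationarity gives (Q + rho A'A) x = c - A'lam + rho A'b.
    H = Q + rho * A.T @ A
    g = c - A.T @ lam + rho * A.T @ b
    x = np.linalg.solve(H, g)
    lam = lam + rho * (A @ x - b)        # multiplier update

print(x, A @ x - b)                      # x should satisfy the constraint
```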

  15. Optimizing Usability Studies by Complementary Evaluation Methods

    NARCIS (Netherlands)

    Schmettow, Martin; Bach, Cedric; Scapin, Dominique

    2014-01-01

    This paper examines combinations of complementary evaluation methods as a strategy for efficient usability problem discovery. A data set from an earlier study is re-analyzed, involving three evaluation methods applied to two virtual environment applications. Results of a mixed-effects logistic

  16. Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method

    Science.gov (United States)

    Huang, Feng; Li, Jing

    2017-12-01

    The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cables and is not the optimal operating state for a cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the constraint that the temperature of every cable remain below its maximum permissible temperature. The interior point method, which is very effective for nonlinear problems, is put forward to solve this extremal problem and determine the optimal operating current of each loop. The results show that the optimal solution obtained with the proposed method is able to increase the total load current by about 5%, which greatly improves the economic performance of the cable cluster.
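    The formulation described, maximizing total load current subject to per-cable temperature limits, can be sketched with an off-the-shelf interior-point-style solver. The linear thermal-coupling matrix below is a toy assumption, not a real cable-cluster model:

```python
# Sketch of the stated formulation: maximize sum of loop currents subject to
# every cable temperature staying below its permissible limit, solved with
# SciPy's 'trust-constr' method. M maps squared currents to temperature rise
# (mutual heating); its values are invented for illustration.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

M = np.array([[0.8, 0.3, 0.1],
              [0.3, 0.9, 0.3],
              [0.1, 0.3, 0.8]]) * 1e-3
T_amb, T_max = 25.0, 90.0

def temperatures(I):
    return T_amb + M @ I**2              # steady-state conductor temperatures

cons = NonlinearConstraint(temperatures, -np.inf, T_max)
res = minimize(lambda I: -np.sum(I),     # maximize total current
               x0=np.full(3, 100.0),
               method="trust-constr",
               constraints=[cons],
               bounds=[(0, None)] * 3)
print(res.x, temperatures(res.x))        # binding cables sit at T_max
```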

  17. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available ABSTRACT Evaluating the quality of scions is extremely important, and it can be done through characteristics of the shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length, and length of roots of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the method of maximum curvature, the method of maximum curvature of the coefficient of variation, and a newly proposed method that incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature method of the coefficient of variation.

  18. Flexible and generalized uncertainty optimization theory and methods

    CERN Document Server

    Lodwick, Weldon A

    2017-01-01

    This book presents the theory and methods of flexible and generalized uncertainty optimization. In particular, it describes the theory of generalized uncertainty in the context of optimization modeling. The book starts with an overview of flexible and generalized uncertainty optimization. It covers uncertainties that are associated with lack of information and that are more general than stochastic theory, where well-defined distributions are assumed. Starting from families of distributions that are enclosed by upper and lower functions, the book presents construction methods for obtaining flexible and generalized uncertainty input data that can be used in a flexible and generalized uncertainty optimization model. It then describes the development of such a model in detail. All in all, the book provides the readers with the necessary background to understand flexible and generalized uncertainty optimization and develop their own optimization model.

  19. Optimization of electret film forming method

    Science.gov (United States)

    Łowkis, B.; Kupracz, J.

    2016-02-01

    The aim of this investigation was to develop a method of fabricating electrets from 0.5 mm thick PTFE foil from Ensinger GmbH. The parameters allowing assessment of the electrets in terms of technical applications are the charge lifetime τ and the equivalent voltage Uz. Measurements of the electret equivalent voltage Uz were carried out using the compensation method. Assessment of the lifetime required a method utilizing thermal stimulation of the charge relaxation process. In the investigation, a continuous measurement of the equivalent voltage under a linear increase in temperature (TSUz) was realized. It was found that the electret properties of a foil depend on the preparation conditions of the material and on the forming temperature. The employed method of lifetime measurement enables verification of the applied procedure.

  20. A Fractional Trust Region Method for Linear Equality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Honglan Zhu

    2016-01-01

    Full Text Available A quasi-Newton trust region method with a new fractional model for linearly constrained optimization problems is proposed. We eliminate the linear equality constraints using the null space technique. The fractional trust region subproblem is solved by a simple dogleg method. The global convergence of the proposed algorithm is established. Numerical results for test problems show the efficiency of the trust region method with the new fractional model. These results provide a basis for further research on nonlinear optimization.
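    The dogleg step used for the trust-region subproblem has a compact classical form, sketched below for a generic quadratic model (this is the standard dogleg, not the paper's fractional model):

```python
# Classical dogleg step: take the full Newton step if it fits in the trust
# radius; otherwise combine the steepest-descent (Cauchy) point and the
# Newton point along the dogleg path out to the trust-region boundary.
import numpy as np

def dogleg_step(g, B, delta):
    pB = np.linalg.solve(B, -g)                    # full Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ B @ g) * g                # Cauchy point
    if np.linalg.norm(pU) >= delta:
        return delta * pU / np.linalg.norm(pU)     # truncated gradient step
    # Walk along the segment from pU toward pB until hitting the boundary.
    d = pB - pU
    a, b, c = d @ d, 2 * pU @ d, pU @ pU - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return pU + tau * d

g = np.array([2.0, -4.0])                          # model gradient (toy data)
B = np.array([[2.0, 0.0], [0.0, 8.0]])             # model Hessian approximation
print(dogleg_step(g, B, delta=0.5))
```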

  1. Software for optimization using a sequential simplex method

    Directory of Open Access Journals (Sweden)

    Evandro Bona

    2000-05-01

    Full Text Available A computer program for the optimization of processes influenced by continuous and qualitative variables was developed based on the simplex method. The software was validated through literature case studies involving predictive models of two distinct processes. The results obtained showed close agreement with the values reported in the literature. The developed program is portable and user-friendly, and may be used in several optimization settings. Complementing the software with other subroutines, such as combined response optimization, may make its application more comprehensive.
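    The sequential simplex (Nelder-Mead) idea behind such software can be illustrated with a minimal sketch; the two continuous process variables and the response surface are invented for the example:

```python
# Minimal illustration of optimizing a process response with the Nelder-Mead
# simplex method, the algorithm family behind the software described above.
import numpy as np
from scipy.optimize import minimize

def negative_yield(v):
    temp, ph = v                       # two continuous process variables
    # Toy response surface with an optimum near temp=60, ph=5.5 (assumed)
    return -(100 - 0.1 * (temp - 60) ** 2 - 4.0 * (ph - 5.5) ** 2)

res = minimize(negative_yield, x0=[50.0, 7.0], method="Nelder-Mead")
print(res.x, -res.fun)                 # optimal settings and predicted yield
```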

  2. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets, which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  3. Choice of optimal working fluid for binary power plants at extremely low temperature brine

    Science.gov (United States)

    Tomarov, G. V.; Shipkov, A. A.; Sorokina, E. V.

    2016-12-01

    The problems of geothermal energy development based on binary power plants utilizing low-potential geothermal resources are considered. It is shown that one possible way of increasing the efficiency of heat utilization of geothermal brine over a wide temperature range is the use of multistage power systems with series-connected binary power plants based on incremental primary energy conversion. Some practically significant results of design-analytical investigations of the physicochemical properties of various organic substances, and of their influence on the main parameters of the flowsheet and the technical and operational characteristics of heat-mechanical and heat-exchange equipment, are presented for a binary power plant operating on extremely low-temperature geothermal brine (70°C). The calculated geothermal brine specific flow rate, net capacity, and other operating characteristics of 2.5 MW binary power plants using various organic substances are of practical interest. It is shown that the choice of working fluid significantly influences the parameters of the flowsheet and the operational characteristics of the binary power plant, and that selecting the working fluid amounts to searching for a compromise among efficiency, safety, and ecological criteria. For investigations of working fluid selection, it is proposed to plot multiaxis complex diagrams of the relative parameters and characteristics of binary power plants. Examples are given of plotting and analyzing such diagrams for choosing the working fluid when the efficiency of geothermal brine utilization is taken as the main priority.

  4. Multi-objective Optimization for the Robust Performance of Drinking Water Treatment Plants under Climate Change and Climate Extremes

    Science.gov (United States)

    Raseman, W. J.; Kasprzyk, J. R.; Rosario-Ortiz, F.; Summers, R. S.; Stewart, J.; Livneh, B.

    2016-12-01

    To promote public health, the United States Environmental Protection Agency (US EPA), and similar entities around the world enact strict laws to regulate drinking water quality. These laws, such as the Stage 1 and 2 Disinfectants and Disinfection Byproducts (D/DBP) Rules, come at a cost to water treatment plants (WTPs) which must alter their operations and designs to meet more stringent standards and the regulation of new contaminants of concern. Moreover, external factors such as changing influent water quality due to climate extremes and climate change, may force WTPs to adapt their treatment methods. To grapple with these issues, decision support systems (DSSs) have been developed to aid WTP operation and planning. However, there is a critical need to better address long-term decision making for WTPs. In this poster, we propose a DSS framework for WTPs for long-term planning, which improves upon the current treatment of deep uncertainties within the overall potable water system including the impact of climate on influent water quality and uncertainties in treatment process efficiencies. We present preliminary results exploring how a multi-objective evolutionary algorithm (MOEA) search can be coupled with models of WTP processes to identify high-performing plans for their design and operation. This coupled simulation-optimization technique uses Borg MOEA, an auto-adaptive algorithm, and the Water Treatment Plant Model, a simulation model developed by the US EPA to assist in creating the D/DBP Rules. Additionally, Monte Carlo sampling methods were used to study the impact of uncertainty of influent water quality on WTP decision-making and generate plans for robust WTP performance.

  5. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Hundecha, Y.; Lawrence, D.

    2015-01-01

    Information on extreme precipitation for future climate is needed to assess the changes in the frequency and intensity of flooding. The primary source of information in climate change impact studies is climate model projections. However, due to the coarse resolution and biases of these models......), three are bias correction (BC) methods, and one is a perfect prognosis method. The eight methods are used to downscale precipitation output from 15 regional climate models (RCMs) from the ENSEMBLES project for 11 catchments in Europe. The overall results point to an increase in extreme precipitation...... that at least 30% and up to approximately half of the total variance is derived from the SDMs. This study illustrates the large variability in the expected changes in extreme precipitation and highlights the need for considering an ensemble of both SDMs and climate models. Recommendations are provided...

  6. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real-world example. The optimization methods are based on PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process...

  7. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this disease. QuLine is a computer tool capable of defining genetic models, breeding strategies and predicting parental selection using known gene information.

  8. Exact and useful optimization methods for microeconomics

    NARCIS (Netherlands)

    Balder, E.J.|info:eu-repo/dai/nl/21623896X

    2011-01-01

    This paper points out that the treatment of utility maximization in current textbooks on microeconomic theory is deficient in at least three respects: breadth of coverage, completeness-cum-coherence of solution methods and mathematical correctness. Improvements are suggested in the form of a

  9. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    OpenAIRE

    Sen Zhang; Yongquan Zhou

    2015-01-01

    One heuristic evolutionary algorithm recently proposed is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on Powell local optimization method, and we call it PGWO. PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique. Hence, the PGWO could be applied in solving cluster...
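    A minimal sketch of the basic GWO position update (without the Powell refinement that PGWO adds) shows the mechanism: candidates move relative to the three current best wolves. The sphere function is a placeholder objective:

```python
# Basic grey wolf optimizer sketch: each wolf's new position averages three
# encircling steps toward the alpha, beta, and delta (the three best wolves),
# with the exploration coefficient 'a' shrinking linearly from 2 to 0.
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
n_wolves, dim, iters = 20, 5, 200
X = rng.uniform(-10, 10, (n_wolves, dim))

for t in range(iters):
    a = 2 * (1 - t / iters)                        # a decreases linearly 2 -> 0
    leaders = X[np.argsort(sphere(X))[:3]]         # alpha, beta, delta
    Xnew = np.zeros_like(X)
    for L in leaders:
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        A, C = 2 * a * r1 - a, 2 * r2
        Xnew += L - A * np.abs(C * L - X)          # encircling step toward leader
    X = Xnew / 3                                   # average of the three guides

print(sphere(X).min())                             # near 0 for the sphere function
```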

  10. Algorithmic Methods for Optimization in Public Transport (Dagstuhl Seminar 16171)

    OpenAIRE

    Kroon, Leo G.; Schöbel, Anita; Wagner, Dorothea

    2016-01-01

    This report documents the talks and discussions at Dagstuhl seminar 16171, "Algorithmic Methods for Optimization in Public Transport". The seminar brought together researchers from algorithmics, algorithm engineering, operations research, mathematical optimization, and engineering, all interested in algorithms in public transportation. Several practitioners were also able to join the group and brought valuable insights on current practice and challenging problems.

  11. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  12. Optimization of Nanostructuring Burnishing Technological Parameters by Taguchi Method

    Science.gov (United States)

    Kuznetsov, V. P.; Dmitriev, A. I.; Anisimova, G. S.; Semenova, Yu V.

    2016-04-01

    On the basis of the Taguchi optimization method, an approach is developed for studying the influence of the technological parameters of nanostructuring burnishing on the surface layer microhardness criterion. Optimal values of the burnishing force, feed, and number of tool passes are determined for the hardening treatment of hardened AISI 420 steel.
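    The core Taguchi analysis step can be sketched compactly: convert repeated responses into a larger-is-better signal-to-noise ratio and average it per factor level. The L4 orthogonal array and the microhardness values below are invented for illustration:

```python
# Taguchi analysis sketch: larger-is-better S/N = -10*log10(mean(1/y^2)),
# averaged per factor level over an L4(2^3) orthogonal array to rank the
# burnishing parameter settings. All numbers here are made up.
import numpy as np

# L4 orthogonal array: 3 factors (force, feed, passes) at 2 levels each
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# Two repeated microhardness measurements per run (toy data)
y = np.array([[410, 415], [432, 428], [451, 455], [440, 444]], float)

sn = -10 * np.log10(np.mean(1.0 / y**2, axis=1))   # larger-is-better S/N

for j, name in enumerate(["force", "feed", "passes"]):
    means = [sn[L4[:, j] == lvl].mean() for lvl in (0, 1)]
    print(name, means)      # the level with the higher mean S/N is preferred
```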

  13. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    Science.gov (United States)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development
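    The swarm mechanics described above (each particle blending its own memory with the swarm's accumulated knowledge) reduce to two update equations, sketched here on a generic test function rather than one of the trajectory problems from the thesis:

```python
# Canonical particle swarm optimization sketch: velocities mix inertia, a
# pull toward each particle's personal best, and a pull toward the global
# best; positions then move by the velocity.
import numpy as np

def f(x):
    return np.sum(x**2, axis=1)            # placeholder fitness landscape

rng = np.random.default_rng(0)
n, dim, iters = 30, 4, 300
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
pbest, pval = X.copy(), f(X)
gbest = pbest[pval.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    val = f(X)
    better = val < pval                    # update personal memories
    pbest[better], pval[better] = X[better], val[better]
    gbest = pbest[pval.argmin()]           # update swarm knowledge

print(f(gbest[None])[0])                   # near 0 for this test function
```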

  14. The Flight Optimization System Weights Estimation Method

    Science.gov (United States)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube with wing aircraft and a substantial amount of effort went into its development. This report serves as a comprehensive documentation of the FLOPS weight estimation method. The development process is presented with the weight estimation process.

  15. Reliability-based design optimization using convex approximations and sequential optimization and reliability assessment method

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2010-01-15

    In this study, an effective method for reliability-based design optimization (RBDO) is proposed that enhances the sequential optimization and reliability assessment (SORA) method with convex approximations. In SORA, reliability estimation and deterministic optimization are performed sequentially, and the sensitivity and function value of the probabilistic constraint at the most probable point (MPP) are obtained in the reliability analysis loop. In this study, convex approximations of the probabilistic constraints are constructed by utilizing the sensitivity and function value of each probabilistic constraint at the MPP. Hence, the proposed method requires far fewer function evaluations of the probabilistic constraints in the deterministic optimization than the original SORA method. The efficiency and accuracy of the proposed method were verified through numerical examples.

  16. Danish extreme wind atlas: Background and methods for a WAsP engineering option

    Energy Technology Data Exchange (ETDEWEB)

    Rathmann, O.; Kristensen, L.; Mann, J. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark); Hansen, S.O. [Svend Ole Hansen ApS, Copenhagen (Denmark)

    1999-03-01

    Extreme wind statistics is necessary design information when establishing wind farms and erecting bridges, buildings, and other structures in the open air. Normal mean wind statistics, in terms of directional and speed distributions, may be estimated by wind atlas methods and are used, for example, to estimate the annual energy output of wind turbines. It is the purpose of the present work to extend the wind atlas method to also include local extreme wind statistics, so that an extreme value such as the 50-year wind can be estimated at locations of interest. Together with turbulence estimates, such information is important for determining the strength wind turbines or structures need to withstand high wind loads. In the `WAsP Engineering` computer program, a flow model which includes a model for the dynamic roughness of water surfaces is used to realise such an extended wind atlas method. Based on an extended wind atlas that also contains extreme wind statistics, the program can estimate extreme winds in addition to mean winds and turbulence intensities at specified positions and heights. (au) EFP-97. 15 refs.
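    The extreme-value step of such a method can be sketched by fitting a Gumbel distribution to annual maximum wind speeds and reading off the 50-year return value; the synthetic annual maxima below stand in for site data, and the flow-model corrections of `WAsP Engineering` are not reproduced:

```python
# Gumbel-based return-level sketch: fit annual maxima, then the T-year wind
# is the quantile with exceedance probability 1/T per year. Data synthetic.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(7)
annual_max = gumbel_r.rvs(loc=24.0, scale=3.0, size=30, random_state=rng)  # m/s

loc, scale = gumbel_r.fit(annual_max)
u50 = gumbel_r.ppf(1 - 1 / 50, loc=loc, scale=scale)   # 50-year wind speed
print(f"estimated 50-year wind: {u50:.1f} m/s")
```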

  17. OPTIMAL SIGNAL PROCESSING METHODS IN GPR

    Directory of Open Access Journals (Sweden)

    Saeid Karamzadeh

    2014-01-01

    In the past three decades, Ground Penetrating Radar (GPR) has found many applications in real life, and the radar faces important challenges in both civil and military use. In this paper, the fundamentals of GPR systems are covered, and three important signal processing methods (Wavelet Transform, Matched Filter and Hilbert-Huang Transform) are compared with each other in order to obtain the most accurate information about objects that are in the subsurface or behind a wall.
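
    A minimal matched-filter sketch (one of the three methods compared) on a synthetic GPR trace; the pulse shape, sampling rate, and echo position are assumptions:

```python
# Matched filtering: correlate the received trace with the transmitted
# pulse to localize a reflector buried in noise.
import numpy as np

fs = 1e9                                   # 1 GS/s sampling (assumed)
t = np.arange(0, 20e-9, 1 / fs)
pulse = np.exp(-((t - 2e-9) / 0.5e-9) ** 2) * np.cos(2 * np.pi * 400e6 * t)

rng = np.random.default_rng(0)
trace = np.zeros(2048)
trace[700:700 + pulse.size] += 0.4 * pulse       # buried-reflector echo
trace += 0.05 * rng.standard_normal(trace.size)  # measurement noise

out = np.correlate(trace, pulse, mode="same")    # matched filtering
print("echo detected near sample", int(np.argmax(np.abs(out))))
```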

  18. Numerical methods problem solving optimal of technical systems

    Directory of Open Access Journals (Sweden)

    А.С. Климова

    2006-01-01

    Several numerical methods of searching for function extrema are proposed for solving multicriteria optimization problems. Research results drawing on experience with the optimal design of complex technical systems are presented.

  19. Optimization of Transient Response Radiation of Printed Ultra Wideband Dipole Antennas (Using Particle Swarm Optimization Method)

    Directory of Open Access Journals (Sweden)

    M. Mazanek

    2007-06-01

    In the case of particular ultra wideband applications (i.e. radar, positioning, etc.), it is crucial to know the transient responses of antennas. In the first part of the paper, the optimization process searches for the dipole shape that satisfies two requirements, i.e. good matching and minimal distortion. The particle swarm optimization method was used in the process of the dipole shape optimization. As a result, the optimized ultra wideband dipole is perfectly matched; moreover, it minimally distorts the applied signal. The second part of the paper discusses the influence of the feeding circuit on the radiating parameters and on the matching of the dipole antenna.
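
    For readers unfamiliar with the optimizer, a bare-bones PSO loop is sketched below on a stand-in objective (the sphere function); the paper's actual cost, which combines matching and distortion, is not reproduced.

```python
# Minimal particle swarm optimization loop on a toy objective.
import numpy as np

rng = np.random.default_rng(0)
sphere = lambda x: np.sum(x ** 2, axis=-1)

n, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration (assumed)
x = rng.uniform(-5, 5, (n, dim))          # particle positions
v = np.zeros_like(x)
pbest, pval = x.copy(), sphere(x)         # personal bests
gbest = pbest[np.argmin(pval)]            # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x += v
    f = sphere(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[np.argmin(pval)]

print("best objective value:", pval.min())
```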

  20. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    This paper proposes a method for the optimal placement of Phasor Measurement Units (PMUs) in state estimation under uncertainty. State estimation is first turned into an optimization problem in which the objective function is the number of unobservable buses, determined via Singular Value Decomposition (SVD). For the normal condition, a Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. When uncertainty is considered, a multiobjective optimization problem is formulated, and a DE algorithm based on the Pareto optimality method is proposed to solve it. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
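
    The SVD-based observability count that serves as the objective function can be illustrated on a toy network; the four-bus topology and the highly simplified measurement model below are assumptions:

```python
# Buses outside the row space of the PMU measurement matrix are
# unobservable; the rank is computed through SVD.
import numpy as np

lines = [(0, 1), (1, 2), (2, 3), (0, 3)]        # toy 4-bus network

def unobservable_buses(pmus, n=4):
    rows = []
    for b in pmus:
        e = np.zeros(n)
        e[b] = 1.0                              # PMU sees its own bus voltage
        rows.append(e)
        for i, j in lines:                      # via branch currents it also
            if b in (i, j):                     # sees neighbouring voltages
                e = np.zeros(n)
                e[j if b == i else i] = 1.0
                rows.append(e)
    H = np.array(rows) if rows else np.zeros((1, n))
    return n - np.linalg.matrix_rank(H)         # SVD-based rank

print(unobservable_buses({1}))      # PMU at bus 1 leaves bus 3 unobserved
print(unobservable_buses({0, 2}))   # two PMUs make the network observable
```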

  1. Comparison of optimal design methods in inverse problems

    Science.gov (United States)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
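
    The flavor of comparing design criteria through the Fisher information matrix can be conveyed with a small sketch on the logistic model; the sampling grids and parameters are invented, and tr(F^-1) is used only as a stand-in for the paper's SE-optimal criterion.

```python
# Compare sampling designs via the Fisher information matrix (FIM).
import numpy as np

r, K, y0 = 0.5, 10.0, 0.5                  # "true" parameters (assumed)

def y(t, r_=None, K_=None):
    r_, K_ = r if r_ is None else r_, K if K_ is None else K_
    return K_ / (1 + (K_ / y0 - 1) * np.exp(-r_ * t))

def criteria(t, h=1e-6):
    # finite-difference sensitivities with respect to r and K
    S = np.column_stack([(y(t, r_=r + h) - y(t)) / h,
                         (y(t, K_=K + h) - y(t)) / h])
    F = S.T @ S                            # FIM under unit-variance noise
    return (np.linalg.det(F),              # D-optimal: maximize det F
            np.linalg.eigvalsh(F)[0],      # E-optimal: maximize min eig
            np.trace(np.linalg.inv(F)))    # SE-like: minimize tr F^-1

designs = {"uniform": np.linspace(0.5, 15, 8),
           "clustered": np.concatenate([np.linspace(2, 6, 6), [12, 15]])}
for name, t in designs.items():
    d, e, se = criteria(t)
    print(f"{name:9s} det={d:.3g}  min-eig={e:.3g}  tr(F^-1)={se:.3g}")
```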

  2. Malliavin method for optimal investment in financial markets with memory

    Directory of Open Access Journals (Sweden)

    An Qiguang

    2016-01-01

    We consider a financial market with memory effects in which wealth processes are driven by mean-field stochastic Volterra equations. In this financial market, the classical dynamic programming method cannot be used to study the optimal investment problem, because the solution of a mean-field stochastic Volterra equation is not a Markov process. In this paper, a new method based on the Malliavin calculus introduced in [1] is used to obtain the optimal investment in a Volterra-type financial market. We establish a necessary and sufficient condition for the optimal investment in this financial market with memory via a mean-field stochastic maximum principle.

  3. Optimal poroelastic layer sequencing for sound transmission loss maximization by topology optimization method.

    Science.gov (United States)

    Lee, Joong Seok; Kim, Eun Il; Kim, Yoon Young; Kim, Jung Soo; Kang, Yeon June

    2007-10-01

    Optimal layer sequencing of a multilayered acoustical foam is solved to maximize its sound transmission loss. A foam consisting of air and poroelastic layers can be optimized when a limited amount of a poroelastic material is allowed. By formulating the sound transmission loss maximization problem as a one-dimensional topology optimization problem, optimal layer sequencing and thicknesses were systematically found for several single frequencies and frequency ranges. For the optimization, the transmission losses of air and poroelastic layers were calculated by the transfer matrix derived from Biot's theory. By interpolating five intrinsic parameters among the poroelastic material parameters, distinct air-poroelastic layer distributions were obtained; no filtering or postprocessing was necessary. The foam layouts optimized by the proposed method were shown to differ depending on the frequency bands of interest.

  4. Method and system for SCR optimization

    Science.gov (United States)

    Lefebvre, Wesley Curt [Boston, MA]; Kohn, Daniel W [Cambridge, MA]

    2009-03-10

    Methods and systems are provided for controlling SCR performance in a boiler. The boiler includes one or more generally cross sectional areas. Each cross sectional area can be characterized by one or more profiles of one or more conditions affecting SCR performance and be associated with one or more adjustable desired profiles of the one or more conditions during the operation of the boiler. The performance of the boiler can be characterized by boiler performance parameters. A system in accordance with one or more embodiments of the invention can include a controller input for receiving a performance goal for the boiler corresponding to at least one of the boiler performance parameters and for receiving data values corresponding to boiler control variables and to the boiler performance parameters. The boiler control variables include one or more current profiles of the one or more conditions. The system also includes a system model that relates one or more profiles of the one or more conditions in the boiler to the boiler performance parameters. The system also includes an indirect controller that determines one or more desired profiles of the one or more conditions to satisfy the performance goal for the boiler. The indirect controller uses the system model, the received data values and the received performance goal to determine the one or more desired profiles of the one or more conditions. The system also includes a controller output that outputs the one or more desired profiles of the one or more conditions.

  5. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    Energy Technology Data Exchange (ETDEWEB)

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
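
    A minimal IFORM contour sketch, assuming a lognormal significant wave height and a conditionally normal energy period with invented parameters; a real study fits these to hindcast or buoy data and, as proposed in the paper, may first decorrelate the data with PCA.

```python
# Map the circle of radius beta in standard-normal space to (Hs, Te)
# through an assumed joint model via a Rosenblatt-type transform.
import numpy as np
from scipy.stats import norm, lognorm

T_return = 100 * 365.25 * 24          # 100-year return period in hours
T_seastate = 3.0                      # 3-hour sea states
beta = norm.ppf(1 - 1 / (T_return / T_seastate))   # reliability index

theta = np.linspace(0, 2 * np.pi, 361)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

hs = lognorm.ppf(norm.cdf(u1), s=0.6, scale=np.exp(0.8))
te = norm.ppf(norm.cdf(u2), loc=4 + 2 * np.sqrt(hs), scale=0.5)

print(f"reliability index: {beta:.2f}")
print(f"max Hs on contour: {hs.max():.1f} m")
```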

  6. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the

  7. Climate change effects on extreme flows of water supply area in Istanbul: utility of regional climate models and downscaling method.

    Science.gov (United States)

    Kara, Fatih; Yucel, Ismail

    2015-09-01

    This study investigates the climate change impact on changes in mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Outputs from 15 regional climate models of the EU-ENSEMBLES project and a downscaling method based on local implications of geophysical variables were used for the comparative analyses. An automated calibration algorithm was used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was implemented to simulate daily flows using precipitation and temperature data from climate models with and without the downscaling method for the reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from daily flows, and their changes throughout the four seasons and the year were evaluated by comparing their values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling improves this underestimation, particularly for extreme events. Depending on the precipitation input from climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows through all seasons. However, this underestimation is markedly improved for all seasons, especially spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference to the future period increase for winter and spring and decrease for fall and summer. These changes are more significant with downscaled inputs. With respect to the current period, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, the effective storage and water use should be sustained.

  8. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound directly collected in the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extrema, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
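
    The wavelet-threshold step itself can be sketched with the PyWavelets package; here a universal threshold stands in for the per-level thresholds that the improved FOA would tune, and the signal is synthetic.

```python
# Decompose, soft-threshold the detail coefficients, reconstruct.
import numpy as np
import pywt

fs = 8000
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.4 * np.random.default_rng(0).standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")[: noisy.size]

print("residual RMS:", np.sqrt(np.mean((denoised - clean) ** 2)))
```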

  9. Thickness optimization of fiber reinforced laminated composites using the discrete material optimization method

    DEFF Research Database (Denmark)

    Sørensen, Søren Nørgaard; Lund, Erik

    2012-01-01

    This work concerns a novel large-scale multi-material topology optimization method for simultaneous determination of the optimum variable integer thickness and fiber orientation throughout laminate structures with fixed outer geometries while adhering to certain manufacturing constraints. ... The conceptual combinatorial/integer problem is relaxed to a continuous problem and solved on the basis of the so-called Discrete Material Optimization method, explicitly including the manufacturing constraints as linear constraints. ...

  10. A short numerical study on the optimization methods influence on topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Sigmund, Ole; Stolpe, Mathias

    2017-01-01

    2013) is the slow convergence that is often encountered in practice when an almost solid-and-void design is found. The purpose of this forum article is to present some preliminary observations on how designs evolve during the optimization process for different choices of optimization methods. ... Although the discussion is centered on density-based methods, it may be equally relevant to level-set and phase-field approaches. ...

  11. Hybrid DFP-CG method for solving unconstrained optimization problems

    Science.gov (United States)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient and quasi-Newton methods, building on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation in this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both the sufficient descent and global convergence properties.
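
    The plain DFP quasi-Newton building block of such a hybrid can be shown on a toy quadratic with an exact line search; the authors' particular CG/quasi-Newton switching rule is not reproduced.

```python
# DFP update of the inverse Hessian on a quadratic test problem.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b            # gradient of 0.5 x'Ax - b'x

x = np.zeros(2)
H = np.eye(2)                         # inverse-Hessian approximation
for _ in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g                        # quasi-Newton search direction
    alpha = -(g @ d) / (d @ A @ d)    # exact step for a quadratic
    s = alpha * d
    yk = grad(x + s) - g
    # DFP formula: H+ = H + ss'/(s'y) - (H y y' H)/(y' H y)
    H += np.outer(s, s) / (s @ yk) \
         - (H @ np.outer(yk, yk) @ H) / (yk @ H @ yk)
    x = x + s

print("solution:", x, " expected:", np.linalg.solve(A, b))
```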

  12. Genomic methods and microbiological technologies for profiling novel and extreme environments for the extreme microbiome project (XMP)

    OpenAIRE

    Tighe, S; Afshinnekoo, E; Rock, TM; McGrath, K; Alexander, N; McIntyre, A; Ahsanuddins, S; Bezdan, D; Green, SJ; Joye, S; Johnson, SS; Baldwin, DA; Bivens, N; Ajami, N; Carmical, JR

    2017-01-01

    © 2017, Association of Biomolecular Resource Facilities. All rights reserved. The Extreme Microbiome Project (XMP) is a project launched by the Association of Biomolecular Resource Facilities Metagenomics Research Group (ABRF MGRG) that focuses on whole genome shotgun sequencing of extreme and unique environments using a wide variety of biomolecular techniques. The goals are multifaceted, including development and refinement of new techniques for the following: 1) the detection and characteri...

  13. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
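
    The equivalent-annual-cost screening at the heart of the method can be illustrated with invented measure costs and savings; a real analysis would take the energy savings from building energy simulations, as the report describes.

```python
# Equivalent annual cost (EAC) vs annual savings for hypothetical measures.
measures = [("attic insulation", 1800, 220),   # (name, cost $, savings $/yr)
            ("air sealing", 900, 140),
            ("window upgrade", 6500, 310)]
r, n = 0.05, 20                                # discount rate, years (assumed)
crf = r / (1 - (1 + r) ** -n)                  # capital recovery factor

for name, cost, savings in measures:
    eac = cost * crf                           # equivalent annual cost
    print(f"{name:16s} EAC ${eac:6.0f}/yr   net ${savings - eac:5.0f}/yr")
```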

  14. Optimal Power Flow of the Algerian Electrical Network using an Ant Colony Optimization Method

    Directory of Open Access Journals (Sweden)

    Tarek BOUKTIR

    2005-06-01

    This paper presents the solution of the optimal power flow (OPF) problem of a power system via an Ant Colony Optimization metaheuristic method. The objective is to minimize the total fuel cost of thermal generating units while maintaining an acceptable system performance in terms of limits on generator real and reactive power outputs, bus voltages, shunt capacitors/reactors, transformer tap settings and the power flow of transmission lines. Simulation results on the Algerian Electrical Network show that the Ant Colony Optimization method converges quickly to the global optimum.

  15. A method to find the 50-year extreme load during production

    NARCIS (Netherlands)

    Bos, R.; Veldkamp, D.

    2016-01-01

    An important yet difficult task in the design of wind turbines is to assess the extreme load behaviour, most notably finding the 50-year load. Where existing methods often focus on ways to extrapolate from small sample sizes, this paper proposes a different approach. It combines generating

  16. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity

    NARCIS (Netherlands)

    Pellikaan, P.; Krogt, M.M. van der; Carbone, V.; Fluit, R.; Vigneron, L.M.; Deun, J. Van; Verdonschot, N.J.J.; Koopman, H.F.J.M.

    2014-01-01

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based

  17. Surveillance of extreme hyperbilirubinaemia in Denmark. A method to identify the newborn infants

    DEFF Research Database (Denmark)

    Bjerre, J.V.; Petersen, Jes Reinholdt; Ebbesen, F.

    2008-01-01

    AIM: To describe the incidence of infants born at term or near-term with extreme hyperbilirubinaemia. METHODS: The study period was between 1 January 2002 and 31 December 2005, and included all infants born alive at term or near-term in Denmark. Medical reports on all newborn infants with a total...

  18. On some other preferred method for optimizing the welded joint

    Directory of Open Access Journals (Sweden)

    Pejović Branko B.

    2016-01-01

    The paper presents an example of size optimization with respect to welding costs in a characteristically loaded welded joint. In the first stage, the variables and constant parameters are defined, and the mathematical form of the optimization function is determined. The following stage of the procedure defines and imposes the most important constraint functions that limit the design of structures, which the technologist and the designer should take into account. Subsequently, a mathematical optimization model of the problem is derived, which is efficiently solved by the proposed method of geometric programming. Further, a mathematically grounded optimization algorithm of the proposed method is developed, with a main set of equations defining the problem that are valid under certain conditions. Thus, the primal optimization task is reduced to the dual task through a corresponding function, which is easier to solve than the primal task of optimizing the objective function; the main reason for this is the resulting set of linear equations. A correlation is exploited between the optimal primal vector that minimizes the objective function and the dual vector that maximizes the dual function. The method is illustrated on a practical computational example with a different number of constraint functions. It is shown that for the case of a lower level of complexity, a solution is reached through an appropriate maximization of the dual function by mathematical analysis and differential calculus.

  19. Extreme weather exposure identification for road networks - a comparative assessment of statistical methods

    Science.gov (United States)

    Schlögl, Matthias; Laaha, Gregor

    2017-04-01

    The assessment of road infrastructure exposure to extreme weather events is of major importance for scientists and practitioners alike. In this study, we compare the different extreme value approaches and fitting methods with respect to their value for assessing the exposure of transport networks to extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series (PDS) over the standardly used annual maxima series (AMS) in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption of the threshold excess approach (employing PDS) being superior to the block maxima approach (employing AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas an opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may outperform the possible gain of information from including additional extreme events by far. This effect was visible from neither the square-root criterion nor standardly used graphical diagnosis (mean residual life plot) but rather from a direct comparison of AMS and PDS in combined quantile plots. We therefore recommend performing AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, not only in cases where threshold selection and dependency introduces biases to the PDS approach but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of
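
    The AMS-versus-PDS comparison discussed here can be sketched on synthetic daily data: a GEV fit to block maxima against a generalized Pareto fit to threshold excesses, compared through 100-year return levels. Data, threshold, and record length below are invented.

```python
# Block maxima (AMS/GEV) vs threshold excesses (PDS/GPD).
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(1)
years = 50
daily = rng.gamma(0.4, 8.0, size=365 * years)      # synthetic daily series

ams = daily.reshape(years, 365).max(axis=1)        # annual maxima series
c, loc, scale = genextreme.fit(ams)
rl_ams = genextreme.ppf(1 - 1 / 100, c, loc, scale)

u = np.quantile(daily, 0.995)                      # threshold (a choice!)
exc = daily[daily > u] - u                         # partial duration series
lam = exc.size / years                             # mean exceedances/year
cp, _, sp = genpareto.fit(exc, floc=0)
rl_pds = u + genpareto.ppf(1 - 1 / (100 * lam), cp, 0, sp)

print(f"100-yr return level  AMS/GEV: {rl_ams:.1f}   PDS/GPD: {rl_pds:.1f}")
```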

  20. Inferences on weather extremes and weather-related disasters: a review of statistical methods

    Directory of Open Access Journals (Sweden)

    H. Visser

    2012-02-01

    The study of weather extremes and their impacts, such as weather-related disasters, plays an important role in research of climate change. Due to the great societal consequences of extremes – historically, now and in the future – the peer-reviewed literature on this theme has been growing enormously since the 1980s. Data sources have a wide origin, from century-long climate reconstructions from tree rings to relatively short (30 to 60 yr) databases with disaster statistics and human impacts.

    When scanning the peer-reviewed literature on weather extremes and their impacts, it is noticeable that many different methods are used to make inferences. However, discussions of these methods are rare. Such discussions are important, since a particular methodological choice might substantially influence the inferences made. A calculation of a return period of once in 500 yr based on a normal distribution will deviate from that based on a Gumbel distribution. And the particular choice between a linear or a flexible trend model might influence inferences as well.

    In this article, a concise overview of statistical methods applied in the field of weather extremes and weather-related disasters is given. Methods have been evaluated as to stationarity assumptions, the choice of specific probability density functions (PDFs) and the availability of uncertainty information. As for stationarity assumptions, the outcome was that good testing is essential. Inferences on extremes may be wrong if data are assumed stationary while they are not. The same holds for the block-stationarity assumption. As for PDF choices, it was found that often more than one PDF shape fits the same data. From a simulation study the conclusion can be drawn that both the generalized extreme value (GEV) distribution and the log-normal PDF fit very well to a variety of indicators. The application of the normal and Gumbel distributions is more limited. As for uncertainty, it is

  1. Using necessary optimality conditions for acceleration of the nonuniform covering optimization method

    Directory of Open Access Journals (Sweden)

    Evtushenko Yury

    2016-01-01

    This paper deals with the non-uniform covering method, which is aimed at deterministic global optimization. The method finds a feasible solution to the optimization problem numerically and proves that the obtained solution differs from the optimal one by no more than a given accuracy. The numerical proof consists of constructing a set of covering sets - the coverage. The number of elements in the coverage can be very large and even exceed the total amount of available computer resources. The basic method of coverage construction is the comparison of upper and lower bounds on the value of the objective function. In this work we propose to use necessary optimality conditions of first and second order to reduce the search for box-constrained problems. We provide a description of the algorithm and prove its correctness. The efficiency of the proposed approach is studied on test problems.
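
    The covering idea, reduced to one dimension with a Lipschitz lower bound standing in for the paper's bounds and optimality conditions, might look like the following; the objective, Lipschitz constant, and accuracy are assumptions.

```python
# Boxes whose lower bound shows they cannot beat the incumbent by more
# than eps are discarded as "covered"; the rest are bisected.
import numpy as np

f = lambda x: np.sin(3 * x) + 0.5 * x       # toy objective on [-3, 3]
L, eps = 3.5, 1e-3                          # Lipschitz constant, accuracy

boxes = [(-3.0, 3.0)]
best_x, best_f = 0.0, f(0.0)
while boxes:
    a, b = boxes.pop()
    m = 0.5 * (a + b)
    fm = f(m)
    if fm < best_f:
        best_x, best_f = m, fm
    # Lipschitz lower bound of f on [a, b]
    if fm - L * (b - a) / 2 >= best_f - eps:
        continue                            # box is covered: discard
    boxes += [(a, m), (m, b)]

print(f"min ~ {best_f:.4f} at x ~ {best_x:.4f} (within {eps} of optimal)")
```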

  2. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.

  3. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    Science.gov (United States)

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. Beyond these objectives, this paper focuses on fuel consumption, which drives environmental pollution and energy use. Considering the vehicles' load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization is triggered to find the optimal routes for non-served customers, which is time-consuming. Therefore, a two-phase robust dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) After finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase. (ii) Dynamically appearing customers are appended for service according to their service time and the vehicles' status. Global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers. (iii) A metric measuring the algorithm's robustness is given. The statistical results indicated that the routes obtained by the proposed method have better stability and robustness, but may be sub-optimal. Moreover, the time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  4. RELATIVE CAMERA POSE ESTIMATION METHOD USING OPTIMIZATION ON THE MANIFOLD

    Directory of Open Access Journals (Sweden)

    C. Cheng

    2017-05-01

    To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, the general optimization-based state estimation model is derived, going from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is cast into this general state estimation model, with the rigid body transformation parameterized by the Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and the optimization model of the rigid body transformation is thus established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler angle parameterization of rotation. Thus the proposed method can estimate the relative camera pose with high accuracy and robustness.

  5. Method to describe stochastic dynamics using an optimal coordinate.

    Science.gov (United States)

    Krivov, Sergei V

    2013-12-01

    A general method to describe the stochastic dynamics of Markov processes is suggested. The method aims to solve three related problems: the determination of an optimal coordinate for the description of stochastic dynamics; the reconstruction of time from an ensemble of stochastic trajectories; and the decomposition of stationary stochastic dynamics into eigenmodes which do not decay exponentially with time. The problems are solved by introducing additive eigenvectors which are transformed by a stochastic matrix in a simple way - every component is translated by a constant distance. Such solutions have peculiar properties. For example, an optimal coordinate for stochastic dynamics with detailed balance is a multivalued function. An optimal coordinate for a random walk on a line corresponds to the conventional eigenvector of the one-dimensional Dirac equation. The equation for the optimal coordinate in a slowly varying potential reduces to the Hamilton-Jacobi equation for the action function.

  6. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid

    2014-01-01

    There has been significant research on collective communication operations, in particular MPI broadcast, on distributed memory platforms. Most of this work optimizes the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid’5000 platform are presented.

  7. A Finite Element Removal Method for 3D Topology Optimization

    Directory of Open Access Journals (Sweden)

    M. Akif Kütük

    2013-01-01

    Topology optimization provides great convenience to designers during the design stage in many industrial applications. With this method, designers can obtain a rough model of any part at the beginning of the design stage by defining loading and boundary conditions. At the same time, the optimization can be used for the modification of a product that is in service. A lengthy solution time is a disadvantage of this method, which has kept it from becoming widespread. In order to eliminate this disadvantage, an element removal algorithm has been developed for topology optimization. In this study, the element removal algorithm is applied to 3-dimensional parts, and the results are compared with those available in the related literature. In addition, the effects of the method on solution times are investigated.

  8. Numerical methods for optimal control problems with state constraints

    CERN Document Server

    Pytlak, Radosław

    1999-01-01

    While optimality conditions for optimal control problems with state constraints have been extensively investigated in the literature, the results pertaining to numerical methods are relatively scarce. This book fills the gap by providing a family of new methods. Among others, a novel convergence analysis of optimal control algorithms is introduced. The analysis refers to the topology of relaxed controls only to a limited degree and makes little use of Lagrange multipliers corresponding to state constraints. This approach enables the author to provide global convergence analysis of first-order and superlinearly convergent second-order methods. Further, the implementation aspects of the methods developed in the book are presented and discussed. The results concerning ordinary differential equations are then extended to control problems described by differential-algebraic equations in a comprehensive way for the first time in the literature.

  9. Optimization based inversion method for the inverse heat conduction problems

    Science.gov (United States)

    Mu, Huaiping; Li, Jingtao; Wang, Xueyao; Liu, Shi

    2017-05-01

    Precise estimation of the thermal physical properties of materials, boundary conditions, heat flux distributions, heat sources and initial conditions is highly desired for real-world applications. The inverse heat conduction problem (IHCP) analysis method provides an alternative approach for acquiring such parameters. The effectiveness of the inversion algorithm plays an important role in practical applications of the IHCP method. Different from traditional inversion models, in this paper a new inversion model that simultaneously highlights the measurement errors and the inaccurate properties of the forward problem is proposed to improve the inversion accuracy and robustness. A generalized cost function is constructed to convert the original IHCP into an optimization problem. An iterative scheme that splits a complicated optimization problem into several simpler sub-problems and integrates the superiorities of the alternative optimization method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is developed for solving the proposed cost function. Numerical experiment results validate the effectiveness of the proposed inversion method.
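
    A minimal optimization-based IHCP sketch (not the paper's split iterative scheme): recover a constant thermal diffusivity by least-squares fitting of a one-dimensional finite-difference forward model. Grid, boundary values, noise level, and the scalar solver are assumptions.

```python
# Fit the diffusivity of an explicit 1-D heat-conduction model to data.
import numpy as np
from scipy.optimize import minimize_scalar

nx, nt, dx, dt = 21, 400, 0.05, 1e-4

def forward(alpha):
    # explicit scheme with fixed end temperatures 1 and 0;
    # stable while alpha * dt / dx**2 <= 0.5
    T = np.zeros(nx)
    T[0] = 1.0
    r = alpha * dt / dx ** 2
    for _ in range(nt):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

rng = np.random.default_rng(0)
data = forward(4.0) + 0.01 * rng.standard_normal(nx)   # "measurements"

res = minimize_scalar(lambda a: np.sum((forward(a) - data) ** 2),
                      bounds=(0.1, 10.0), method="bounded")
print(f"recovered diffusivity: {res.x:.2f} (true value 4.0)")
```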

  10. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.

  11. Cost optimal river dike design using probabilistic methods

    NARCIS (Netherlands)

    Bischiniotis, K.; Kanning, W.; Jonkman, S.N.

    2014-01-01

    This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and

  12. On projection methods, convergence and robust formulations in topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence....

  13. Oil Reservoir Production Optimization using Single Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Völcker, Carsten; Frydendall, Jan

    2012-01-01

    are large-scale problems and require specialized numerical algorithms. In this paper, we combine a single shooting optimization algorithm based on sequential quadratic programming (SQP) with explicit singly diagonally implicit Runge-Kutta (ESDIRK) integration methods and a continuous adjoint method

  14. Airfoil optimization by using the Manifold Mapping method

    NARCIS (Netherlands)

    M. van der Jagt (Martin)

    2007-01-01

    In this report it is investigated whether the Manifold Mapping method can be used in airfoil optimization. Before the method can be implemented, a suitable airfoil parametrization must be chosen. Furthermore, a coarse and a fine model must be assigned. These models are the key to success for the

  15. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    In this study, an attempt was made to propose an optimization method for the quantitative determination of clay minerals in soil based on bulk chemical composition data. The fundamental principles and processes of the calculation are elucidated. Some samples were used for reliability verification of the method and the ...

  16. A new hybrid optimization method inspired from swarm intelligence: Fuzzy adaptive swallow swarm optimization algorithm (FASSO

    Directory of Open Access Journals (Sweden)

    Mehdi Neshat

    2015-11-01

    In this article, the objective was to present effective and optimal strategies aimed at improving the Swallow Swarm Optimization (SSO) method. The SSO is one of the best optimization methods based on swarm intelligence and is inspired by the intelligent behaviors of swallows. It offers a relatively strong method for solving optimization problems. However, despite its many advantages, the SSO suffers from two shortcomings. Firstly, the particles' movement speed is not controlled satisfactorily during the search due to the lack of an inertia weight. Secondly, the acceleration coefficients are not able to strike a balance between the local and the global search because they are not sufficiently flexible in complex environments. Therefore, the SSO algorithm does not provide adequate results when it searches functions such as the Step or Quadric function. Hence, the fuzzy adaptive Swallow Swarm Optimization (FASSO) method is introduced to deal with these problems. Highly accurate results are obtained by using an adaptive inertia weight and by combining two fuzzy logic systems to accurately calculate the acceleration coefficients. High speed of convergence, avoidance of local extrema, and a high level of error tolerance are the advantages of the proposed method. The FASSO was compared with eleven of the best PSO methods and with SSO on 18 benchmark functions. Finally, significant results were obtained.

  17. A Solution Quality Assessment Method for Swarm Intelligence Optimization Algorithms

    Science.gov (United States)

    Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the quality of the solution an algorithm obtains for practical problems; this greatly limits applications to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," the "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant results from statistics, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  18. A Combined Method in Parameters Optimization of Hydrocyclone

    Directory of Open Access Journals (Sweden)

    Jing-an Feng

    2016-01-01

    To achieve efficient separation of calcium hydroxide and impurities in carbide slag using a hydrocyclone, the particle-size properties of the carbide slag and the hydrocyclone operating parameters, slurry concentration and inlet slurry velocity, were selected for optimization. The optimization combines the Design of Experiments (DOE) method with the Computational Fluid Dynamics (CFD) method. Based on the Design Expert software, a central composite design (CCD) with three factors and five levels, amounting to five groups of 20 test responses, was constructed, and the experiments were performed with the numerical simulation software FLUENT. Through the analysis of variance of the numerical simulation results, regression equations for the pressure drop, overflow concentration, purity, and the separation efficiencies of the two solid phases were obtained. The influence of each factor on the responses was analyzed. Finally, optimized results were obtained by the multiobjective optimization method through the Design Expert software. Under the optimized conditions, a validation test by numerical simulation and a separation experiment were carried out separately. The results proved that the combined method can be used efficiently in studying the hydrocyclone and performs well in engineering applications.

  19. METHOD TO ASSESS THE EXTREME HYDROLOGICAL EVENTS IN DANUBE FLUVIAL DELTA

    Directory of Open Access Journals (Sweden)

    MARIAN MIERLĂ

    2012-03-01

    This paper tests a method for assessing extreme hydrological events in Romania, where extreme hydrological events should be understood as both extreme droughts and extreme floods. The test area is the fluvial part of the Danube Delta, the third largest delta in Europe (after those of the Volga and the Kuban). The method aims to increase knowledge about extreme hydrological events (drought and flooding) and to enable an appropriate response to them. The analysis considers the hydrological events of 2003 (the exceptional drought) and 2006 (the exceptional flood). LANDSAT satellite images from the periods under consideration were used, together with a hypsometric model of the Danube Delta for the study area. The two image datasets (2003 and 2006) give information about where the water boundary reached during the drought and the flood, respectively; the hypsometric data give the terrain elevation, which makes it possible to establish which areas are flooded at a certain water level. Combining these datasets calibrates the hypsometric model of the Danube Delta in that region with respect to hydrological events, in the sense of constructing the hydrograds as isolines. This approach makes the hydrological events more concrete and easier to see on cartographic support. The information obtained raises awareness of extreme hydrological events, so that the measures taken to mitigate them can be more efficient.

  20. Application of the Most Likely Extreme Response Method for Wave Energy Converters

    Energy Technology Data Exchange (ETDEWEB)

    Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang; Lawson, Michael

    2016-06-24

    Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.
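
    The conditioning step can be sketched as follows: each frequency component of an assumed sea spectrum is weighted by the response RAO and rescaled so that the linear response peaks at the target extreme. The spectrum, RAO (taken as real, i.e. zero phase), and target below are all invented.

```python
# Rough MLER-style conditioned wave train under invented inputs.
import numpy as np

w = np.linspace(0.2, 2.0, 200)                     # rad/s
dw = w[1] - w[0]
Hs, Tp = 9.0, 14.0                                 # design sea state (assumed)
wp = 2 * np.pi / Tp
S = 5 / 16 * Hs ** 2 * wp ** 4 / w ** 5 * np.exp(-1.25 * (wp / w) ** 4)
rao = 1.0 / np.sqrt((1 - (w / 0.8) ** 2) ** 2 + (0.2 * w) ** 2)  # toy heave RAO

sigma_r2 = np.sum(S * rao ** 2) * dw               # variance of the response
r_target = 4.0 * np.sqrt(sigma_r2)                 # say, a 4-sigma extreme

amp = r_target / sigma_r2 * S * rao * dw           # MLER component amplitudes
t = np.linspace(-50.0, 50.0, 1001)
eta = (amp[None, :] * np.cos(np.outer(t, w))).sum(axis=1)

print("conditioned wave crest: %.2f m" % eta[np.argmin(np.abs(t))])
```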

  2. Optimization of the Runner for Extremely Low Head Bidirectional Tidal Bulb Turbine

    Directory of Open Access Journals (Sweden)

    Yongyao Luo

    2017-06-01

    This paper presents a multi-objective optimization procedure for bidirectional bulb turbine runners, carried out in ANSYS Workbench. The optimization procedure is able to check many more geometries with less manual work. In the procedure, the initial blade shape is parameterized; the inlet and outlet angles (β1, β2), as well as the starting and ending wrap angles (θ1, θ2), for the five sections of the blade profile are selected as design variables, and the optimization target is to obtain the maximum overall efficiency over the ebb and flood turbine modes. For the flow analysis, the ANSYS CFX code, with an SST (Shear Stress Transport) k-ω turbulence model, has been used to evaluate the efficiency of the turbine. An efficient response surface model relating the design parameters and the objective functions is obtained. The optimization strategy was used to optimize a model bulb turbine runner. Model tests were carried out to validate the final designs and the design procedure. For the four-bladed turbine, the efficiency improvement is 5.5% in the ebb operation direction and 2.9% in the flood operation direction, and 4.3% and 4.5%, respectively, for the three-bladed turbine. Numerical simulations were then performed to analyze the pressure pulsation on the pressure and suction sides of the blade for the prototype turbine with the optimal four-bladed and three-bladed runners. The results show that the runner rotational frequency (fn) is the dominant frequency of the pressure pulsations on the blades in the ebb and flood turbine modes, and that the gravitational effect, rather than rotor-stator interaction (RSI), plays an important role in a low-head horizontal axial turbine. The amplitudes of the pressure pulsations on the blade side facing the guide vanes vary little with the water head; however, the amplitudes of the pressure pulsations on the blade side facing the diffusion tube increase linearly with the water head. These results could provide

  3. Reverse optimization reconstruction method in non-null aspheric interferometry

    Science.gov (United States)

    Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian

    2015-10-01

    The aspheric non-null test achieves more flexible measurements than the null test. However, precise calibration of the retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for retrace error calibration as well as aspheric figure error extraction, based on system modeling. An optimization function is set up within the system model, in which the wavefront data from the experiment serve as the optimization objective while the figure error under test in the model is the optimization variable. The optimization is executed by reverse ray tracing in the system model until the test wavefront in the model is consistent with the experimental one. At this point, the surface figure error in the model is considered to be consistent with the experimental one. With Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations verifying the high accuracy of the ROR method are presented along with error considerations. A set of experiments was carried out to demonstrate the validity and repeatability of the ROR method. Compared with the results of a Zygo interferometer (null test), the measurement error of the ROR method is better than λ/10.

  4. An efficient linear programming method for Optimal Transportation

    OpenAIRE

    Oberman, Adam M.; Ruan, Yuanlong

    2015-01-01

    An efficient method for computing solutions to the Optimal Transportation (OT) problem with a wide class of cost functions is presented. The standard linear programming (LP) discretization of the continuous problem becomes intractable for moderate grid sizes. A grid refinement method results in a linear cost algorithm. Weak convergence of solutions is established. Barycentric projection of transference plans is used to improve the accuracy of solutions. The method is applied to more general pr...
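
    For small grids, the standard LP discretization that the paper starts from is easy to state and solve directly; the sketch below builds it for two discrete measures and a quadratic cost. The grid refinement strategy that makes large problems tractable is not shown.

```python
# Standard LP discretization of discrete optimal transportation:
# minimize <C, P> subject to row sums a, column sums b, and P >= 0.
import numpy as np
from scipy.optimize import linprog

n, m = 5, 6
rng = np.random.default_rng(2)
x, y = np.sort(rng.uniform(size=n)), np.sort(rng.uniform(size=m))
C = (x[:, None] - y[None, :]) ** 2          # quadratic ground cost
a = np.full(n, 1.0 / n)                     # source marginal
b = np.full(m, 1.0 / m)                     # target marginal

# Marginal constraints on the flattened plan P (row-major: index i*m + j).
A_rows = np.kron(np.eye(n), np.ones((1, m)))   # sum_j P_ij = a_i
A_cols = np.kron(np.ones((1, n)), np.eye(m))   # sum_i P_ij = b_j
A_eq = np.vstack([A_rows, A_cols])
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
P = res.x.reshape(n, m)                        # optimal transference plan
print("transport cost:", res.fun)
```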

  5. METHOD OF CALCULATING THE OPTIMAL HEAT EMISSION GEOTHERMAL WELLS

    Directory of Open Access Journals (Sweden)

    A. I. Akaev

    2015-01-01

    Full Text Available This paper presents a simplified method for calculating the optimal regimes of fountain and pumped exploitation of geothermal wells, reducing scaling and corrosion during operation. Comparative characteristics are given to quantify the heat obtained from the formation for these methods of operation under the same wellhead pressure. The problem is solved by a graphic-analytical method based on a balance of pressure in the well with the heat pump.

  6. Optimization methods and silicon solar cell numerical models

    Science.gov (United States)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
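
    The coupling pattern described, an optimizer repeatedly calling a computationally expensive device model, can be sketched as follows. The scap1d_efficiency function is a cheap hypothetical stand-in for the real finite-difference simulation; counting calls illustrates the central concern about the number of model evaluations.

```python
# Sketch of coupling an optimizer to an expensive numerical device model while
# counting evaluations, the practical bottleneck described above.
# scap1d_efficiency() is a hypothetical stand-in for a full solar cell solver.
import numpy as np
from scipy.optimize import minimize

calls = 0

def scap1d_efficiency(x):
    """Placeholder for the expensive device simulation; returns efficiency in %."""
    global calls
    calls += 1
    doping_exp, junction_um, thickness_um = x
    return (20.0 - 0.5 * (doping_exp - 17.0) ** 2
                 - 2.0 * (junction_um - 0.4) ** 2
                 - 1e-4 * (thickness_um - 250.0) ** 2)

# Simultaneously vary doping, junction depth and cell thickness (toy bounds).
res = minimize(lambda x: -scap1d_efficiency(x),
               x0=np.array([16.0, 0.6, 200.0]),
               bounds=[(15.0, 19.0), (0.1, 2.0), (50.0, 400.0)])
print("optimal design:", np.round(res.x, 3))
print("efficiency: %.2f%%  simulator calls: %d" % (-res.fun, calls))
```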

  7. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    Science.gov (United States)

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method, named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization), for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
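
    In the simplest linear, unconstrained setting, maximizing a Rayleigh quotient x'Ax / x'Bx reduces to a generalized eigenvalue problem; QUADRO's sparse, robust, convex-programming machinery goes well beyond this, but the sketch below shows the underlying quantity being optimized.

```python
# Minimal illustration of Rayleigh quotient maximization: for symmetric A and
# positive-definite B, max_x (x'Ax)/(x'Bx) equals the largest generalized
# eigenvalue of (A, B). QUADRO itself solves a sparse, robust variant.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
p = 8
M = rng.normal(size=(p, p))
A = (M + M.T) / 2                       # symmetric "signal" matrix
N = rng.normal(size=(p, p))
B = N @ N.T + p * np.eye(p)             # positive-definite "variance" matrix

w, V = eigh(A, B)                       # generalized eigenvalues, ascending
x = V[:, -1]                            # argmax of the Rayleigh quotient
quotient = (x @ A @ x) / (x @ B @ x)
print("max Rayleigh quotient:", w[-1], "check:", quotient)
```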

  8. Automated discrete element method calibration using genetic and optimization algorithms

    Science.gov (United States)

    Do, Huy Q.; Aragón, Alejandro M.; Schott, Dingena L.

    2017-06-01

    This research aims at developing a universal methodology for the automated calibration of the microscopic properties of modelled granular materials. The proposed calibrator can be applied to different experimental set-ups. Two optimization approaches, (1) a genetic algorithm and (2) DIRECT optimization, are used to identify discrete element method input model parameters, e.g., the coefficients of sliding and rolling friction. The algorithms are used to minimize an objective function characterized by the discrepancy between the experimental macroscopic properties and the associated numerical results. Two test cases highlight the robustness, stability, and reliability of the two algorithms used for automated discrete element method calibration with different set-ups.
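
    The calibration idea reduces to black-box minimization of the discrepancy between measured and simulated macroscopic properties. The sketch below uses SciPy's differential evolution (a GA-family method) and a hypothetical one-line stand-in for a DEM run; note that a single experiment generally leaves the two friction coefficients underdetermined, which is one reason for combining several experimental set-ups.

```python
# Sketch of automated DEM calibration as black-box minimization: find friction
# coefficients whose simulated macroscopic response matches the measurement.
# dem_angle_of_repose() is a hypothetical stand-in for a full DEM simulation.
import numpy as np
from scipy.optimize import differential_evolution

measured_angle = 31.0  # degrees, from a (hypothetical) heap experiment

def dem_angle_of_repose(sliding_mu, rolling_mu):
    """Placeholder for a discrete element simulation of a granular heap."""
    return 20.0 + 18.0 * sliding_mu + 35.0 * rolling_mu

def discrepancy(params):
    return (dem_angle_of_repose(*params) - measured_angle) ** 2

res = differential_evolution(discrepancy,
                             bounds=[(0.1, 0.9), (0.0, 0.3)],
                             seed=4, tol=1e-8)
print("calibrated (sliding, rolling) friction:", np.round(res.x, 3))
print("simulated angle:", dem_angle_of_repose(*res.x))
```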

  9. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-08

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N. Collier, A.-L. Haji-Ali, F. Nobile, and R. Tempone.
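
    A minimal MLMC estimator makes the role of the hierarchy concrete: levels are nested time grids, and the estimator sums mean level differences computed from coupled coarse/fine paths. The per-level sample sizes below are arbitrary placeholders; choosing them (and the mesh refinement factors) optimally is exactly the topic of the talk.

```python
# Minimal multilevel Monte Carlo sketch for E[X_T] of geometric Brownian motion,
# with nested uniform time grids (step halved per level) and coupled paths.
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0

def level_difference(level, n_samples, n0=4):
    """Sample P_l - P_{l-1} with coupled Euler-Maruyama paths (P_{-1} := 0)."""
    nf = n0 * 2 ** level
    dt_f = T / nf
    dW = rng.normal(scale=np.sqrt(dt_f), size=(n_samples, nf))
    Xf = np.full(n_samples, X0)
    for k in range(nf):
        Xf = Xf + mu * Xf * dt_f + sigma * Xf * dW[:, k]
    if level == 0:
        return Xf
    dW_c = dW[:, 0::2] + dW[:, 1::2]          # coarse increments from fine ones
    dt_c = 2 * dt_f
    Xc = np.full(n_samples, X0)
    for k in range(nf // 2):
        Xc = Xc + mu * Xc * dt_c + sigma * Xc * dW_c[:, k]
    return Xf - Xc

# Geometric hierarchy with (placeholder) sample sizes decaying across levels.
samples = [200000, 50000, 12500, 3200]
estimate = sum(level_difference(l, n).mean() for l, n in enumerate(samples))
print("MLMC estimate:", estimate, "exact:", X0 * np.exp(mu * T))
```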

  10. Coordinated Optimal Operation Method of the Regional Energy Internet

    Directory of Open Access Journals (Sweden)

    Rishang Long

    2017-05-01

    Full Text Available The development of the energy internet has become one of the key ways to solve the energy crisis. This paper studies the system architecture, energy flow characteristics and coordinated optimization method of the regional energy internet. Considering the heat-to-electric ratio of a combined cooling, heating and power unit, energy storage life and the real-time electricity price, a double-layer optimal scheduling model is proposed, which includes economic and environmental benefit in the upper layer and energy efficiency in the lower layer. A hybrid particle swarm optimizer-individual variation ant colony optimization algorithm is used to improve computational efficiency and accuracy. Calculations and simulations on a test system show that energy savings, environmental protection and an economically optimal dispatching scheme are achieved.
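
    The record's double-layer model is solved with a hybrid PSO-ACO algorithm; a bare-bones particle swarm kernel on a toy dispatch-like cost conveys the basic machinery. All coefficients and the cost function below are illustrative, not the paper's model.

```python
# Bare-bones particle swarm optimizer of the kind used (in hybridized form)
# for the double-layer dispatch model above; the quadratic cost is a toy
# stand-in for the economic/environmental objective.
import numpy as np

rng = np.random.default_rng(6)

def dispatch_cost(x):
    """Toy convex stand-in for an economic + environmental dispatch objective."""
    return np.sum((x - np.array([0.3, 0.5, 0.2])) ** 2, axis=-1)

n_particles, dim, iters = 30, 3, 200
w, c1, c2 = 0.72, 1.49, 1.49                 # common PSO coefficient choices
x = rng.uniform(0.0, 1.0, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), dispatch_cost(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    val = dispatch_cost(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best dispatch found:", np.round(gbest, 3), "cost:", pbest_val.min())
```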

  11. Exergetic optimization of turbofan engine with genetic algorithm method

    Energy Technology Data Exchange (ETDEWEB)

    Turan, Onder [Anadolu University, School of Civil Aviation (Turkey)], e-mail: onderturan@anadolu.edu.tr

    2011-07-01

    With the growth of passenger numbers, emissions from the aeronautics sector are increasing and the industry is now working on improving engine efficiency to reduce fuel consumption. The aim of this study is to present the use of genetic algorithms, an optimization method based on biological principles, to optimize the exergetic performance of turbofan engines. The optimization was carried out using the exergy efficiency, overall efficiency and specific thrust of the engine as evaluation criteria, varying the pressure ratio, bypass ratio, turbine inlet temperature and flight altitude. Results showed that exergy efficiency can be maximized with higher altitudes, fan pressure ratio and turbine inlet temperature; the turbine inlet temperature is the most important parameter for increased exergy efficiency. This study demonstrated that genetic algorithms are effective in optimizing complex systems in a short time.

  12. THE METHOD OF TREATMENT OF PATIENTS WITH NONUNIONS AND GUNSHOT PSEUDOARTHROSIS OF LONG BONE OF EXTREMITIES

    Directory of Open Access Journals (Sweden)

    B. A. Akhmedov

    2010-01-01

    Full Text Available The authors' method of treatment for nonunions and pseudoarthrosis of the long bones of the extremities is described. It consists of mini-invasive preparation of the interfragmentary space and bone grafting with a cancellous graft from the wing of the ilium. The successful use of this method in 23 patients with gunshot wounds of the humeral, forearm, femoral and shin bones allows it to be recommended for wide application. The suggested method of surgical treatment can be used not only after gunshot wounds, but also after long bone fractures of other origins.

  13. Extreme Wind Calculation Applying Spectral Correction Method – Test and Validation

    DEFF Research Database (Denmark)

    Rathmann, Ole Steen; Hansen, Brian Ohrbeck; Larsén, Xiaoli Guo

    2016-01-01

    We present a test and validation of extreme wind calculation applying the Spectral Correction (SC) method as implemented in a DTU Wind Condition Software. This method can make do with a short-term (~1 year) local measured wind data series in combination with a long-term (10-20 years) reference modelled wind data series, like CFSR and CFDDA reanalysis data, for the site in question. The accuracy was validated by comparing with estimates by the traditional Annual Maxima (AM) method and the Peak Over Threshold (POT) method, applied to measurements, for six sites: four located in Denmark, one located in the Netherlands and one located in the USA, comprising both on-shore and off-shore sites. The SC method was applied to 1-year measured wind data while the AM and POT methods were applied to long-term measured wind data. Further, the consistency of the SC method

  14. A Building-Block Favoring Method for the Topology Optimization of Internal Antenna Design

    Directory of Open Access Journals (Sweden)

    Yen-Sheng Chen

    2015-01-01

    Full Text Available This paper proposes a new design technique for internal antenna development. The proposed method is based on the framework of topology optimization, incorporating three effective mechanisms that favor the building blocks of the associated optimization problems. Conventionally, the topology optimization of antenna structures discretizes a design space into uniform, rectangular pixels. However, the defining length of the resultant building blocks is so large that the problem becomes difficult; furthermore, the order of the building blocks becomes extremely high, so genetic algorithms (GAs) and binary particle swarm optimization (BPSO) are no more efficient than random search. In order to form tight linkage groups of building blocks, this paper proposes a novel approach to handling the design details. In particular, a nonuniform discretization is adopted for the design space, the initialization of the GA is based on orthogonal arrays (OAs) instead of a randomized population, and the control map of the GA is constructed by ensuring schema growth based on the generalized schema theorem. Using the proposed method, two internal antennas are successfully developed. The simulated and measured results show that the proposed technique significantly outperforms conventional topology optimization.

  15. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    Science.gov (United States)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. In this way, it is possible to optimize robot performance and to economize robot energy.

  16. Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization.

    Science.gov (United States)

    Yuan, Deming; Ho, Daniel W C; Xu, Shengyuan

    2016-09-01

    In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to the existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for our proposed method. We establish the convergence of the method by showing that it achieves an O(K^(-1/4)) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
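
    The flavor of consensus-based subgradient iterations can be shown in a few lines: each node mixes its neighbors' estimates through a doubly stochastic weight matrix and then steps along its own local subgradient. This plain (unregularized, unconstrained) variant is only a sketch of the setting; the paper's method adds dual variables, regularization and a single terminal projection.

```python
# Sketch of a consensus subgradient iteration for minimizing sum_i f_i(x),
# where f_i(x) = |x - a_i| is held privately by node i.
import numpy as np

a = np.array([1.0, 2.0, 4.0, 7.0])       # node-local data
n = len(a)
# Ring network with Metropolis-style weights (doubly stochastic, symmetric).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(n)                           # one scalar estimate per node
for k in range(1, 3001):
    step = 1.0 / np.sqrt(k)               # diminishing step size
    x = W @ x - step * np.sign(x - a)     # mix with neighbors, then local step
# All nodes approach a minimizer of sum_i |x - a_i| (any point in [2, 4]).
print("node estimates:", np.round(x, 3))
```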

  17. Panorama parking assistant system with improved particle swarm optimization method

    Science.gov (United States)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.

  18. Study on precision optimization method for laser displacement sensors

    Science.gov (United States)

    Bi, Chao; Bao, Longxiang; Wang, Liping; Fang, Jianguo

    2017-10-01

    As measuring technology develops, laser displacement sensors have become among the most commonly used sensors in the field of dimensional metrology, as a result of their versatility and mature technology. However, owing to differences in environmental conditions and variation in the measured surfaces, the measuring errors of a laser displacement sensor may be large in actual applications, so that the nominal accuracy of the sensor cannot be reached. Therefore, a precision optimization method for the laser displacement sensor is proposed in this paper, based on an analysis of the principle of optical trigonometry, which can be used to reduce the measuring errors. The method is a kind of spatial filtering algorithm based on a self-adjusting domain. On the basis of the idea of spatial filtering, the method can automatically determine the measuring errors and the optimization region according to the different measured surfaces. As the experimental results show, the optimization method can describe the measured object precisely and decrease the measuring error by up to 50%, which may remedy the low accuracy of optical scanning and measuring tasks. With the accuracy optimization method proposed in the paper, the sensor can reach a measuring accuracy at the micrometer level. Therefore, measurement of high efficiency and high precision can be achieved.

  19. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  20. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
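
    The iterative water-filling algorithms mentioned at the end are built from the classic single-user water-filling step, which is easy to sketch: given eigenchannel gains and a power budget, the water level is found by bisection.

```python
# Classic single-user water-filling, the building block of the iterative
# water-filling algorithms discussed above: maximize sum log(1 + g_i p_i)
# subject to sum p_i = P and p_i >= 0, giving p_i = max(0, mu - 1/g_i).
import numpy as np

def waterfill(gains, power, iters=100):
    lo, hi = 0.0, power + 1.0 / gains.min()   # bracket for the water level mu
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.sum() > power:
            hi = mu
        else:
            lo = mu
    return p

gains = np.array([2.0, 1.0, 0.5, 0.1])        # eigenchannel gains
p = waterfill(gains, power=2.0)
print("power allocation:", np.round(p, 4))
print("capacity [bits/use]:", np.sum(np.log2(1.0 + gains * p)))
```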

  1. Flexible waveform-constrained optimization design method for cognitive radar

    Science.gov (United States)

    Zhang, Xiaowen; Wang, Kaizhi; Liu, Xingzhao

    2017-07-01

    The problem of waveform optimization design for cognitive radar (CR) in the presence of extended target with unknown target impulse response (TIR) is investigated. On the premise of ensuring the TIR estimation precision, a flexible waveform-constrained optimization design method taking both target detection and range resolution into account is proposed. In this method, both the estimate of TIR and transmitted waveform can be updated according to the environment information fed back by the receiver. Moreover, rather than optimizing waveforms for a single design criterion, the framework can synthesize waveforms that provide a trade-off between competing design criteria. The trade-off is determined by the parameter settings, which can be adjusted according to the requirement of radar performance in each cycle of CR. Simulation results demonstrate that CR with the proposed waveform performs better than a traditional radar system with a fixed waveform and offers more flexibility and practicability.

  2. Control and Optimization Methods for Electric Smart Grids

    CERN Document Server

    Ilić, Marija

    2012-01-01

    Control and Optimization Methods for Electric Smart Grids brings together leading experts in power, control and communication systems, and consolidates some of the most promising recent research in smart grid modeling, control and optimization, in hopes of laying the foundation for future advances in this critical field of study. The contents comprise eighteen essays addressing a wide variety of control-theoretic problems for tomorrow's power grid. Topics covered include: Control architectures for power system networks with large-scale penetration of renewable energy and plug-in vehicles Optimal demand response New modeling methods for electricity markets Control strategies for data centers Cyber-security Wide-area monitoring and control using synchronized phasor measurements. The authors present theoretical results supported by illustrative examples and practical case studies, making the material comprehensible to a wide audience. The results reflect the exponential transformation that today's grid is going...

  3. Applying the Taguchi method for optimized fabrication of bovine ...

    African Journals Online (AJOL)

    The objective of the present study was to optimize the fabrication of bovine serum albumin (BSA) nanoparticle by applying the Taguchi method with characterization of the nanoparticle bioproducts. BSA nanoparticles have been extensively studied in our previous works as suitable carrier for drug delivery, since they are ...

  4. Descent methods for convex optimization problems in Banach spaces

    Directory of Open Access Journals (Sweden)

    M. S. S. Ali

    2005-01-01

    Full Text Available We consider optimization problems in Banach spaces, whose cost functions are convex and smooth, but do not possess strengthened convexity properties. We propose a general class of iterative methods, which are based on combining descent and regularization approaches and provide strong convergence of iteration sequences to a solution of the initial problem.

  5. The Smoothed Monte Carlo Method in Robustness Optimization

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Olieman, N.J.

    2008-01-01

    The concept of robustness as the probability mass of a design-dependent set has been introduced in the literature. Optimization of robustness can be seen as finding the design that has the highest robustness. The reference method for estimating the robustness is the Monte Carlo (MC) simulation, and

  6. The Accuracy and Effectiveness of Search Method for Optimizing A ...

    African Journals Online (AJOL)

    This work discusses the accuracy and effectiveness of search methods for optimizing a multivariable unimodal function using various updates. Keywords: Quasi-Newton updates, Davidon-Fletcher-Powell updates, Powell symmetric Broyden updates, Broyden-Fletcher-Goldfarb-Shanno updates ...

  7. Response surface method to optimize the low cost medium for ...

    African Journals Online (AJOL)

    A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...

  8. Analysis and Prediction of Myristoylation Sites Using the mRMR Method, the IFS Method and an Extreme Learning Machine Algorithm.

    Science.gov (United States)

    Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong

    2017-01-01

    Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues on the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation on proteins can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation on identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR) and incremental feature selection (IFS), together with a machine learning algorithm (the extreme learning machine method), were adopted to extract optimal features for identifying myristoylation sites in protein sequences. As a result, 41 key features were extracted and used to build an optimal prediction model. The effectiveness of the optimal prediction model was further validated by its performance on a test dataset. Furthermore, detailed analyses were performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provides a new computational method for identifying myristoylation sites in protein sequences, and we believe that it can be a useful tool for predicting myristoylation sites from protein sequences.
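
    The mRMR+IFS pipeline can be illustrated with a generic stand-in: rank features by a relevance score, then grow the feature set one feature at a time and keep the size that maximizes cross-validated accuracy. Below, mutual information replaces mRMR and logistic regression replaces the extreme learning machine, purely to keep the sketch short and self-contained.

```python
# Sketch of an incremental feature selection (IFS) pass: rank features (here by
# mutual information, a simple stand-in for mRMR), grow the feature set one
# feature at a time, and keep the size with the best cross-validated score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=280, n_features=60, n_informative=8,
                           random_state=7)
ranking = np.argsort(mutual_info_classif(X, y, random_state=7))[::-1]

scores = []
for k in range(1, 31):
    cols = ranking[:k]                           # top-k ranked features
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, cols], y, cv=5).mean()
    scores.append(score)

best_k = int(np.argmax(scores)) + 1
print("optimal number of features:", best_k, "CV accuracy: %.3f" % max(scores))
```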

  9. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    Science.gov (United States)

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2, … and auxiliary functions H0(x), H1(x), H2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  11. An Optimal Calibration Method for a MEMS Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Bin Fang

    2014-02-01

    Full Text Available An optimal calibration method for a micro-electro-mechanical inertial measurement unit (MIMU) is presented in this paper. The accuracy of the MIMU is highly dependent on calibration, which removes the deterministic (systematic) errors from measurements that also contain random errors. The overlapping Allan variance is applied to characterize the types of random error terms in the measurements. A calibration model that includes package misalignment error, sensor-to-sensor misalignment error, bias and scale factor is built. The new concept of a calibration method, which includes a calibration scheme and a calibration algorithm, is proposed. The calibration scheme is designed by D-optimal design and the calibration algorithm is deduced with a Kalman filter. In addition, thermal calibration is investigated, as the bias and scale factor vary with temperature. Simulations and real tests verify the effectiveness of the proposed calibration method and show that it is better than the traditional method.
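
    The overlapping Allan variance used to characterize the random error terms is straightforward to compute from rate samples: integrate to a phase-like signal and average squared second differences over overlapping clusters. A minimal sketch on synthetic white-noise gyro data:

```python
# Minimal overlapping Allan deviation from rate samples, as used to identify
# random error terms (angle random walk, bias instability, ...) in MEMS
# inertial sensor data before calibration.
import numpy as np

def overlapping_allan_deviation(rate, fs, m_list):
    """rate: 1-D samples at rate fs [Hz]; m_list: cluster sizes in samples."""
    tau0 = 1.0 / fs
    theta = np.cumsum(rate) * tau0            # integrated signal (angle/phase)
    out = []
    for m in m_list:
        tau = m * tau0
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2.0 * tau ** 2 * d.size)
        out.append((tau, np.sqrt(avar)))
    return out

rng = np.random.default_rng(8)
gyro = 0.02 * rng.normal(size=200000)          # toy white rate noise
for tau, adev in overlapping_allan_deviation(gyro, fs=100.0,
                                             m_list=[1, 10, 100, 1000]):
    print("tau = %8.2f s   ADEV = %.5f" % (tau, adev))
```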

  12. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    Science.gov (United States)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

    In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods are historically preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with design values for extreme rainfall and floods. The object of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than standard flood frequency analysis. Another interesting result is that the differences between the various extreme flood quantile estimations of the compared methods increase with the return period, staying relatively moderate up to 100-year return levels. Results and discussions are here illustrated throughout with the example
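
    The common core that several of the compared methods extend is a GEV fit to annual maxima followed by return level extrapolation; the sketch below (on synthetic data) also makes plain why the estimates diverge at long return periods, since the quantile is read far into the fitted tail.

```python
# Standard flood frequency analysis step shared by several compared methods:
# fit a GEV distribution to annual maxima and read off return levels.
# Synthetic data stand in for a gauged discharge series.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(9)
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60,
                               random_state=rng)     # toy maxima, m3/s

shape, loc, scale = genextreme.fit(annual_maxima)
for T in (10, 100, 1000):
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print("%5d-year return level: %8.1f m3/s" % (T, q))
```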

  13. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including, without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, and electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule the different assets in order to achieve global optimization and maintain normal system operation.

  14. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition g_k^T d_k ≤ −(1 − 1/(4θ_k))‖g_k‖^2 (with θ_k > 1/4) and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
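
    The guaranteed-descent idea can be illustrated with a Polak-Ribiere+ conjugate gradient method that checks a sufficient descent condition g'd <= -c||g||^2 at each iteration and restarts with steepest descent whenever it fails. This is a simplified sketch: the paper's family enforces the condition by construction and uses a weak Wolfe line search, whereas plain Armijo backtracking is used here to stay short.

```python
# Simplified illustration of "guaranteed descent" in nonlinear CG: take a PR+
# direction, verify g'd <= -c||g||^2, and fall back to steepest descent when
# the condition fails. Tested on the Rosenbrock function.
import numpy as np

def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosen_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

x = np.array([-1.2, 1.0])
g = rosen_grad(x)
d = -g
c = 0.5   # e.g. theta_k = 1/2 gives 1 - 1/(4*theta_k) = 1/2 in the condition

for k in range(2000):
    t, fx, slope = 1.0, rosen(x), g @ d        # Armijo backtracking
    while rosen(x + t * d) > fx + 1e-4 * t * slope:
        t *= 0.5
    x_new = x + t * d
    g_new = rosen_grad(x_new)
    if np.linalg.norm(g_new) < 1e-8:
        x = x_new
        break
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+
    d_new = -g_new + beta * d
    if g_new @ d_new > -c * (g_new @ g_new):         # descent condition violated?
        d_new = -g_new                               # restart: steepest descent
    x, g, d = x_new, g_new, d_new

print("minimizer:", x, "iterations:", k)
```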

  15. Comparison of different statistical downscaling methods to estimate changes in hourly extreme precipitation using RCM projections from ENSEMBLES

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Gregersen, Ida Bülow; Rosbjerg, Dan

    2015-01-01

    Changes in extreme precipitation are expected to be one of the most important impacts of climate change in cities. Urban floods are mainly caused by short duration extreme events. Hence, robust information on changes in extreme precipitation at high-temporal resolution is required for the design of climate change adaptation measures. However, the quantification of these changes is challenging and subject to numerous uncertainties. This study assesses the changes and uncertainties in extreme precipitation at hourly scale over Denmark. It explores three statistical downscaling approaches: a delta change method for extreme events, a weather generator combined with a disaggregation method and a climate analogue method. All three methods rely on different assumptions and use different outputs from the regional climate models (RCMs). The results of the three methods point towards an increase

  16. Method optimization for fecal sample collection and fecal DNA extraction.

    Science.gov (United States)

    Mathay, Conny; Hamot, Gael; Henry, Estelle; Georges, Laura; Bellora, Camille; Lebrun, Laura; de Witt, Brian; Ammerlaan, Wim; Buschart, Anna; Wilmes, Paul; Betsou, Fay

    2015-04-01

    This is the third in a series of publications presenting formal method validation for biospecimen processing in the context of accreditation in laboratories and biobanks. We report here optimization of a stool processing protocol validated for fitness-for-purpose in terms of downstream DNA-based analyses. Stool collection was initially optimized in terms of sample input quantity and supernatant volume using canine stool. Three DNA extraction methods (PerkinElmer MSM I®, Norgen Biotek All-In-One®, MoBio PowerMag®) and six collection container types were evaluated with human stool in terms of DNA quantity and quality, DNA yield, and its reproducibility by spectrophotometry, spectrofluorometry, and quantitative PCR, DNA purity, SPUD assay, and 16S rRNA gene sequence-based taxonomic signatures. The optimal MSM I protocol involves a 0.2 g stool sample and 1000 μL supernatant. The MSM I extraction was superior in terms of DNA quantity and quality when compared to the other two methods tested. Optimal results were obtained with plain Sarstedt tubes (without stabilizer, requiring immediate freezing and storage at -20°C or -80°C) and Genotek tubes (with stabilizer and RT storage) in terms of DNA yields (total, human, bacterial, and double-stranded) according to spectrophotometry and spectrofluorometry, with low yield variability and good DNA purity. No inhibitors were identified at 25 ng/μL. The protocol was reproducible in terms of DNA yield among different stool aliquots. We validated a stool collection method suitable for downstream DNA metagenomic analysis. DNA extraction with the MSM I method using Genotek tubes was considered optimal, with simple logistics in terms of collection and shipment and offers the possibility of automation. Laboratories and biobanks should ensure protocol conditions are systematically recorded in the scope of accreditation.

  17. Improvised purification methods for obtaining individual drinking water supply under war and extreme shortage conditions.

    Science.gov (United States)

    Kozlicic, A; Hadzic, A; Bevanda, H

    1994-01-01

    Supplying an adequate amount of drinking water to a population is a complex problem that becomes an extremely difficult task in war conditions. In this paper, several simple methods for obtaining individual supplies of drinking water by filtration of atmospheric water with common household items are reported. Samples of atmospheric water (rain and snow) were collected, filtered, and analyzed for bacteriological and chemical content. The ability of commonly available household materials (newspaper, filter paper, gauze, cotton, and white cotton cloth) to filter water from the environmental sources was compared. According to chemical and biological analysis, the best results were obtained by filtering melted snow from the ground through white cotton cloth. Atmospheric water collected during war or in extreme shortage conditions can be purified with simple improvised filtering techniques and, if chlorinated, used as an emergency potable water source.

  18. Short-Term Electric Load Forecasting Using the Optimally Pruned Extreme Learning Machine (OPELM) on the East Java Power System

    Directory of Open Access Journals (Sweden)

    Januar Adi Perdana

    2012-09-01

    Full Text Available Short-term load forecasting is a very important factor in the planning and operation of electric power systems. The goal of load forecasting is to keep electricity demand and supply in balance. The load characteristics of the East Java region fluctuate strongly, so this study uses the Optimally Pruned Extreme Learning Machine (OPELM) method to forecast the electric load. The advantages of OPELM lie in its fast learning speed and its sound model selection, even when the data have a nonlinear pattern. The accuracy of the OPELM method is assessed against a benchmark method, ELM, with MAPE as the accuracy criterion. The comparison shows that the OPELM forecasts are better than those of ELM: the minimum average test error was a MAPE of 1.3579%, obtained for the Friday forecast, while the ELM method on the same day produced a MAPE of 2.2179%.
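
    The basic extreme learning machine underlying OPELM is compact: a random hidden layer followed by one least-squares solve for the output weights. The sketch below trains it on a toy daily-load-like series; OPELM would additionally rank and prune the hidden neurons, a step omitted here.

```python
# Core extreme learning machine (ELM) regressor underlying OPELM: a random
# hidden layer followed by a single least-squares solve for the output weights.
import numpy as np

rng = np.random.default_rng(10)

def elm_fit(X, y, n_hidden=40):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, one solve
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy daily-load-like series: predict the next value from the last 24 samples.
t = np.arange(3000)
load = 10.0 + 3.0 * np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=t.size)
X = np.stack([load[i:i + 24] for i in range(len(load) - 24)])
y = load[24:]

model = elm_fit(X[:2500], y[:2500])
pred = elm_predict(model, X[2500:])
mape = 100.0 * np.mean(np.abs((y[2500:] - pred) / y[2500:]))
print("test MAPE: %.3f%%" % mape)
```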

  19. Multi-objective Optimization Method for Distribution System Configuration using Pareto Optimal Solution

    Science.gov (United States)

    Hayashi, Yasuhiro; Takano, Hirotaka; Matsuki, Junya; Nishikawa, Yuji

    A distribution network has a huge number of configuration candidates because the network configuration is determined by the states (opened or closed) of many sectionalizing switches installed to maintain power quality, reliability and so on. Since the feeder current and voltage depend on the network configuration, distribution loss, voltage imbalance and bank efficiency can be controlled by changing the states of these switches. In addition, the feeder current and voltage change with the output of distributed generators (DGs), such as photovoltaic and wind turbine generation systems, connected to the feeder. Recently, the total number of DGs connected to distribution networks has increased drastically. Therefore, the many configuration candidates of the distribution network must be evaluated from various viewpoints, such as distribution loss, voltage imbalance and bank efficiency, considering the power supplied by the connected DGs. In this paper, the authors propose a multi-objective optimization method based on three evaluation viewpoints ((1) distribution loss, (2) voltage imbalance and (3) bank efficiency) using Pareto optimal solutions. In the proposed method, after several high-ranking candidates with small distribution loss are extracted by a combinatorial optimization method, each candidate is evaluated from the viewpoints of voltage imbalance and bank efficiency using Pareto optimal solutions, and the loss-minimum configuration is then determined as the best configuration among these solutions. Numerical simulations are carried out for a real-scale system model consisting of 72 distribution feeders and 234 sectionalizing switches in order to examine the validity of the proposed method.
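
    The core subroutine of the proposed screening is the extraction of Pareto optimal (nondominated) candidates from configurations scored on several objectives. A minimal sketch with all objectives cast as minimizations:

```python
# Pareto-based screening subroutine: given candidates scored on several
# objectives (all minimized, e.g. loss, voltage imbalance, -bank efficiency),
# keep only the nondominated ones.
import numpy as np

def pareto_front(scores):
    """scores: (n_candidates, n_objectives); returns mask of nondominated rows."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Candidate i is dominated if another candidate is at least as good
        # in every objective and strictly better in at least one.
        dominated = (np.all(scores <= scores[i], axis=1) &
                     np.any(scores < scores[i], axis=1))
        if np.any(dominated):
            keep[i] = False
    return keep

rng = np.random.default_rng(11)
# Toy scores for 100 switch configurations: [loss, imbalance, -efficiency].
scores = rng.uniform(size=(100, 3))
mask = pareto_front(scores)
print("nondominated configurations:", np.flatnonzero(mask))
```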

  20. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization.

    Science.gov (United States)

    Gao, Hao

    2016-04-07

    For the treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work is to develop an efficient algorithm for FMO based on alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple-to-implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked with the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had a similar plan quality with slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT.
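
    With the least-square cost and non-negativity constraint stated above, the FMO core is a non-negative least-squares problem, and the ADMM splitting is a linear solve plus a projection. The sketch below uses a random stand-in for the dose-influence matrix; rho is the penalty parameter whose empirical tuning the paper addresses.

```python
# ADMM for the non-negative least-squares core of FMO: minimize ||Dx - d||^2
# subject to x >= 0, split into an x-update (linear solve) and a z-update
# (projection onto the non-negative orthant).
import numpy as np

rng = np.random.default_rng(12)
m, n = 120, 40
D = np.abs(rng.normal(size=(m, n)))       # toy dose-influence matrix
d = np.abs(rng.normal(size=m))            # toy prescribed dose

rho = 1.0                                 # penalty parameter (tuned empirically)
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
DtD, Dtd = D.T @ D, D.T @ d
L = np.linalg.cholesky(DtD + rho * np.eye(n))   # factor once, reuse every iteration

for _ in range(300):
    rhs = Dtd + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
    z = np.maximum(0.0, x + u)                          # z-update (projection)
    u = u + x - z                                       # dual update

print("objective:", np.sum((D @ z - d) ** 2), "min fluence:", z.min())
```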

  2. Motor imagery EEG classification with optimal subset of wavelet based common spatial pattern and kernel extreme learning machine.

    Science.gov (United States)

    Hyeong-Jun Park; Jongin Kim; Beomjun Min; Boreom Lee

    2017-07-01

    The performance of motor imagery based brain-computer interfaces (MI BCIs) greatly depends on how the features are extracted. Various versions of the filter-bank based common spatial pattern have been proposed and used in MI BCIs. The filter-bank based common spatial pattern yields a larger number of features than the original common spatial pattern. As the number of features increases, MI BCIs using the filter-bank based common spatial pattern can face overfitting problems. In this study, we used an eigenvector centrality feature selection method, wavelet packet decomposition common spatial pattern, and a kernel extreme learning machine to improve the performance of MI BCIs and avoid overfitting problems. Furthermore, the computational speed was improved by using the kernel extreme learning machine.

  3. Invariant Imbedded T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    Science.gov (United States)

    Pelissier, Craig; Kuo, Kwo-Sen; Clune, Thomas; Adams, Ian; Munchak, Stephen

    2017-01-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on the Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases where aspect ratios become extreme, such as needle-like particles with large height-to-diameter ratios, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation, and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation of both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss the details. Our intent is to release the combined EBCM/IITM+SOV software to the community under an open source license.

  4. Optimal pulse design in quantum control: a unified computational method.

    Science.gov (United States)

    Li, Jr-Shin; Ruths, Justin; Yu, Tsyr-Yan; Arthanari, Haribabu; Wagner, Gerhard

    2011-02-01

    Many key aspects of control of quantum systems involve manipulating a large quantum ensemble exhibiting variation in the value of parameters characterizing the system dynamics. Developing electromagnetic pulses to produce a desired evolution in the presence of such variation is a fundamental and challenging problem in this research area. We present such robust pulse designs as an optimal control problem of a continuum of bilinear systems with a common control function. We map this control problem of infinite dimension to a problem of polynomial approximation employing tools from geometric control theory. We then adopt this new notion and develop a unified computational method for optimal pulse design using ideas from pseudospectral approximations, by which a continuous-time optimal control problem of pulse design can be discretized to a constrained optimization problem with spectral accuracy. Furthermore, this is a highly flexible and efficient numerical method that requires low order of discretization and yields inherently smooth solutions. We demonstrate this method by designing effective broadband π/2 and π pulses with reduced rf energy and pulse duration, which show significant sensitivity enhancement at the edge of the spectrum over conventional pulses in 1D and 2D NMR spectroscopy experiments.

  5. Optimal Allocation of Power-Electronic Interfaced Wind Turbines Using a Genetic Algorithm - Monte Carlo Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe

    2010-01-01

    limit requirements. The method combines the Genetic Algorithm (GA), gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor...

  6. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    Science.gov (United States)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the ELF (Extremely Low Frequency) Juan Antonio Morente station, located in the national park of Sierra Nevada. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated by using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods that have been analysed are Levenberg-Marquardt, Conjugate Gradient, Gradient, Newton and Quasi-Newton. The functions that the different methods fit to the data are three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are outlined in the following paragraphs: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the Gradient method converges least reliably and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, Conjugate Gradient and Quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has
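
    The described fit, three Lorentzians plus a straight line adjusted to the amplitude spectrum, maps directly onto SciPy's curve_fit (which wraps Levenberg-Marquardt / trust-region least squares). The synthetic spectrum below stands in for station data; peak positions near 7.8, 14 and 20 Hz mimic the first three Schumann modes.

```python
# Fit three Lorentzians plus a linear baseline to an ELF amplitude spectrum,
# the non-linear fit described in the record above (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def model(f, *p):
    """Three Lorentzians (amplitude A, center f0, half-width w) + line a*f + b."""
    out = p[9] * f + p[10]
    for i in range(3):
        A, f0, w = p[3 * i:3 * i + 3]
        out = out + A * w ** 2 / ((f - f0) ** 2 + w ** 2)
    return out

rng = np.random.default_rng(13)
f = np.linspace(6.0, 25.0, 600)
true = [1.0, 7.8, 1.0, 0.6, 14.1, 1.5, 0.4, 20.3, 1.8, -0.005, 0.25]
spectrum = model(f, *true) + 0.02 * rng.normal(size=f.size)

p0 = [1.0, 8.0, 1.0, 0.5, 14.0, 1.5, 0.5, 20.0, 1.5, 0.0, 0.2]   # initial guess
popt, _ = curve_fit(model, f, spectrum, p0=p0)
print("fitted mode frequencies:", np.round(popt[[1, 4, 7]], 2), "Hz")
```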

  7. Nonlinear Rescaling and Proximal-Like Methods in Convex Optimization

    Science.gov (United States)

    Polyak, Roman; Teboulle, Marc

    1997-01-01

    The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their optimal sets of solutions coincide. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on the NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular, a new class of exponential penalty-modified barrier function methods is introduced.

  8. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in FDTD formulations has a significant impact on the method's performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within the authors' own FDTD simulation environment.

  9. The construction of optimal stated choice experiments theory and methods

    CERN Document Server

    Street, Deborah J

    2007-01-01

    The most comprehensive and applied discussion of stated choice experiment constructions available The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...

  10. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of the transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurements on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior compared to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in hidden layer neurons, the RootSig function was superior compared to the rest of the analyzed activation functions.

  11. A Study on the Selection of Optimal Probability Distributions for Analyzing of Extreme Precipitation Events over the Republic of Korea

    Science.gov (United States)

    Lee, Hansu; Choi, Youngeun

    2014-05-01

    This study determined the optimal statistical probability distributions for estimating maximum probability precipitation in the Republic of Korea and examined whether there were any distinct changes in distribution types and extreme precipitation characteristics. The generalized Pareto distribution and the three-parameter Burr distribution were the most frequently selected distributions for annual maximum series in the Republic of Korea. On a seasonal basis, the most frequently selected distributions were the three-parameter Dagum distribution for spring, the three-parameter Burr distribution for summer, the generalized Pareto distribution for autumn, and the three-parameter log-logistic, generalized Pareto and log-Pearson type III distributions for winter. Maximum probability precipitation was derived from the selected optimal probability distributions and compared with that from the Ministry of Land, Transport and Maritime Affairs (MOLTMA). Maximum probability precipitation in this study was greater than that of MOLTMA as the duration time and return periods increased. This difference was statistically significant when applying the Wilcoxon signed-rank test. Because of the different distributions, greater maximum probability precipitation values were estimated as the return period became longer. Annual maximum series from 1973 to 2012 showed that the median was highest in the south coastal region, but as the duration time became longer, Seoul, Gyeonggido and Gangwondo, located in the central part of Korea, had higher median values. The months of annual maximum series occurrence were concentrated between June and September. Typhoons affected annual maximum series occurrence in September. Seasonal maximum probability precipitation was greater in most of the south coastal region, and Seoul, Gyeonggido and Gangwondo had greater maximum probability precipitation in summer. Gangwondo had greater maximum probability precipitation in autumn, while Ulleung and Daegwallyeong had a greater one in

  12. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in the early stages of the building design process have a significant impact on the performance of buildings, for instance with respect to energy consumption, economic aspects, and the indoor environment. The method is intended to support design decisions for buildings by combining methods for calculating the performance of buildings with numerical optimization methods. It is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements for buildings. For instance, the building owner may be more concerned about the cost of constructing the building than about the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated that is intended to represent a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with building simulation software. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours where overheating occurs can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements on the decision variables, as well as on the performance of the building. Performance measures can, for instance, be required to assume their minimum or maximum value, or they can be subjected to upper or

  13. Regional frequency analysis of extreme rainfalls using partial L moments method

    Science.gov (United States)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach based on regional frequency analysis using L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed in determining the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions were identified as suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments would outperform the L and LH moments methods for the estimation of large return period events.
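
    For orientation, the snippet below computes ordinary sample L-moments from Hosking's unbiased probability-weighted moments; the PL-moments variant used in the paper additionally censors observations below a threshold, which is not implemented here, and the data are hypothetical.

      import numpy as np

      def sample_lmoments(x):
          # Unbiased probability-weighted moments b0..b3 of the ordered sample
          x = np.sort(np.asarray(x, dtype=float))
          n = len(x)
          j = np.arange(n)
          b0 = x.mean()
          b1 = np.sum(j * x) / (n * (n - 1))
          b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
          b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
          l1 = b0                          # L-location (mean)
          l2 = 2 * b1 - b0                 # L-scale
          l3 = 6 * b2 - 6 * b1 + b0
          l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
          return l1, l2, l3 / l2, l4 / l2  # mean, L-scale, L-skewness, L-kurtosis

      rain = np.random.default_rng(0).gumbel(60, 20, size=50)  # hypothetical maxima
      print(sample_lmoments(rain))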

  14. Extreme learning machines for regression based on V-matrix method.

    Science.gov (United States)

    Yang, Zhiyong; Zhang, Taohong; Lu, Jingcheng; Su, Yuan; Zhang, Dezheng; Duan, Yaowu

    2017-10-01

    This paper studies the joint effect of the V-matrix, a recently proposed framework for statistical inference, and the extreme learning machine (ELM) on regression problems. First, a novel algorithm is proposed to efficiently evaluate the V-matrix. Second, a novel weighted ELM algorithm called V-ELM is proposed based on the explicit kernel mapping of ELM and the V-matrix method. Though the V-matrix method can capture the geometrical structure of the training data, it tends to assign higher weights to instances with smaller input values. In order to avoid this bias, a novel method called VI-ELM is proposed by minimizing both the regression error and the V-matrix weighted error simultaneously. Finally, experimental results on 12 real-world benchmark datasets show the effectiveness of our proposed methods.
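
    As background, a plain ELM regressor of the kind the paper's V-ELM and VI-ELM variants extend with V-matrix weighting: hidden-layer weights are random and fixed, and only the output weights are solved for in closed form. The network size and regularization constant are illustrative choices.

      import numpy as np

      class ELMRegressor:
          def __init__(self, n_hidden=100, reg=1e-3, seed=0):
              self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

          def fit(self, X, y):
              rng = np.random.default_rng(self.seed)
              self.W = rng.normal(size=(X.shape[1], self.n_hidden))
              self.b = rng.normal(size=self.n_hidden)
              H = np.tanh(X @ self.W + self.b)          # random feature map
              # ridge-regularized least squares for the output weights
              A = H.T @ H + self.reg * np.eye(self.n_hidden)
              self.beta = np.linalg.solve(A, H.T @ y)
              return self

          def predict(self, X):
              return np.tanh(X @ self.W + self.b) @ self.beta

      rng = np.random.default_rng(1)
      X = rng.random((300, 4)); y = np.sin(X.sum(axis=1))
      model = ELMRegressor().fit(X[:200], y[:200])
      print("test MSE:", np.mean((model.predict(X[200:]) - y[200:]) ** 2))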

  15. A method for unified optimization of systems and controllers

    DEFF Research Database (Denmark)

    Abildgaard, Ole

    1990-01-01

    A unified method for solving control system optimization problems is suggested. All system matrices are allowed to be functions of the design variables. The method makes use of an implementation of a sequential quadratic programming algorithm (NLPQL) for solution of general constrained nonlinear...... programming problems. It is shown how to compute the gradients of the objective function and the constraint functions imposing eigenvalue constraints. In an example it is demonstrated how the method can solve a high-dimensional problem, where the initial condition covariance assumption is used to ensure...
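
    A small illustration of the general idea, with scipy's SQP solver (SLSQP) standing in for NLPQL: the design variables are feedback gains, the system matrices depend on them, and an eigenvalue constraint keeps the closed-loop poles left of a stability margin. The plant, cost function and margin are hypothetical, not the paper's formulation.

      import numpy as np
      from scipy.optimize import minimize

      A = np.array([[0.0, 1.0], [2.0, -1.0]])   # hypothetical unstable plant
      B = np.array([[0.0], [1.0]])

      def closed_loop(k):
          return A - B @ k.reshape(1, 2)

      def cost(k):         # penalize large gains and slow closed-loop dynamics
          poles = np.linalg.eigvals(closed_loop(k))
          return float(np.sum(k ** 2) + 10 * np.max(poles.real))

      def stability_margin(k):   # require Re(lambda) <= -0.5 for every pole
          return float(-0.5 - np.max(np.linalg.eigvals(closed_loop(k)).real))

      res = minimize(cost, x0=np.array([5.0, 5.0]), method="SLSQP",
                     constraints=[{"type": "ineq", "fun": stability_margin}])
      print("gains:", res.x)
      print("closed-loop poles:", np.linalg.eigvals(closed_loop(res.x)))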

  16. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, called PGWO. The PGWO algorithm significantly improves on the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique, so the PGWO can be applied to clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions and then used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm.
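
    A sketch of the PGWO idea under illustrative settings: a standard grey wolf optimizer whose best wolf is polished by scipy's Powell local search. Population size, iteration budget and the refinement schedule are assumptions, not the paper's exact configuration.

      import numpy as np
      from scipy.optimize import minimize

      def pgwo(f, dim, lb, ub, n_wolves=20, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lb, ub, size=(n_wolves, dim))
          for t in range(iters):
              fitness = np.apply_along_axis(f, 1, X)
              alpha, beta, delta = X[np.argsort(fitness)[:3]]
              a = 2 - 2 * t / iters            # decreases linearly from 2 to 0
              for i in range(n_wolves):
                  moves = []
                  for leader in (alpha, beta, delta):
                      A = 2 * a * rng.random(dim) - a
                      C = 2 * rng.random(dim)
                      moves.append(leader - A * np.abs(C * leader - X[i]))
                  X[i] = np.clip(np.mean(moves, axis=0), lb, ub)
              # Powell refinement of the current best (the "P" in PGWO)
              res = minimize(f, alpha, method="Powell")
              worst = np.argmax(np.apply_along_axis(f, 1, X))
              X[worst] = np.clip(res.x, lb, ub)
          best = X[np.argmin(np.apply_along_axis(f, 1, X))]
          return best, f(best)

      sphere = lambda x: float(np.sum(x ** 2))
      print(pgwo(sphere, dim=5, lb=-5.0, ub=5.0))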

  17. Developing Automatic Multi-Objective Optimization Methods for Complex Actuators

    Directory of Open Access Journals (Sweden)

    CHIS, R.

    2017-11-01

    Full Text Available This paper presents the analysis and multi-objective optimization of a magnetic actuator. By varying just 8 parameters of the magnetic actuator's model, the design space grows to more than 6 million configurations. Moreover, the 8 objectives that must be optimized are conflicting and generate a huge objective space as well. To cope with this complexity, we use advanced heuristic methods for Automatic Design Space Exploration. The FADSE tool is an Automatic Design Space Exploration framework including different state-of-the-art multi-objective meta-heuristics for solving NP-hard problems, which we used for the analysis and optimization of the COMSOL and MATLAB model of the magnetic actuator. We show that by using a state-of-the-art genetic multi-objective algorithm, response surface modelling methods, and some machine learning techniques, the timing complexity of the design space exploration can be reduced, while still taking objective constraints into consideration so that various Pareto-optimal configurations can be found. Using our approach, we were able to decrease the simulation time by at least a factor of 10, compared to a run that performs all the simulations, while keeping prediction errors around 1%.

  18. Agrotransformation of Phytophthora nicotianae: a simplified and optimized method

    Directory of Open Access Journals (Sweden)

    Ronaldo José Durigan Dalio

    Full Text Available Phytophthora nicotianae is a plant pathogen responsible for damaging crops and natural ecosystems worldwide. P. nicotianae is associated with the diseases citrus gummosis and citrus root rot, and the management of these diseases relies mainly on the certification of seedlings and the eradication of infected trees. However, little is known about the infection strategies of P. nicotianae interacting with citrus plants, which raises the need to examine its virulence at the molecular level. Here we show an optimized method to genetically manipulate P. nicotianae mycelium. We transformed P. nicotianae with an expression cassette for the fluorescent protein DsRed. The optimized AMT method generated relatively high transformation efficiency. It also shows advantages over the other methods: it is the simplest one, it does not require protoplasts or spores as targets, it is less expensive, and it does not require specific equipment. Transformation with DsRed did not impair the physiology, reproduction or virulence of the pathogen. The optimized AMT method presented here is useful for rapid, cost-effective and reliable transformation of P. nicotianae with any gene of interest.

  19. Design of large Francis turbine using optimal methods

    Science.gov (United States)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20×750 MW - 98.7 to 127 m) and Xiangjiaba (China - 8×812 MW - 82.5 to 113.6 m - under erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as maximum feedback from the operation of jumbo plants already in service. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  20. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization, namely “Big Bang - Big Crunch”, “Fireworks Algorithm” and “Grenade Explosion Method”, for parameter estimation of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are derived by minimizing a criterion that describes the total squared error between the state vector coordinates and the values observed at different moments in time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving such problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given, differing in their mathematical models. In the first example, a linear mathematical model describes the change of chemical reaction parameters, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameters differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical methods of optimization of zero, first and second orders and
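
    A sketch of the second example's setting under stand-in tooling: parameters of a predator-prey (Lotka-Volterra) model are recovered by minimizing the total squared error between simulated and observed states, with scipy's differential evolution taking the place of the article's "Big Bang - Big Crunch", fireworks and grenade-explosion metaheuristics. All numbers are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution

      def lotka_volterra(t, z, a, b, c, d):
          x, y = z
          return [a * x - b * x * y, -c * y + d * x * y]

      t_obs = np.linspace(0, 10, 25)
      true_p = (1.0, 0.4, 1.5, 0.3)
      sol = solve_ivp(lotka_volterra, (0, 10), [5.0, 2.0],
                      t_eval=t_obs, args=true_p)
      obs = sol.y + 0.05 * np.random.default_rng(0).normal(size=sol.y.shape)

      def sse(p):  # total squared error of state coordinates vs. observations
          s = solve_ivp(lotka_volterra, (0, 10), [5.0, 2.0],
                        t_eval=t_obs, args=tuple(p))
          if not s.success or s.y.shape != obs.shape:
              return 1e9
          return float(np.sum((s.y - obs) ** 2))

      bounds = [(0.1, 3.0)] * 4   # parallelepiped (box) parameter restrictions
      result = differential_evolution(sse, bounds, seed=1, maxiter=60)
      print("estimated parameters:", result.x)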

  1. A Localization Method for Multistatic SAR Based on Convex Optimization.

    Science.gov (United States)

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization; however, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analyzed. Firstly, using the information of each transmitter/receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse which is the iso-range contour for that model function's T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all the model functions. Thirdly, the target function is optimized with a gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.
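
    For orientation, a simplified stand-in for the paper's formulation: the target position is recovered from noisy bistatic range sums of several T/R pairs by minimizing a sum of squared residuals, each term smallest on its pair's iso-range ellipse. The geometry, noise level and use of BFGS (rather than the paper's PCA-assisted gradient descent) are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      tx = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])        # transmitters
      rx = np.array([[1000.0, 0.0], [500.0, 800.0], [900.0, 500.0]]) # receivers
      target = np.array([420.0, 310.0])

      def range_sum(p, t, r):
          return np.linalg.norm(p - t) + np.linalg.norm(p - r)

      brs = np.array([range_sum(target, t, r) for t, r in zip(tx, rx)])
      brs += np.random.default_rng(0).normal(scale=1.0, size=brs.size)  # BRS noise

      def objective(p):
          return sum((range_sum(p, t, r) - m) ** 2
                     for t, r, m in zip(tx, rx, brs))

      est = minimize(objective, x0=np.array([500.0, 500.0]), method="BFGS")
      print("estimated position:", est.x, " true position:", target)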

  2. New displacement-based methods for optimal truss topology design

    Science.gov (United States)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  3. Time-dependent optimal heater control using finite difference method

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen Zhe; Heo, Kwang Su; Choi, Jun Hoo; Seol, Seoung Yun [Chonnam National Univ., Gwangju (Korea, Republic of)

    2008-07-01

    Thermoforming is one of the most versatile and economical processes for producing polymer products. Its drawback is that the thickness of the final products is difficult to control. The temperature distribution affects the thickness distribution of the final products, but the temperature difference between the surface and the center of the sheet is difficult to decrease because of the low thermal conductivity of the ABS material. In order to decrease the temperature difference between surface and center, the heating profile must be expressed in exponential-function form. In this study, the Finite Difference Method was used to find the coefficients of optimal heating profiles. The optimal results obtained using the Finite Difference Method show that the temperature difference between the surface and the center of the sheet can be remarkably reduced while satisfying the temperature of the forming window.
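
    A sketch of the forward model such a study would repeatedly evaluate: explicit finite differences for through-thickness heat conduction in a sheet whose surfaces follow an exponential heating profile. The material constants and profile parameters are illustrative, not the paper's values.

      import numpy as np

      L, nx = 4e-3, 41             # sheet thickness [m], number of grid points
      alpha = 1.1e-7               # assumed thermal diffusivity of ABS [m^2/s]
      dx = L / (nx - 1)
      dt = 0.4 * dx ** 2 / alpha   # stable explicit time step
      T = np.full(nx, 20.0)        # initial temperature [degC]

      def surface_temp(t, T0=20.0, Tmax=180.0, tau=30.0):
          # exponential heating profile, as the record suggests
          return Tmax - (Tmax - T0) * np.exp(-t / tau)

      t = 0.0
      while t < 120.0:
          T[0] = T[-1] = surface_temp(t)      # both faces follow the profile
          T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          t += dt

      print("surface-to-center difference [K]:", T[0] - T[nx // 2])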

  4. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. The advantages and drawbacks of the classical adaptive algorithms (Capon, MUSIC, and Johnson) and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background from median filtering or the method of bilateral spatial contrast.

  5. The application of the dynamic programming method in investment optimization

    Directory of Open Access Journals (Sweden)

    Petković Nina

    2016-01-01

    Full Text Available This paper deals with the problem of investment in the Measuring Transformers Factory in Zajecar and the application of the dynamic programming method as one of the methods used in business process optimization. Dynamic programming is a special case of nonlinear programming that is widely applicable to nonlinear systems in economics. The Measuring Transformers Factory in Zajecar was founded in 1969. It manufactures electrical equipment, primarily low- and medium-voltage current measuring transformers, voltage transformers, bushings, etc. The company offers a wide range of products, and for this paper's needs the company's management selected three products, for each of which an optimal investment costing was made. The purpose was to see which product would be the most profitable and thus proceed with the manufacturing and selling of that particular product or products.
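
    For orientation, a textbook dynamic-programming recursion of the kind used for such investment problems: a fixed budget is split across three products to maximize total return. The return table is hypothetical, not the factory's data.

      import numpy as np

      budget = 4                          # budget in discrete units
      # returns[p][k] = profit if product p receives k units of investment
      returns = np.array([[0, 3, 5, 6, 6],
                          [0, 2, 5, 7, 8],
                          [0, 4, 6, 7, 7]])

      # V[p][b] = best profit achievable with products p..2 and b units left
      V = np.zeros((4, budget + 1))
      choice = np.zeros((3, budget + 1), dtype=int)
      for p in range(2, -1, -1):
          for b in range(budget + 1):
              options = [returns[p][k] + V[p + 1][b - k] for k in range(b + 1)]
              choice[p][b] = int(np.argmax(options))
              V[p][b] = max(options)

      b, plan = budget, []
      for p in range(3):                  # recover the optimal allocation
          plan.append(choice[p][b]); b -= choice[p][b]
      print("allocation per product:", plan, " total return:", V[0][budget])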

  6. Survey of optimization methods for BIW lightweight design

    OpenAIRE

    Zheyun Wang

    2017-01-01

    Body lightweighting is important to vehicle design and development and has become one of the main research subjects in vehicle industries and research institutes. This paper systematically expounds the background and significance of automobile lightweight design, and explains the methods for implementing vehicle lightweighting in the fields of lightweight materials, body structural optimization design, molding technology and new connecting technologi...

  7. Experimental methods for the analysis of optimization algorithms

    CERN Document Server

    Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike

    2010-01-01

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on diffe

  8. An Optimal Calibration Method for a MEMS Inertial Measurement Unit

    OpenAIRE

    Bin Fang; Wusheng Chou; Li Ding

    2014-01-01

    An optimal calibration method for a micro-electro-mechanical inertial measurement unit (MIMU) is presented in this paper. The accuracy of the MIMU is highly dependent on calibration to remove the deterministic systematic errors, while the measurements also contain random errors. The overlapping Allan variance is applied to characterize the types of random error terms in the measurements. The calibration model includes package misalignment error, sensor-to-sensor misalignment error and bias, and a sc...
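
    As background, a sketch of the overlapping Allan variance the record mentions, computed with the standard three-point formula on an integrated rate signal; the gyro series here is synthetic white noise.

      import numpy as np

      def overlapping_allan_variance(rate, tau0, m):
          theta = np.cumsum(rate) * tau0         # integrate rate to angle
          n = len(theta)
          d = theta[2 * m:] - 2 * theta[m:n - m] + theta[:n - 2 * m]
          return np.sum(d ** 2) / (2 * (m * tau0) ** 2 * (n - 2 * m))

      rng = np.random.default_rng(0)
      rate = 0.01 * rng.normal(size=100_000)     # white-noise-only gyro output
      tau0 = 0.01                                # sample period [s]
      for m in (1, 10, 100, 1000):
          adev = np.sqrt(overlapping_allan_variance(rate, tau0, m))
          print(f"tau = {m * tau0:7.2f} s   Allan deviation = {adev:.6f}")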

  9. New design method for valves internals, to optimize process

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Leonardo [PDVSA (Venezuela)

    2011-07-01

    In the heavy oil industry, various methods can be used to reduce the viscosity of oil, one of them being the injection of diluent. This method is commonly used in the Orinoco oil belt, but it requires good control of the volume of diluent injected as well as of the gas flow to optimize production; thus, flow control valves need to be accurate. A new valve based on a new design method was developed with the characteristic of being very reliable, and was then bench tested and compared with the other commercially available valves. Results showed better repeatability, accuracy and reliability with lower maintenance for the new method. The use of this valve provides significant savings while distributing the exact amount of fluids; to date, a failure rate of less than 2% has been recorded in the field. The new method demonstrated impressive performance, and PDVSA has decided to adopt it on a large scale.

  10. New Methods of Treatment for Trophic Lesions of the Lower Extremities in Patients with Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    S.V. Bolgarska

    2016-08-01

    Full Text Available Introduction. Complications in the form of trophic ulcers of the lower extremities are among the serious consequences of diabetes mellitus (DM), as they often lead to severe health and social problems, up to and including high amputations. The aim of the study was the development and clinical testing of a diagnostic and therapeutic algorithm for the comprehensive treatment of trophic ulcers of the lower extremities in patients with DM. Materials and methods. The results of treatment of 63 patients (42 women and 21 men) with the neuropathic type of trophic lesions of the lower limbs, or with postoperative defects at the stage of granulation, are presented. Of them, 32 patients (study group) received local intradermal injections of hyaluronic acid and sodium succinate preparations (Lacerta) into the extracellular matrix. Patients of the comparison group were treated with hydrocolloid materials (hydrocoll, granuflex). The level of glycated hemoglobin, the degree of circulatory disorders (using the ankle-brachial index before and after a load test) and neuropathic disorders (on the neurologic dysfunction score, NDS) were assessed in the patients. Results. The results of treatment were assessed by the rate of defect healing over 2 or more months. In the study group, 24 patients (75%) showed complete healing of the defect, while in the control group healing was observed in 16 patients (51.6%). During the year, relapses occurred in 22.2% of cases in the study group and in 46.9% of cases in the control group (p < 0.05). Conclusion. The developed method of treatment using Lacerta made it possible to increase the effectiveness of therapy, speed up recovery, and reduce the number of complications in patients with DM and trophic ulcers of the lower extremities.

  11. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o

  12. Optimization methods for pipeline transportation of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Borraz-Sanchez, Conrado

    2010-10-15

    Within three research projects on the optimization of natural gas transport in transmission pipeline systems, a number of mathematical models, algorithms, and numerical experiments have been presented and discussed in this thesis. The proposed optimization methods are composed of NLP and MINLP models, as well as of exact and heuristic methods. In addition, the experimental analyses conducted on each project were devoted to gaining insight into three major issues: 1) the assessment of the computability of the mathematical models, 2) the performance of the proposed optimization techniques, and 3) comparison of the proposed techniques with existing optimization algorithms and tools. Project 1 focused on minimizing the total fuel consumption incurred by compressor stations installed in a gas pipeline system. The project was mainly devoted to tackling large natural gas pipeline systems with cyclic structures. After a painstaking study of the NLP model introduced in Section 4.3, three different methodologies were proposed to effectively overcome both the difficulties encountered in the steady-state flow model, namely the non-linearity and non-convexity, and the weaknesses found in previously suggested optimization approaches. As discussed in Chapter 4, the key to success in this project was the strategic idea of discretizing the feasible operating domain of compressor stations, which in turn allowed the implementation of hybrid solution methods based on powerful optimization techniques such as DP, tabu search, and tree decomposition. The idea of working within a discretized space has been successfully applied from the liquid pipeline optimization conducted in the late 1960s by Jefferson to the non-traditional optimization technique suggested by Carter in 1998. The computational experiments conducted on each proposed optimization method, coupled with comparisons with typical approaches found in the literature, indicated that a continual

  13. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s^-1.
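
    A sketch of the classification step only, with simulated feature vectors standing in for the Zernike moments of real space-time slices; the training set sizes follow the abstract, everything else is illustrative.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_features = 16                        # assumed number of Zernike moments
      events = rng.normal(1.0, 0.6, size=(150, n_features))     # event slices
      non_events = rng.normal(0.0, 0.6, size=(700, n_features)) # non-events
      X = np.vstack([events, non_events])
      y = np.r_[np.ones(150), np.zeros(700)]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=1)
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
      print("holdout accuracy:", clf.score(X_te, y_te))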

  14. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    Science.gov (United States)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.

  15. A Novel Gravity Compensation Method for High Precision Free-INS Based on "Extreme Learning Machine".

    Science.gov (United States)

    Zhou, Xiao; Yang, Gongliu; Cai, Qingzhong; Wang, Jing

    2016-11-29

    In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), gravity compensation has become a major factor influencing the navigation accuracy of inertial navigation systems (INS), especially for high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. It then proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on gravity data measured on the geoid, continues the gravity disturbance upward to the height of the INS, and then compensates the obtained gravity disturbance into the error equations of the INS to restrain INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method is improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions: Test 1 in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve positioning accuracy. During the 2-h field experiments, the positioning accuracy is improved by 13% and 29% in Tests 1 and 2, respectively, when the navigation scheme is compensated by the proposed gravity compensation method.

  16. A method to objectively optimize coral bleaching prediction techniques

    Science.gov (United States)

    van Hooidonk, R. J.; Huber, M.

    2007-12-01

    Thermally induced coral bleaching is a global threat to coral reef health. Methodologies, e.g. the Degree Heating Week technique, have been developed to predict bleaching induced by thermal stress by utilizing remotely sensed sea surface temperature (SST) observations. These techniques can be used as a management tool for Marine Protected Areas (MPA). Predictions are valuable to decision makers and stakeholders on weekly to monthly time scales and can be employed to build public awareness and support for mitigation. The bleaching problem is only expected to worsen because global warming poses a major threat to coral reef health. Indeed, predictive bleaching methods combined with climate model output have been used to forecast the global demise of coral reef ecosystems within coming decades due to climate change. Accuracy of these predictive techniques has not been quantitatively characterized despite the critical role they play. Assessments have typically been limited, qualitative or anecdotal, or more frequently they are simply unpublished. Quantitative accuracy assessment, using well established methods and skill scores often used in meteorology and medical sciences, will enable objective optimization of existing predictive techniques. To accomplish this, we will use existing remotely sensed data sets of sea surface temperature (AVHRR and TMI), and predictive values from techniques such as the Degree Heating Week method. We will compare these predictive values with observations of coral reef health and calculate applicable skill scores (Peirce Skill Score, Hit Rate and False Alarm Rate). We will (a) quantitatively evaluate the accuracy of existing coral reef bleaching predictive methods against state-of- the-art reef health databases, and (b) present a technique that will objectively optimize the predictive method for any given location. We will illustrate this optimization technique for reefs located in Puerto Rico and the US Virgin Islands.
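
    The skill scores named above come straight from a 2x2 contingency table of predicted versus observed bleaching; a minimal sketch with hypothetical counts:

      # Categorical verification scores from a 2x2 contingency table.
      hits, misses, false_alarms, correct_negatives = 42, 8, 15, 135

      hit_rate = hits / (hits + misses)               # probability of detection
      false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
      peirce = hit_rate - false_alarm_rate            # Peirce skill score

      print(f"hit rate = {hit_rate:.2f}")
      print(f"false alarm rate = {false_alarm_rate:.2f}")
      print(f"Peirce skill score = {peirce:.2f}")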

  17. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Wuli; Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighboring vertices are fitted to a cubic curved surface by using the least squares method. Then, taking the locally fitted surface as the search region of the PSO and the best average quality of the local triangles as the goal, the vertex positions of the mesh are regulated. Finally, a threshold on the normal angle between the original vertex and the regulated vertex is used to determine whether the vertex needs to be adjusted, in order to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh.

  18. Optimization and modification of the method for detection of rhamnolipids

    Directory of Open Access Journals (Sweden)

    Takeshi Tabuchi

    2015-10-01

    Full Text Available The use of biosurfactants in bioremediation facilitates and accelerates the microbial degradation of hydrocarbons. The CTAB/MB agar method, created by Siegmund & Wagner for screening rhamnolipid (RL) producing strains, has been widely used but has not been improved significantly for more than 20 years. To optimize the technique as a quantitative method, CTAB/MB agar plates were made and different variables were tested, such as incubation time, cooling, CTAB concentration, methylene blue presence, well diameter and inoculum volume. Furthermore, a new method for RL detection within halos was developed: precipitation of RL with HCl allows the formation of a new halo pattern that is easier to observe and to measure. This research reaffirms that this method is not fully suitable for fine quantitative analysis, because of the difficulty of accurately correlating RL concentration with the area of the halos. RL diffusion does not seem to follow a simple behavior, and many factors affect the RL migration rate.

  19. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    [7]. In this paper a reliability-based shape optimization problem is formulated with the total expected cost as objective function and some requirements for the reliability measures (element or systems reliability measures) as constraints, see section 2. As design variables sizing variables... Application of first-order reliability methods FORM (see Madsen, Krenk & Lind [8]) in structural design problems has attracted growing interest in recent years, see e.g. Frangopol [4], Murotsu, Kishi, Okada, Yonezawa & Taguchi [9] and Sørensen [14]. In probabilistically based optimal design... stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke [6] and Liu & Der Kiureghian...

  20. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects......, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent...... by decision-makers for buildings, an optimization problem is formulated, intended for representing a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with simulation software for buildings. For instance, the annual...

  1. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    The present paper considers the sequential decision optimization problem. This is an important class of decision problems in engineering. Important examples include decision problems on the quality control of manufactured products and engineering components, timing of the implementation of climate... change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which... is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...
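
    For orientation, a sketch of the Longstaff-Schwartz regression step on which the cited scheme builds, applied to its original setting of an American-style (Bermudan) put; all market parameters are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
      steps, n_paths = 50, 20000
      dt = T / steps
      # simulate geometric Brownian motion price paths
      z = rng.normal(size=(n_paths, steps))
      S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                                + sigma * np.sqrt(dt) * z, axis=1))

      cash = np.maximum(K - S[:, -1], 0.0)     # exercise value at maturity
      for t in range(steps - 2, -1, -1):
          cash *= np.exp(-r * dt)              # discount one step back
          itm = K - S[:, t] > 0                # regress on in-the-money paths
          if itm.sum() > 3:
              coeffs = np.polyfit(S[itm, t], cash[itm], deg=2)
              continuation = np.polyval(coeffs, S[itm, t])
              exercise = K - S[itm, t]
              ex_now = exercise > continuation # exercise if payoff beats holding
              idx = np.where(itm)[0][ex_now]
              cash[idx] = exercise[ex_now]
      print("LSMC American put value:", np.exp(-r * dt) * cash.mean())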

  2. Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
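
    As a taste of the book's central recursion, a sketch of basic SPSA: two function evaluations per iteration estimate the whole gradient through a simultaneous random perturbation. The gain-sequence constants follow common practice and are illustrative.

      import numpy as np

      def spsa(f, theta, iters=1000, a=1.0, c=0.1, A=50,
               alpha=0.602, gamma=0.101, seed=0):
          rng = np.random.default_rng(seed)
          for k in range(iters):
              ak = a / (k + 1 + A) ** alpha      # decaying step size
              ck = c / (k + 1) ** gamma          # decaying perturbation size
              delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher
              # one simultaneous perturbation estimates every gradient component
              ghat = (f(theta + ck * delta) - f(theta - ck * delta)) \
                     / (2 * ck) * delta
              theta = theta - ak * ghat
          return theta

      noisy_quadratic = lambda x: float(np.sum((x - 3.0) ** 2)
                                        + 0.01 * np.random.randn())
      print(spsa(noisy_quadratic, theta=np.zeros(4)))  # converges near [3,3,3,3]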

  3. Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines

    Science.gov (United States)

    Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi

    2016-01-01

    The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures. PMID:27023563

  5. Optimization of the design of extremely thin absorber solar cells based on electrodeposited ZnO nanowires.

    Science.gov (United States)

    Lévy-Clément, Claude; Elias, Jamil

    2013-07-22

    The properties of the components of ZnO/CdSe/CuSCN extremely thin absorber (ETA) solar cells based on electrodeposited ZnO nanowires (NWs) were investigated. The goal was to study the influence of their morphology on the characteristics of the solar cells. To increase the energy conversion efficiency of the solar cell, it was generally proposed to increase the roughness factor of the ZnO NW arrays (i.e. to increase the NW length) with the purpose of decreasing the absorber thickness, improving the light scattering, and consequently the light absorption in the ZnO/CdSe NW arrays. However, this strategy increased the recombination centers, which affected the efficiency of the solar cell. We developed another strategy that acts on the optical configuration of the solar cells by increasing the diameter of the ZnO NW (from 100 to 330 nm) while maintaining a low roughness factor. We observed that the scattering of the ZnO NW arrays occurred over a large wavelength range and extended closer to the CdSe absorber bandgap, and this led to an enhancement in the effective absorption of the ZnO/CdSe NW arrays and an increase in the solar cell characteristics. We found that the thicknesses of CuSCN above the ZnO/CdSe NW tips and the CdSe coating layer were optimized at 1.5 μm and 30 nm, respectively. Optimized ZnO/CdSe/CuSCN solar cells exhibiting 3.2% solar energy conversion efficiency were obtained by using 230 nm diameter ZnO NWs. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Incoherent Dictionary Learning Method Based on Unit Norm Tight Frame and Manifold Optimization for Sparse Representation

    Directory of Open Access Journals (Sweden)

    HongZhong Tang

    2016-01-01

    Full Text Available Optimizing the mutual coherence of a learned dictionary plays an important role in sparse representation and compressed sensing. In this paper, an efficient framework is developed to learn an incoherent dictionary for sparse representation. In particular, the coherence of a previous dictionary (or Gram matrix) is reduced sequentially by finding a new dictionary (or Gram matrix) which is closest to the reference unit norm tight frame of the previous dictionary (or Gram matrix). The optimization problem can be solved by restricting the tightness and coherence alternately at each iteration of the algorithm. The significant and distinctive aspect of our proposed framework is that the learned dictionary can approximate an equiangular tight frame. Furthermore, manifold optimization is used to avoid the degeneracy of the sparse representation while only reducing the coherence of the learned dictionary. This can be performed after the dictionary update process rather than during it. Experiments on synthetic and real audio data show that our proposed methods give notable improvements in lower coherence, have faster running times, and are extremely robust compared to several existing methods.

  7. Applying systems biology methods to the study of human physiology in extreme environments.

    Science.gov (United States)

    Edwards, Lindsay M; Thiele, Ines

    2013-03-22

    Systems biology is defined in this review as 'an iterative process of computational model building and experimental model revision with the aim of understanding or simulating complex biological systems'. We propose that, in practice, systems biology rests on three pillars: computation, the omics disciplines and repeated experimental perturbation of the system of interest. The number of ethical and physiologically relevant perturbations that can be used in experiments on healthy humans is extremely limited and principally comprises exercise, nutrition, infusions (e.g. Intralipid), some drugs and altered environment. Thus, we argue that systems biology and environmental physiology are natural symbionts for those interested in a system-level understanding of human biology. However, despite excellent progress in high-altitude genetics and several proteomics studies, systems biology research into human adaptation to extreme environments is in its infancy. A brief description and overview of systems biology in its current guise is given, followed by a mini review of computational methods used for modelling biological systems. Special attention is given to high-altitude research, metabolic network reconstruction and constraint-based modelling.

  9. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least squares (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, artificial neural networks (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computational load, and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. The testing results show that the proposed approach is effective and accurate.

  10. Optimizing ETL by a Two-level Data Staging Method

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Iftikhar, Nadeem; Nielsen, Per Sieverts

    2016-01-01

    In data warehousing, the data from source systems are populated into a central data warehouse (DW) through extraction, transformation and loading (ETL). The standard ETL approach usually uses sequential jobs to process the data with dependencies, such as dimension and fact data. It is a non-trivial task to process the so-called early-/late-arriving data, which arrive out of order. This paper proposes a two-level data staging area method to optimize ETL. The proposed method is an all-in-one solution that supports processing different types of data from operational systems, including early-/late-arriving data and fast-/slowly-changing data. The introduced additional staging area decouples the loading process from data extraction and transformation, which improves ETL flexibility and minimizes intervention in the data warehouse. This paper evaluates the proposed method empirically, which shows

  11. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters of which the values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.

  12. Spectral Analysis of Large Finite Element Problems by Optimization Methods

    Directory of Open Access Journals (Sweden)

    Luca Bergamaschi

    1994-01-01

    Full Text Available Recently an efficient method for the solution of the partial symmetric eigenproblem (DACG, deflated-accelerated conjugate gradient) was developed, based on the conjugate gradient (CG) minimization of successive Rayleigh quotients over deflated subspaces of decreasing size. In this article four different choices of the coefficient βk required at each DACG iteration for the computation of the new search direction Pk are discussed. The “optimal” choice is the one that yields the same asymptotic convergence rate as the CG scheme applied to the solution of linear systems. Numerical results point out that the optimal βk leads to a very cost-effective algorithm in terms of CPU time in all the sample problems presented. Various preconditioners are also analyzed. It is found that DACG using the optimal βk and (LLT)−1 as a preconditioner, L being the incomplete Cholesky factor of A, proves to be a very promising method for the partial eigensolution. It appears to be superior to the Lanczos method in the evaluation of the 40 leftmost eigenpairs of five finite element problems, and particularly for the largest problem, with size equal to 4560, for which the speed gain turns out to fall between 2.5 and 6.0, depending on the eigenpair level.

  13. Noniterative convex optimization methods for network component analysis.

    Science.gov (United States)

    Jacklin, Neil; Ding, Zhi; Chen, Wei; Chang, Chunqi

    2012-01-01

    This work studies the reconstruction of gene regulatory networks by means of network component analysis (NCA). We expound a family of convex optimization-based methods for estimating the transcription factor control strengths and the transcription factor activities (TFAs). The approach taken in this work is to decompose the problem into a network connectivity strength estimation phase and a transcription factor activity estimation phase. In the control strength estimation phase, we formulate a new subspace-based method incorporating a choice of multiple error metrics. For the source estimation phase we propose a total least squares (TLS) formulation that generalizes many existing methods. Both estimation procedures are noniterative and yield the optimal estimates according to the various proposed error metrics. We test the performance of the proposed algorithms on simulated data and on experimental gene expression data for the yeast Saccharomyces cerevisiae and demonstrate that the proposed algorithms have superior effectiveness in comparison with both Bayesian Decomposition (BD) and our previous FastNCA approach, while their computational complexity is still orders of magnitude less than that of BD.

  14. Performance enhancement of a pump impeller using optimal design method

    Science.gov (United States)

    Jeon, Seok-Yun; Kim, Chul-Kyu; Lee, Sang-Moon; Yoon, Joon-Yong; Jang, Choon-Man

    2017-04-01

    This paper presents the performance evaluation of a regenerative pump whose efficiency is increased using an optimal design method. Two design parameters, which define the shape of the pump impeller, are introduced and analyzed. Pump performance is evaluated by numerical simulation and design of experiments (DOE). To analyze the three-dimensional flow field in the pump, the general analysis code CFX is used in the present work. A shear stress turbulence model is employed to estimate the eddy viscosity. An experimental apparatus with an open-loop facility is set up for measuring the pump performance. The pump performance, efficiency and pressure, obtained from the numerical simulation is validated by comparison with the results of experiments. Through the shape optimization of the pump impeller at the operating flow condition, the pump efficiency is successfully increased by 3 percent compared to the reference pump. It is noted that the pressure increase of the optimum pump is mainly caused by the higher momentum force generated inside the blade passage due to the optimal blade shape. The internal flows of the reference and optimum pumps are also compared and discussed in detail.

  15. Methods to optimize myxobacterial fermentations using off-gas analysis

    Directory of Open Access Journals (Sweden)

    Hüttel Stephan

    2012-05-01

    Full Text Available Abstract Background The influence of carbon dioxide and oxygen on microbial secondary metabolite producers and the maintenance of these two parameters at optimal levels have been studied extensively. Nevertheless, most studies have focussed on their influence on specific product formation and condition optimization of established processes. Considerably less attention has been paid to the influence of reduced or elevated carbon dioxide and oxygen levels on the overall metabolite profiles of the investigated organisms. The synergistic action of both gases has garnered even less attention. Results We show that the composition of the gas phase is highly important for the production of different metabolites and present a simple approach that enables the maintenance of defined concentrations of both O2 and CO2 during bioprocesses over broad concentration ranges with a minimal instrumental setup by using endogenously produced CO2. The metabolite profiles of a myxobacterium belonging to the genus Chondromyces grown under various concentrations of CO2 and O2 showed considerable differences. Production of two unknown, highly cytotoxic compounds and one antimicrobial substance was found to increase depending on the gas composition. In addition, the observation of CO2 and O2 in the exhaust gas allowed optimization and control of production processes. Conclusions Myxobacteria are becoming increasingly important due to their potential for bioactive secondary metabolite production. Our studies show that the influence of different gas partial pressures should not be underestimated during screening processes for novel compounds and that our described method provides a simple tool to investigate this question.

  16. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    Science.gov (United States)

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility optimization can consistently produce reliable differential expression rankings for label-free proteome tools and is straightforward to apply.

  17. An Optimal Method for Developing Global Supply Chain Management System

    Directory of Open Access Journals (Sweden)

    Hao-Chun Lu

    2013-01-01

    Full Text Available Owing to the transparency of supply chains, enhancing the competitiveness of industries has become a vital factor, and many developing countries are looking for possible methods to save costs. From this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model, via flow-control constraints, for utilizing bonded warehouses to obtain maximal profits. Numerical experiments illustrate that the proposed model can be solved effectively to obtain the optimal profits in the global supply chain environment.

  18. A Clinically Relevant Method of Analyzing Continuous Change in Robotic Upper Extremity Chronic Stroke Rehabilitation.

    Science.gov (United States)

    Massie, Crystal L; Du, Yue; Conroy, Susan S; Krebs, H Igo; Wittenberg, George F; Bever, Christopher T; Whitall, Jill

    2016-09-01

    Robots designed for rehabilitation of the upper extremity after stroke facilitate high rates of repetition during practice of movements and record precise kinematic data, providing a method to investigate motor recovery profiles over time. To determine how motor recovery profiles during robotic interventions provide insight into improving clinical gains. A convenience sample (n = 22), from a larger randomized controlled trial, was taken of chronic stroke participants completing 12 sessions of arm therapy. One group received 60 minutes of robotic therapy (Robot only) and the other group received 45 minutes on the robot plus 15 minutes of translation-to-task practice (Robot + TTT). Movement time was assessed using the robot without powered assistance. Analyses (ANOVA, random coefficient modeling [RCM] with a 2-term exponential function) were completed to investigate changes across the intervention, between sessions, and within a session. Significant improvement in movement time was found across the robotic interventions. © The Author(s) 2015.

  19. A novel multiple instance learning method based on extreme learning machine.

    Science.gov (United States)

    Wang, Jie; Cai, Liangjian; Peng, Jinzhu; Jia, Yuheng

    2015-01-01

    Since real-world data sets usually contain large numbers of instances, it is meaningful to develop efficient and effective multiple instance learning (MIL) algorithms. As a learning paradigm, MIL differs from traditional supervised learning in that it handles the classification of bags comprising unlabeled instances. In this paper, a novel efficient method based on the extreme learning machine (ELM) is proposed to address the MIL problem. First, the most qualified instance is selected in each bag through a single hidden layer feedforward network (SLFN) whose input and output weights are both initialized randomly, and the single selected instance is used to represent every bag. Second, the modified ELM model is trained by using the selected instances to update the output weights. Experiments on several benchmark data sets and multiple instance regression data sets show that ELM-MIL achieves good performance; moreover, it runs several times or even hundreds of times faster than other similar MIL algorithms.
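
    For readers unfamiliar with ELM, the sketch below shows its core: random input weights and biases, with output weights obtained in closed form by a pseudoinverse. The per-bag instance selection that makes the paper's method an MIL algorithm is omitted, and the toy task and network size are illustrative.

```python
# Hedged sketch of a basic extreme learning machine (ELM): random hidden
# layer, analytic (pseudoinverse) output weights -- the core that ELM-MIL
# builds on.
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # output weights, one shot
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary problem: the label is the sign of the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
T = np.sign(X[:, :1])
W, b, beta = elm_train(X, T)
print(np.mean(np.sign(elm_predict(X, W, b, beta)) == T))  # near 1.0
```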

  20. Optimal Control for Bufferbloat Queue Management Using Indirect Method with Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Amr Radwan

    2016-01-01

    Full Text Available Because memory buffers have become larger and cheaper, they have been put into network devices to reduce the number of lost packets and improve network performance. However, the consequences of large buffers are long queues at network bottlenecks and throughput saturation, which has recently been noticed in the research community as the bufferbloat phenomenon. To address such issues, in this article we design a forward-backward optimal control queue algorithm based on an indirect approach with parametric optimization. The cost function we want to minimize represents a trade-off between queue length and packet loss rate performance. Through the integration of an indirect approach with parametric optimization, our proposal has advantages of scalability and accuracy compared to direct approaches, while still maintaining good throughput and a shorter queue length than several existing queue management algorithms. Numerical analysis, simulation in ns-2, and experimental results are all provided to solidify the efficiency of our proposal. In detailed comparisons to other conventional algorithms, the proposed procedure can run much faster than direct collocation methods while maintaining a desired short queue (≈40 packets in simulation and 80 ms in the experimental test).

  1. An Invariant-Preserving ALE Method for Solids under Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sambasivan, Shiv Kumar [Los Alamos National Laboratory; Christon, Mark A [Los Alamos National Laboratory

    2012-07-17

    We are proposing a fundamentally new approach to ALE methods for solids undergoing large deformation due to extreme loading conditions. Our approach is based on a physically motivated and mathematically rigorous construction of the underlying Lagrangian method, vector/tensor reconstruction, remapping, and interface reconstruction. It is transformational because it deviates dramatically from traditionally accepted ALE methods and provides the following set of unique attributes: (1) a three-dimensional, finite volume, cell-centered ALE framework with advanced hypo-/hyper-elasto-plastic constitutive theories for solids; (2) a new physically and mathematically consistent reconstruction method for vector/tensor fields; (3) an advanced invariant-preserving remapping algorithm for vector/tensor quantities; (4) a moment-of-fluid (MoF) interface reconstruction technique for multi-material problems with solids undergoing large deformations. This work brings together many new concepts that, in combination with emergent cell-centered Lagrangian hydrodynamics methods, will produce a cutting-edge ALE capability and define a new state of the art. Many ideas in this work are new, completely unexplored, and hence high risk. The proposed research and the resulting algorithms will be of immediate use in Eulerian, Lagrangian and ALE codes under the ASC program at the lab. In addition, the research on invariant-preserving reconstruction/remap of tensor quantities is of direct interest to ongoing CASL and climate modeling efforts at LANL. The application space impacted by this work includes inertial confinement fusion (ICF), Z-pinch, munition-target interactions, geological impact dynamics, shock processing of powders, and shaped charges. The ALE framework will also provide a suitable test-bed for rapid development and assessment of hypo-/hyper-elasto-plastic constitutive theories. Today, there are no invariant-preserving ALE algorithms for treating solids with large deformations.

  2. Enhanced Multi-Objective Energy Optimization by a Signaling Method

    Directory of Open Access Journals (Sweden)

    João Soares

    2016-10-01

    Full Text Available In this paper three metaheuristics are used to solve a smart grid multi-objective energy management problem with conflicting objectives, maximizing profits and minimizing carbon dioxide (CO2) emissions, and the results are compared. The metaheuristics implemented are: weighted particle swarm optimization (W-PSO), multi-objective particle swarm optimization (MOPSO) and non-dominated sorting genetic algorithm II (NSGA-II). The performance of these methods combined with multi-dimensional signaling is also compared; this technique has previously been shown to boost metaheuristic performance for single-objective problems. Hence, multi-dimensional signaling is adapted and implemented here for the proposed multi-objective problem. In addition, parallel computing is used to mitigate the methods' computational execution time. To validate the proposed techniques, a realistic case study for a chosen area of the northern region of Portugal is considered, namely part of the Vila Real distribution grid (233-bus). It is assumed that this grid is managed by an energy aggregator entity, with a reasonable number of electric vehicles (EVs), several distributed generation (DG) units, customers with demand response (DR) contracts and energy storage systems (ESS). The case study characteristics took into account several reported research works with projections for 2020 and 2050. The findings strongly suggest that the signaling method clearly improves the results and the quality of the Pareto front region.

  3. Survey of optimization methods for BIW lightweight design

    Directory of Open Access Journals (Sweden)

    Zheyun Wang

    2017-01-01

    Full Text Available Body lightweighting is important to vehicle design and development and has become one of the main research subjects in vehicle industries and research institutes. This paper systematically expounds the background and significance of the lightweight design of automobiles and explains the implementation methods for vehicle weight reduction in the fields of lightweight materials, body structural optimization design, molding technology and new connecting technologies. The present situation of domestic and foreign research on lightweight design is introduced as well. On this basis, this paper establishes a process for the multidisciplinary lightweight design of vehicles and applies a variety of simulation and modeling tools in this process. The experimental results show the effectiveness and efficiency of the method.

  4. LEGAL FORM OF BUSINESS ORGANIZATION - A METHOD OF FISCAL OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    POPA MIHAELA

    2013-08-01

    Full Text Available A fiscal optimization method for companies is rendered by their ability to choose among the various methods to legally organize their activity set out by the Romanian legislation. In this respect, a taxpayer may choose to organize their activities as a self-employed person, sole proprietorship, family-owned company, freelancer, beneficiary of copyright revenues, limited liability company, small enterprise, or joint stock company. Since a taxpayer may choose the variant most favorable to their legal status from among the ones provided by the legislation, the aim of this research paper is to show the fiscal impact generated by a natural person's choice to raise incomes from activities performed as an employee, as an authorized natural person, or in a trading company, namely a small enterprise. The outcome emphasizes the fact that, from the perspective of the net income raised by a natural person, the most favorable business organization form is to act as a small enterprise.

  5. Methods and Model Dependency of Extreme Event Attribution: The 2015 European Drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Vautard, Robert; van Oldenborgh, Geert J.; Wilcox, Laura; Seneviratne, Sonia I.

    2017-10-01

    Science on the role of anthropogenic influence on extreme weather events, such as heatwaves or droughts, has evolved rapidly in recent years. The approach of "event attribution" compares the occurrence probability of an event in the present, factual climate with its probability in a hypothetical, counterfactual climate without human-induced climate change. Several methods can be used for event attribution, based on climate model simulations and observations, and usually researchers assess only a subset of methods and data sources. Here, we explore the role of methodological choices for the attribution of the 2015 meteorological summer drought in Europe. We present contradicting conclusions on the relevance of human influence as a function of the chosen data source and event attribution methodology. Assessments using the maximum number of models and counterfactual climates with pre-industrial greenhouse gas concentrations point to an enhanced drought risk in Europe. However, other evaluations show contradictory evidence. These results highlight the need for a multi-model and multi-method framework in event attribution research, especially for events with a low signal-to-noise ratio and high model dependency such as regional droughts.

  6. A method of batch-purifying microalgae with multiple antibiotics at extremely high concentrations

    Science.gov (United States)

    Han, Jichang; Wang, Song; Zhang, Lin; Yang, Guanpin; Zhao, Lu; Pan, Kehou

    2016-01-01

    Axenic microalgal strains are highly valued in diverse microalgal studies and applications. Antibiotics, alone or in combination, are often used to avoid bacterial contamination during microalgal isolation and culture. In our preliminary trials, we found that many microalgae ceased growing in antibiotics at extremely high concentrations but could resume growth quickly when returned to an antibiotics-free liquid medium, and formed colonies when spread on a solid medium. Based on this observation, we developed a simple and highly efficient method of obtaining axenic microalgal cultures. First, microalgae of different species or strains were treated with a mixture of ampicillin, gentamycin sulfate, kanamycin, neomycin and streptomycin (each at a concentration of 600 mg/L) for 3 days; they were then transferred to an antibiotics-free medium for 5 days; finally, they were spread on solid f/2 media to allow algal colonies to form. With this method, five strains of Nannochloropsis sp. (Eustigmatophyceae), two strains of Cylindrotheca sp. (Bacillariophyceae), two strains of Tetraselmis sp. (Chlorodendrophyceae) and one strain of Amphikrikos sp. (Trebouxiophyceae) were purified successfully. The method shows promise for batch-purifying microalgal cultures.

  7. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory. Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into the theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter.

  8. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    in topological optimization: Interactive control and continuous visualization; embedding flexible voids within the design space; consideration of distinct tension / compression properties; and optimization of dual material systems. In extension, optimization procedures for skeletal structures such as trusses...... and frames are implemented. The developed procedures allow for the exploration of new territories in optimization of architectural structures, and offer new methodological strategies for bridging conceptual gaps between optimization and architectural practice....

  10. Alternative method of treating prolonged wound defects of trunk and extremities

    Directory of Open Access Journals (Sweden)

    E. V. Ponomarenko

    2017-08-01

    Full Text Available The aim of the study is to improve treatment results in patients with prolonged wound defects of the trunk and extremities by using alternative methods. Materials and methods: 75 patients with neurotrophic disorders aged 19–76 years were treated. Of the total number, 25 (33.3 %) patients were treated according to the methodology developed in the clinic. Results and discussion: In the 25 (33.3 %) cases of a neurotrophic ulcerative defect, a skin regeneration course was prescribed for 2 to 6 weeks, with positive results (complete defect healing) in all cases. The clinical experience of using a hyaluronic acid preparation has been scientifically substantiated by complex pathomorphological studies of skin biopsy material (histological, histochemical and immunohistochemical techniques using monoclonal antibodies Rb a-Hu Collagen I, Clone RAHC11, and Rb a-Hu Collagen III, Clone RAHC33 (Imtek, Russian Federation) to collagen types I and III). Conclusions: The choice of the corrective intervention method and the closure of the defect depended on the size and depth of the wound and the functional characteristics of the site of the injury. The new method of treating neurotrophic ulcers expands the prospects for treating patients with defects of the integumentary tissues. Pathomorphological examination revealed signs of healing with hyperproliferative processes in the epidermis, hyperkeratosis, parakeratosis, and excessive accumulation of collagen type I, which characterizes pathological healing and is often observed in the epidermis in chronic ulcers. A differential approach to selecting the method of closing wound surfaces makes it possible to achieve positive results in 98.1 % of cases.

  11. MST-GEN: An Efficient Parameter Selection Method for One-Class Extreme Learning Machine.

    Science.gov (United States)

    Wang, Siqi; Liu, Qiang; Zhu, En; Yin, Jianping; Zhao, Wentao

    2017-10-01

    One-class classification (OCC) models a set of target data from one class to detect outliers. OCC approaches like the one-class support vector machine (OCSVM) and support vector data description (SVDD) have wide practical applications. Recently, the one-class extreme learning machine (OCELM), which inherits the fast learning speed of the original ELM and achieves equivalent or higher data description performance than OCSVM and SVDD, was proposed as a promising alternative. However, OCELM faces the same thorny parameter selection problem as OCSVM and SVDD; it significantly affects the performance of OCELM and remains under-explored. This paper proposes minimal spanning tree (MST)-GEN, an automatic way to select proper parameters for OCELM. Specifically, we first build an n-round MST to model the structure and distribution of the given target set. With information from the n-round MST, a controllable number of pseudo outliers are generated by edge pattern detection and a novel "repelling" process, which readily overcomes two fundamental problems in previous outlier generation methods: where and how many pseudo outliers should be generated. Unlike previous methods that only generate pseudo outliers, we further exploit the n-round MST to generate pseudo target data, so as to avoid the time-consuming cross-validation process and accelerate the parameter selection. Extensive experiments on various datasets suggest that the proposed method can select parameters for OCELM in a highly efficient and accurate manner when compared with existing methods, which enables OCELM to achieve better OCC performance in OCC applications. Furthermore, our experiments show that MST-GEN can also be favorably applied to other prevalent OCC methods like OCSVM and SVDD.
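
    The minimum spanning tree that MST-GEN starts from is straightforward to compute; the sketch below builds it with SciPy and inspects the edge lengths, which is where the paper's edge pattern detection would begin. The pseudo-outlier generation and "repelling" steps are not reproduced, and the data are synthetic.

```python
# Hedged sketch: the MST over a one-class target set, the starting point of
# MST-GEN. Long edges hint at boundary/sparse regions where pseudo outliers
# would be generated.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
target = rng.normal(size=(100, 2))        # one-class target set

dists = squareform(pdist(target))         # dense pairwise distances
mst = minimum_spanning_tree(dists)        # sparse matrix with n-1 edges

rows, cols = mst.nonzero()
edge_lengths = np.asarray(mst[rows, cols]).ravel()
print(edge_lengths.mean(), edge_lengths.max())
```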

  12. Topology optimization using bi-directional evolutionary structural optimization based on the element-free Galerkin method

    Science.gov (United States)

    Shobeiri, Vahid

    2016-03-01

    In this article, the bi-directional evolutionary structural optimization (BESO) method based on the element-free Galerkin (EFG) method is presented for topology optimization of continuum structures. The mathematical formulation of the topology optimization is developed considering the nodal strain energy as the design variable and the minimization of compliance as the objective function. The EFG method is used to derive the shape functions using the moving least squares approximation. The essential boundary conditions are enforced by the method of Lagrange multipliers. Several topology optimization problems are presented to show the effectiveness of the proposed method. Many issues related to topology optimization of continuum structures, such as chequerboard patterns and mesh dependency, are studied in the examples.
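
    As a minimal illustration of the evolutionary part of a BESO loop, the sketch below ranks elements by a sensitivity measure, removes the worst, and lets previously removed elements be re-admitted while the volume fraction moves toward its target. The EFG analysis that would supply the strain-energy sensitivities is mocked with random numbers, and the evolution rate and volume target are assumed values.

```python
# Hedged sketch of a bi-directional evolutionary (BESO-style) update. The
# structural analysis is mocked; only the add/remove bookkeeping is shown.
import numpy as np

def beso_step(sensitivity, active, target_fraction, rate=0.02):
    # Move the kept fraction toward the target, keeping the highest-ranked
    # elements; removed elements can return if their sensitivity rises.
    new_fraction = max(target_fraction, active.mean() - rate)
    n_keep = int(round(new_fraction * active.size))
    keep = np.argsort(sensitivity)[::-1][:n_keep]
    new_active = np.zeros_like(active)
    new_active[keep] = True
    return new_active

rng = np.random.default_rng(0)
active = np.ones(1000, dtype=bool)
for _ in range(30):
    # Placeholder for an EFG solve returning element strain energies.
    sensitivity = rng.random(1000) + 0.5 * active
    active = beso_step(sensitivity, active, target_fraction=0.5)
print(active.mean())   # settles at the 50% volume fraction target
```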

  13. Investigation on multi-objective performance optimization algorithm application of fan based on response surface method and entropy method

    Science.gov (United States)

    Zhang, Li; Wu, Kexin; Liu, Yang

    2017-12-01

    A multi-objective performance optimization method is proposed to solve the problem that a single structural parameter of a small fan cannot balance the optimization of the static characteristics against the aerodynamic noise. In this method, three structural parameters are selected as the optimization variables, and the static pressure efficiency and the aerodynamic noise of the fan are regarded as the multi-objective performance. Furthermore, the response surface method and the entropy method are used to establish the optimization function between the optimization variables and the multi-objective performances. Finally, the optimized model is found when the optimization function reaches its maximum value. Experimental data show that the optimized model not only enhances the static characteristics of the fan but also markedly reduces the noise. The results of the study will provide a reference for the multi-objective performance optimization of other types of rotating machinery.
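
    The entropy method referred to above assigns objective weights to the criteria from the dispersion of the data. A minimal sketch follows, with purely illustrative efficiency and noise values; the response-surface step is not reproduced.

```python
# Hedged sketch: entropy weighting of two fan objectives and a simple
# weighted composite score. All numbers are invented for illustration.
import numpy as np

def entropy_weights(X):
    # X: alternatives x criteria, positive, benefit-type (larger is better).
    P = X / X.sum(axis=0)                          # column-normalize
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

# Rows: candidate designs; columns: [efficiency %, 100 - noise dB(A)].
X = np.array([[38.0, 52.0],
              [41.0, 47.0],
              [39.5, 50.0]])
w = entropy_weights(X)
scores = (X / X.max(axis=0)) @ w                   # composite performance
print(w, scores.argmax())                          # weights and best design
```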

  14. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available-velocity map for the robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as the final velocity if the pass-time is sufficiently long.
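
    A minimal sketch of the idea follows: candidate velocities are filtered through a single circular velocity obstacle, and the survivors are scored by a cost that trades goal tracking against a pass-time term. The geometry, cost weights, and the pass-time definition are simplified assumptions, not the paper's exact formulation.

```python
# Hedged sketch: velocity selection with one circular obstacle. A candidate
# collides if the relative-velocity ray passes within `radius` of the
# obstacle's relative position.
import numpy as np

def in_velocity_obstacle(v, rel_pos, obs_vel, radius):
    v_rel = v - obs_vel
    t = max((rel_pos @ v_rel) / (v_rel @ v_rel + 1e-12), 0.0)
    return np.linalg.norm(rel_pos - t * v_rel) < radius

def pass_time(v, rel_pos, obs_vel):
    # Crude proxy: time to cover the separation at the relative speed.
    return np.linalg.norm(rel_pos) / (np.linalg.norm(v - obs_vel) + 1e-12)

rel_pos = np.array([3.0, 0.5])            # obstacle position minus robot's
obs_vel = np.array([-0.2, 0.0])
goal_v  = np.array([1.0, 0.0])            # velocity the robot would prefer

candidates = [np.array([vx, vy])
              for vx in np.linspace(-1.0, 1.0, 21)
              for vy in np.linspace(-1.0, 1.0, 21)]
free = [v for v in candidates
        if not in_velocity_obstacle(v, rel_pos, obs_vel, radius=1.0)]
# Cost: stay near the goal velocity, but penalize short pass-times
# (the weight 0.5 is a hypothetical tuning value).
best = min(free, key=lambda v: np.linalg.norm(v - goal_v)
                               + 0.5 / pass_time(v, rel_pos, obs_vel))
print(best)
```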

  15. Optimized t-expansion method for the Rabi Hamiltonian

    Energy Technology Data Exchange (ETDEWEB)

    Travenec, Igor, E-mail: fyzitrav@savba.sk [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia); Samaj, Ladislav [Institute of Physics, Slovak Academy of Sciences, Dubravska cesta 9, 845 11 Bratislava (Slovakia)

    2011-10-31

    A polemic arose recently about the applicability of the t-expansion method to the calculation of the ground state energy E_0 of the Rabi model. For specific choices of the trial function and a very large number of involved connected moments, the t-expansion results are rather poor and exhibit considerable oscillations. In this Letter, we formulate the t-expansion method for trial functions containing two free parameters which capture two exactly solvable limits of the Rabi Hamiltonian. At each order of the t-series, E_0 is assumed to be stationary with respect to the free parameters. A high accuracy of E_0 estimates is achieved for small numbers (5 or 6) of involved connected moments, the relative error being smaller than 10^-4 (0.01%) within the whole parameter space of the Rabi Hamiltonian. A special symmetrization of the trial function enables us to calculate also the first excited energy E_1, with a relative error smaller than 10^-2 (1%). -- Highlights: → We study the ground state energy of the Rabi Hamiltonian. → We use the t-expansion method with an optimized trial function. → High accuracy of estimates is achieved, the relative error being smaller than 0.01%. → The calculation of the first excited state energy is made. The method has a general applicability.

  16. PARAMETRIC OPTIMIZATION IN ELECTROCHEMICAL MACHINING USING UTILITY BASED TAGUCHI METHOD

    Directory of Open Access Journals (Sweden)

    SADINENI RAMA RAO

    2015-01-01

    Full Text Available The present work deals with the application of the Taguchi method with the utility concept to optimize machining parameters with multiple characteristics in the electrochemical machining (ECM) of Al/B4C composites. An L27 orthogonal array was chosen for the experiments. A METATECH ECM setup was used to conduct the experiments. The ECM machining parameters, namely applied voltage, electrolyte concentration, electrode feed rate and percentage of reinforcement, are optimized based on multiple responses, i.e., material removal rate, surface roughness and radial overcut. The optimum machining parameters are calculated by using the utility concept and the results are compared with ANOVA. The results show that the feed rate is the most influential parameter affecting the multiple machining characteristics simultaneously. The optimum parametric combination to maximize the material removal rate and to minimize surface roughness and radial overcut simultaneously is: applied voltage 16 V, feed rate 1.0 mm/min, electrolyte concentration 30 g/L and reinforcement content 5 wt%. Experimental results show that the responses in the electrochemical machining process can be improved through this approach.

  17. Process parameter optimization for fly ash brick by Taguchi method

    Directory of Open Access Journals (Sweden)

    Prabir Kumar Chaulia

    2008-06-01

    Full Text Available This paper presents the results of an experimental investigation carried out to optimize the mix proportions of fly ash brick by the Taguchi method of parameter design. The experiments were designed using an L9 orthogonal array with four factors at three levels each. A small quantity of cement was mixed in as binding material. Both the cement and the fly ash used are counted as binding material, and the water/binder ratio is considered as one of the control factors. The effects of water/binder ratio, fly ash, coarse sand, and stone dust on the performance characteristic are analyzed using signal-to-noise ratios and mean response data. According to the results, the water/binder ratio and stone dust play the most significant roles in the compressive strength of the brick. Furthermore, the estimated optimum values of the process parameters correspond to a water/binder ratio of 0.4, fly ash of 39%, coarse sand of 24%, and stone dust of 30%. The mean value of optimal strength is predicted as 166.22 kg/cm² with a tolerance of ± 10.97 kg/cm². The confirmatory experimental result obtained for the optimum conditions is 160.17 kg/cm².
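
    Taguchi analysis of a larger-the-better response such as compressive strength uses the signal-to-noise ratio S/N = −10·log10(mean(1/y²)). The sketch below computes it for two hypothetical runs; the replicate values are invented for illustration.

```python
# Hedged sketch: larger-the-better Taguchi signal-to-noise ratio.
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))   # higher S/N is better

run_a = [158.0, 162.5, 160.1]   # replicate strengths (kg/cm^2), invented
run_b = [141.2, 149.8, 145.0]
print(sn_larger_is_better(run_a), sn_larger_is_better(run_b))
```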

  18. Buried Object Detection Method Using Optimum Frequency Range in Extremely Shallow Underground

    Science.gov (United States)

    Sugimoto, Tsuneyoshi; Abe, Touma

    2011-07-01

    We propose a new detection method for buried objects using the optimum frequency response range of the corresponding vibration velocity. Flat speakers and a scanning laser Doppler vibrometer (SLDV) are used for noncontact acoustic imaging in the extremely shallow underground. The exploration depth depends on the sound pressure, but it is usually less than 10 cm. Styrofoam, wood (silver fir) and acrylic boards of the same size, styrofoam boards of different sizes, a hollow toy duck, a hollow plastic container, a plastic container filled with sand, a hollow steel can and an unglazed pot are used as buried objects, buried in sand at a depth of about 2 cm. The imaging procedure for buried objects using the optimum frequency range is as follows. First, the standardized difference from the average vibration velocity is calculated for all scan points. Next, using this result, underground images are made using a constant frequency width to search for the frequency response range of the buried object. After choosing an approximate frequency response range, the difference between the average vibration velocity for all points and that for several points that showed a clear response is calculated for the final confirmation of the optimum frequency range. Using this optimum frequency range, we can obtain the clearest image of the buried object. The experimental results confirm the effectiveness of the proposed method. In particular, a clear image of the buried object was obtained even when the SLDV image was unclear.

  19. A new fuzzy optimal data replication method for data grid

    Directory of Open Access Journals (Sweden)

    Zeinab Ghilavizadeh

    2013-03-01

    Full Text Available These days we face large data sets in several applications, and they have become an important part of common resources in different scientific areas. In fact, many applications handle huge amounts of information, either in terabytes or in petabytes. Many scientists work with huge amounts of data distributed geographically around the world through advanced computing systems. The huge volume of data and calculations has created new problems in the access, processing and distribution of data. The challenges of the data management infrastructure have become very difficult under large amounts of data, different geographical spaces, and complicated calculations. The data grid is a remedy to all the mentioned problems. In this paper, a new method of dynamic optimal data replication in data grids is introduced; it reduces the total job execution time and increases the locality of accesses by detecting and weighing the factors influencing data replication. The proposed method is composed of two main phases. The first is the phase of file application and replication operation, in which we evaluate three factors influencing data replication and determine whether the requested file should be replicated or used remotely. In the second phase, the replacement phase, the proposed method investigates whether there is enough space in the destination to store the requested file. In this phase, the method also chooses the replica with the lowest value for deletion, considering three replica factors, to increase the performance of the system. The simulation results indicate the improved performance of our proposed method compared with other replication methods implemented in the OptorSim simulator.

  20. Convex functions and optimization methods on Riemannian manifolds

    CERN Document Server

    Udrişte, Constantin

    1994-01-01

    This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...

  1. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary...... operation constraints, while the third approach uses nonlinear programming. In the present case the non-linearity occurs in the boiler efficiency of power plants and the cv-value of an extraction plant. The linear programming model is used as a benchmark, as this type is frequently used, and has the lowest...... is increased by 23 %, while for a non-linear approach the increase is more than 39 %. In terms of the total amount of heat produced by heat pumps, the two approaches exceed the reference by approx. 23 % and 32 % respectively. The results indicate a higher coherence between the two latter approaches...

  2. Shape optimized headers and methods of manufacture thereof

    Science.gov (United States)

    Perrin, Ian James

    2013-11-05

    Disclosed herein is a shape optimized header comprising a shell that is operative for collecting a fluid; wherein an internal diameter and/or a wall thickness of the shell vary with a change in pressure and/or a change in a fluid flow rate in the shell; and tubes; wherein the tubes are in communication with the shell and are operative to transfer fluid into the shell. Disclosed herein is a method comprising fixedly attaching tubes to a shell; wherein the shell is operative for collecting a fluid; wherein an internal diameter and/or a wall thickness of the shell vary with a change in pressure and/or a change in a fluid flow rate in the shell; and wherein the tubes are in communication with the shell and are operative to transfer fluid into the shell.

  3. A Semi-Analytic Monte Carlo Method for Optimization Problems

    Science.gov (United States)

    Sale, Kenneth E.

    1997-10-01

    Presently available Monte Carlo radiation transport codes require all aspects of a problem to be fixed, so that optimizing a system involves running the code multiple times, once for each alternative value of the parameters that characterize the system (e.g., thickness or shape of an attenuator). By combining the standard Monte Carlo algorithm (Lux, Ivan and Koblinger, Laszlo, Monte Carlo Particle Transport Methods: Neutron and Photon Calculations, CRC Press, 1991) with the next-event point flux estimator and a computer algebra system, it is possible to calculate the flux at a point as a function of the parameters describing the problem rather than as a single number for one specific set of parameter values. The calculated flux function is a perturbative estimate about the default values of the problem parameters. Parametric descriptions can be used in the geometry or material specifications. Several examples will be presented.

  4. Inferences on weather extremes and weather-related disasters: A review of statistical methods

    NARCIS (Netherlands)

    Visser, A.C.; Petersen, A.C.

    2012-01-01

    The study of weather extremes and their impacts, such as weather-related disasters, plays an important role in climate change research. Due to the great societal consequences of extremes - historically, now and in the future - the peer-reviewed literature on this theme has been growing enormously.

  5. Optimization of parameters for bonnet polishing based on the minimum residual error method

    Science.gov (United States)

    Wang, Chunjin; Yang, Wei; Ye, Shiwei; Wang, Zhenzhong; Zhong, Bo; Guo, Yinbiao; Xu, Qiao

    2014-07-01

    For extremely high accuracy optical elements, the residual error induced by the superposition of the tool influence function cannot be ignored and leads to medium-high frequency errors. Even though the continuous computer-controlled optical surfacing process can decrease this error to a certain degree compared with the discrete one, the error still exists in the scanning direction when adopting a raster path. The purpose of this paper is to optimize the parameters used in bonnet polishing to restrain this error. The formation of this error is theoretically demonstrated and also experimentally presented using our newly designed prototype. Orthogonal simulation experiments were designed for the following five major operating parameters (some of them normalized) at four levels: inner pressure, z offset, raster distance, H-axis speed, and precession angle. The minimum residual error method was used to evaluate the simulations. The results showed the impact of the evaluated parameters on the residual error; the parameters in descending order of impact are as follows: raster distance, z offset, inner pressure, H-axis speed, and precession angle. An optimal combination of these five parameters among the four levels considered, based on the minimum residual error method, was determined.
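
    The residual error discussed above comes from superposing the tool influence function (TIF) at discrete raster passes. The sketch below reproduces the effect in one dimension with a Gaussian TIF; the TIF width and raster distance are assumed values, not the prototype's measured parameters.

```python
# Hedged sketch: ripple left by a Gaussian TIF swept along a raster path,
# evaluated on a 1-D cut across the scan lines.
import numpy as np

x = np.linspace(0.0, 50.0, 2001)          # position across scan lines (mm)
sigma = 2.0                               # assumed TIF width (mm)
raster = 4.0                              # assumed raster distance (mm)

centers = np.arange(0.0, 50.0 + raster, raster)
removal = sum(np.exp(-(x - c) ** 2 / (2.0 * sigma**2)) for c in centers)

interior = (x > 10.0) & (x < 40.0)        # ignore edge roll-off
print(removal[interior].max() - removal[interior].min())
# The ripple shrinks as the raster distance decreases relative to sigma.
```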

  6. METHODS FOR OPTIMIZING ENERGY BALANCE IN OVERWEIGHT PEOPLE

    Directory of Open Access Journals (Sweden)

    Malakhova Tatyana Vladimirovna

    2013-06-01

    Full Text Available The article proposes a new approach to solving the problem of overweight and obesity based on optimizing the energy balance in the body, using technologies applied in industrial heat power devices. The main task in forming the diet is to ensure a steady, moderate and long-term supply of glucose into the bloodstream, avoiding drastic one-time jumps in blood sugar levels. The proposed weight loss method was tested among women aged 46–52 years with excess weight who were prone to obesity. Control weighing was performed every 7 days, and the study period was 60 weeks. Proper regulation of the food composition and "fuel injection" rhythm, optimal from the point of view of thermal technology, allows using the "negative calorie effect" against the background of an overall revitalization of metabolic processes. From the first weeks of applying the proposed method, a significant reduction in body weight was observed. An important prerequisite for the success of the method is the correct order of food intake.

  7. New algorithm for extreme temperature measurements

    NARCIS (Netherlands)

    Damean, N.

    2000-01-01

    A new algorithm for the measurement of extreme temperatures is presented. This algorithm reduces the measurement of the unknown temperature to the solving of an optimal control problem, using a numerical computer. Based on this method, a new device for extreme temperature measurements is designed.

  8. Optimization of dot blot method to detect bcr/abl transcripts in chronic myeloid leukemia

    Energy Technology Data Exchange (ETDEWEB)

    Tharapel, S.A.; Zhao, J. [Univ. of Tennessee, Memphis, TN (United States)

    1994-09-01

    Detection of abl-bcr fusion transcripts using molecular methodologies is becoming an attractive alternative (or supplement) to traditional cytogenetics in identifying the Philadelphia (Ph) chromosome. Among these methods, the RT-PCR technique has provided an extremely powerful tool for improving the detection of bcr/abl translocations through enzymatic amplification of the reverse-transcribed cDNA. The analysis of PCR products can be accomplished by a number of techniques, including dot blot following liquid-phase hybridization. In order to render the detection of PCR products more simple, accurate and efficient, and therefore more amenable to routine clinical laboratory use, we optimized several parameters of the procedure. (1) We discovered that with a starting material of 1 µg of total RNA, the amount of the final PCR-amplified product was linear in the number of PCR cycles between 20 and 30 cycles. Since the dot blot procedure does not separate the amplified products according to their sizes, an increased background would increase the false positive rate. (2) If a detection sensitivity of 1 in 10^3 cells is sufficient, then a nested or second PCR amplification is not necessary. (3) Starting material of more than 5 µg of total RNA would decrease the amplification efficiency and therefore compromise the sensitivity. (4) Ten minutes of hybridization gave equal signal intensity to 24 hours. (5) The ionic strength and temperature in the washing step were also tested. Upon optimization of each parameter, the detection procedure was tested on 18 clinical samples. Compared to the procedures that are currently available, our optimized procedure is less time consuming, has higher sensitivity and has a lower false positive rate. This method has the potential to be automated and can therefore be used as a screening method for the Ph chromosome in high-volume settings.

  9. Optimization methods for passive damper placement and tuning

    Science.gov (United States)

    Milman, M. H.; Chu, C. C.

    1992-01-01

    The effectiveness of viscous elements in introducing damping into a structure is a function of several variables, including their number, their location in the structure, and their physical properties. In this paper several optimization problems are posed to optimize these variables. The paper investigates various metrics for defining the optimization problem and compares the damping profiles that are obtained. Both discrete and continuous optimization problems are formulated and solved, corresponding, respectively, to the placement of damping elements and to the tuning of their parameters. The paper particularly emphasizes techniques for making feasible the large-scale problems resulting from the optimization formulations. Numerical results involving a lightly damped test structure are presented.

  10. Manual muscle testing: a method of measuring extremity muscle strength applied to critically ill patients.

    Science.gov (United States)

    Ciesla, Nancy; Dinglas, Victor; Fan, Eddy; Kho, Michelle; Kuramoto, Jill; Needham, Dale

    2011-04-12

    Survivors of acute respiratory distress syndrome (ARDS) and other causes of critical illness often have generalized weakness, reduced exercise tolerance, and persistent nerve and muscle impairments after hospital discharge. Using an explicit protocol with a structured approach to training and quality assurance of research staff, manual muscle testing (MMT) is a highly reliable method for assessing strength, using a standardized clinical examination, for patients following ARDS, and can be completed with mechanically ventilated patients who can tolerate sitting upright in bed and are able to follow two-step commands (7, 8). This video demonstrates a protocol for MMT, which has been taught to ≥43 research staff who have performed >800 assessments on >280 ARDS survivors. Modifications for the bedridden patient are included. Each muscle is tested with specific techniques for positioning, stabilization, resistance, and palpation for each score of the 6-point ordinal Medical Research Council scale. Three upper and three lower extremity muscles are graded in this protocol: shoulder abduction, elbow flexion, wrist extension, hip flexion, knee extension, and ankle dorsiflexion. These muscles were chosen based on the standard approach for evaluating patients for ICU-acquired weakness used in prior publications (1, 2).

  11. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low seabed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative flow assurance method is to heat the pipeline. This concept, known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermophysical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and used for the solution of the state estimation problem.

  12. Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity

    Science.gov (United States)

    Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.

    2016-01-01

    Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing, with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide for interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557

  13. Experimental evaluation of optimization method for developing ultraviolet barrier coatings

    Science.gov (United States)

    Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao

    2014-01-01

    Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was determined from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically, with the radiative transfer in the coating modeled using the radiation element method by ray emission model (REM2). To validate the calculated results, the transmittances of these coatings were measured by a spectrophotometer. A UV barrier coating with low UV transmittance and high visible transmittance could be achieved, and the calculated transmittance showed a spectral tendency similar to the measured one. The use of appropriate particles with optimum size, coating thickness and volume fraction results in effective UV barrier coatings; such coatings can thus be achieved by the application of optical engineering.

  14. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars have prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polarorbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  15. A second-order unconstrained optimization method for canonical-ensemble density-functional methods.

    Science.gov (United States)

    Nygaard, Cecilie R; Olsen, Jeppe

    2013-03-07

    A second-order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham density functional theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather is optimized by the algorithm. SOEO is a second-order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second-order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry-broken pure-state solutions when using functionals with exact exchange, such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as the local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
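
    The occupation-angle device mentioned in the abstract can be realized, for example, by the following parametrization; the paper's exact convention may differ.

```latex
% Assumed parametrization: occupation angles keep each occupation
% number in [0, 2] automatically, for any real angle.
\begin{align}
  n_p &= 2\sin^2\theta_p, \qquad \theta_p \in \mathbb{R},
       \qquad 0 \le n_p \le 2,\\
  N   &= \sum_p n_p \quad \text{(held fixed by the built-in restriction).}
\end{align}
```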

  16. Development of multi-objective genetic algorithm concurrent subspace optimization (MOGACSSO) method with robustness

    Science.gov (United States)

    Parashar, Sumeet

    Most engineering design problems are complex and multidisciplinary in nature, and quite often require more than one objective (cost) function to be extremized simultaneously. For multi-objective optimization problems there is not a single optimum solution, but a set of optimum solutions called the Pareto set. The primary goal of this research is to develop a heuristic solution strategy to enable multi-objective optimization of highly coupled multidisciplinary design applications, wherein each discipline is able to retain some degree of autonomous control during the process. To achieve this goal, this research extends the capability of the Multi-Objective Pareto Concurrent Subspace Optimization (MOPCSSO) method to generate large numbers of non-dominated solutions in each cycle, with subsequent update and refinement, thereby greatly increasing efficiency. While the conventional MOPCSSO approach easily generates Pareto solutions, it will only generate one Pareto solution at a time: in order to generate the complete Pareto front, MOPCSSO requires multiple runs (translating into many system convergence cycles) using different initial starting points. In this research, a genetic algorithm-based heuristic solution strategy is developed for multi-objective problems in coupled multidisciplinary design. The Multi-Objective Genetic Algorithm Concurrent Subspace Optimization (MOGACSSO) method allows for the generation of relatively evenly distributed Pareto solutions in a faster and more efficient manner than repeated implementation of MOPCSSO. While achieving an optimum design, it is often also desirable that the design be robust to uncontrolled parameter variations. In this research, the capability of the MOGACSSO method is therefore also extended to generate Pareto points that are robust in terms of performance and feasibility for given uncontrolled parameter variations. The Robust-MOGACSSO method developed in this research can generate a large number of designs
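
    A basic building block of any such multi-objective method is the Pareto dominance test used to keep non-dominated designs. The sketch below is a generic illustration (it is not the MOGACSSO implementation), assuming minimization of all objectives:

        import numpy as np

        def dominates(a, b):
            # a dominates b (minimization): no worse in every objective, better in at least one
            return bool(np.all(a <= b) and np.any(a < b))

        def non_dominated(points):
            # keep only the points that no other point dominates
            return np.array([p for i, p in enumerate(points)
                             if not any(dominates(q, p)
                                        for j, q in enumerate(points) if j != i)])

        objs = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
        print(non_dominated(objs))  # [3, 3] is dominated by [2, 2] and drops out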

  17. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons based on one specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for the selection of the optimal method for a composite index. The selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
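
    For readers unfamiliar with TOPSIS, the following minimal sketch ranks alternatives by their closeness to an ideal solution; the data, weights and indicators are hypothetical, and the DEA and PROMETHEE steps are not shown:

        import numpy as np

        def topsis(matrix, weights, benefit):
            # matrix: alternatives x criteria; benefit[j] is True if larger is better
            v = matrix / np.linalg.norm(matrix, axis=0) * weights   # weighted normalized matrix
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)               # distance to ideal point
            d_neg = np.linalg.norm(v - anti, axis=1)                # distance to anti-ideal point
            return d_neg / (d_pos + d_neg)                          # closeness coefficient

        # three hypothetical departments scored on two risk indicators (lower is better)
        X = np.array([[12.0, 0.8], [9.0, 1.1], [15.0, 0.5]])
        scores = topsis(X, np.array([0.6, 0.4]), np.array([False, False]))
        print(np.argsort(scores)[::-1])  # ranking, best performer first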

  18. Optimized shear wave generation using hybrid beamforming methods.

    Science.gov (United States)

    Nabavizadeh, Alireza; Greenleaf, James F; Fatemi, Mostafa; Urban, Matthew W

    2014-01-01

    Elasticity imaging is a medical imaging modality that measures tissue elasticity as an aid in the diagnosis of certain diseases. Shear wave-based methods have been developed to perform elasticity measurements in soft tissue. These methods often use the radiation force mechanism of focused ultrasound to induce shear waves in soft tissues such as the liver, kidney, breast, thyroid and skeletal muscle. The efficiency of the ultrasound beam in producing broadband extended shear waves in soft tissue is very important to the widespread use of this modality. Hybrid beamforming combines two types of focusing, conventional spherical focusing and axicon focusing, to produce a beam for generating a shear wave with increased depth-of-field (DOF), so that measurements can be made with a shear wave with a consistent wave front. Spherical focusing is used in many applications to achieve high lateral resolution but has a low DOF. Axicon focusing, with a cone-shaped transducer, can provide good lateral resolution with a large DOF. We describe our linear aperture design and beam optimization performed using angular spectrum simulations. We performed a large parametric simulation study in which we varied the focal depth for the spherical focusing portion of the aperture, the numbers of elements devoted to the spherical and axicon focusing portions of the aperture, and the opening angle used for axicon focusing. The hybrid beamforming method was experimentally tested in two phantoms, and the shear wave speed measurement accuracy and DOF for each hybrid beam were evaluated. We compared our results with those for shear waves generated using only spherical focusing. The results of this study indicate that hybrid beamforming is capable of producing a beam with increased DOF over which accurate shear wave speed measurements can be made for different-size apertures and at different focal depths. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  19. Rainfall extremes, weather and climatic characterization over complex terrain: A data-driven approach based on signal enhancement methods and extreme value modeling

    Science.gov (United States)

    Pineda, Luis E.; Willems, Patrick

    2017-04-01

    Weather and climatic characterization of rainfall extremes is of both scientific and societal value for hydrometeorological risk management, yet discrimination of local and large-scale forcing remains challenging in data-scarce, complex-terrain environments. Here, we present an analysis framework that separates weather (seasonal) regimes and climate (inter-annual) influences using data-driven process identification. The approach is based on signal-to-noise separation methods and extreme value (EV) modeling of multisite rainfall extremes. The EV models use semi-automatic parameter learning [1] for model identification across temporal scales. At the weather scale, the EV models are combined with a state-based hidden Markov model [2] to represent the spatio-temporal structure of rainfall as persistent weather states. At the climatic scale, the EV models are used to decode the drivers leading to shifts of the weather patterns. The decoding is performed in a climate-to-weather signal subspace, built via dimension reduction of climate model proxies (e.g. sea surface temperature and atmospheric circulation). We apply the framework to the Western Andean Ridge (WAR) in Ecuador and Peru (0-6°S) using ground data from the second half of the 20th century. We find that the meridional component of the winds is what matters for the in-year and inter-annual variability of high rainfall intensities along the northern WAR (0-2.5°S). There, low-level southerly winds act as advection drivers of oceanic moisture during the normal rainy season and during weak/moderate El Niño (EN) events, whereas the strong EN type, with its unique moisture surplus, is locally advected to the lowlands of the central WAR. Moreover, the coastal ridges south of 3°S dampen meridional airflows, leaving local hygrothermal gradients to control the in-year distribution of rainfall extremes and their anomalies. Overall, we show that the framework, which does not make any prior assumption on the explanatory power of the weather
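
    The EV-modeling step can be illustrated in isolation. The sketch below uses synthetic data and does not reproduce the semi-automatic parameter learning of [1]; it fits a generalized extreme value distribution to annual rainfall maxima and reads off return levels:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        annual_max = rng.gumbel(loc=40.0, scale=8.0, size=50)  # synthetic annual maxima (mm/h)

        # maximum-likelihood fit of a generalized extreme value (GEV) distribution
        shape, loc, scale = stats.genextreme.fit(annual_max)

        # the T-year return level is the (1 - 1/T) quantile of the fitted GEV
        for T in (10, 50, 100):
            print(T, stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale))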

  20. Optimal Control with Time Delays via the Penalty Method

    Directory of Open Access Journals (Sweden)

    Mohammed Benharrat

    2014-01-01

    We prove necessary optimality conditions of Euler-Lagrange type for a problem of the calculus of variations with time delays, where the delay in the unknown function is different from the delay in its derivative. Then a more general optimal control problem with time delays is considered. The main result gives a convergence theorem, allowing us to obtain a solution to the delayed optimal control problem by considering a sequence of delayed problems of the calculus of variations.

  1. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

    In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve systems of Fredholm integral equations, and its effectiveness is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with those of the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).

  2. Electro-Fenton oxidation of coking wastewater: optimization using the combination of central composite design and convex optimization method.

    Science.gov (United States)

    Zhang, Bo; Sun, Jiwei; Wang, Qin; Fan, Niansi; Ni, Jialing; Li, Weicheng; Gao, Yingxin; Li, Yu-You; Xu, Changyou

    2017-10-01

    The electro-Fenton treatment of coking wastewater was evaluated experimentally in a batch electrochemical reactor. Based on a central composite design coupled with response surface methodology, a quadratic regression equation was developed to model the total organic carbon (TOC) removal efficiency. This model was further proved, by means of analysis of variance, to accurately predict the optimum of the process variables. With the aid of the convex optimization method, which is a global optimization method, the optimal parameters were determined as a current density of 30.9 mA/cm2, an Fe2+ concentration of 0.35 mg/L, and a pH of 4.05. Under the optimized conditions, the corresponding TOC removal efficiency was up to 73.8%. The maximum TOC removal efficiency achieved was further confirmed by the results of gas chromatography-mass spectrometry analysis.
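
    The response surface step can be sketched as follows: fit a full quadratic model to designed experiments by least squares, then maximize the fitted surface over the experimental region. The data and factor ranges below are synthetic stand-ins, not the study's measurements:

        import numpy as np
        from scipy.optimize import minimize

        def quad_features(x):
            # full quadratic model in three factors: intercept, linear, square, interaction terms
            x1, x2, x3 = x
            return np.array([1, x1, x2, x3, x1*x1, x2*x2, x3*x3, x1*x2, x1*x3, x2*x3])

        rng = np.random.default_rng(1)
        X = rng.uniform([20, 0.1, 3], [40, 0.6, 5], size=(20, 3))   # (current density, Fe2+, pH)
        toc = lambda x: 74 - 0.02*(x[0]-31)**2 - 40*(x[1]-0.35)**2 - 2*(x[2]-4)**2
        y = np.array([toc(x) for x in X]) + rng.normal(0, 0.3, 20)  # noisy TOC removal (%)

        A = np.array([quad_features(x) for x in X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)                # least-squares coefficients

        res = minimize(lambda x: -quad_features(x) @ beta, x0=[30.0, 0.3, 4.0],
                       bounds=[(20, 40), (0.1, 0.6), (3, 5)])       # maximize fitted response
        print(res.x, -res.fun)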

  3. Influence of Pareto optimality on the maximum entropy methods

    Science.gov (United States)

    Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.

    2017-07-01

    Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large deformation as well as crack propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered data approximation has, however, deviated from the usual way of defining the approximation functions, resulting in maximum entropy approximants. In addition, an objective functional which controls the degree of locality has resulted in local maximum entropy approximants. These are based on an information-theoretical Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and the proper choice of both plays a vital role in attaining the desired accuracy. The present work focuses on the influence of the locality parameter, which defines the degree of locality, and of the priors (Gaussian, cubic spline and quartic spline functions) on the behavior of local maximum entropy approximants.

  4. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Audio segmentation is a basis for multimedia content analysis, which is one of the most important and widely used applications nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream, on the basis of its content, into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, first, into speech and non-speech segments by using bagged SVMs; non-speech segments are further classified into music and environment sound by using ANNs; and, lastly, speech segments are classified into silence and pure-speech segments by a rule-based classifier. Minimal data is used for training the classifiers; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
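
    The first stage of the cascade, bagged SVMs for the speech/non-speech split, can be sketched with scikit-learn on synthetic stand-ins for frame-level features; the real feature extraction and the ANN and rule-based stages are omitted:

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 8))                    # synthetic frame-level audio features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = speech, 0 = non-speech (toy labels)

        # an ensemble of SVMs trained on bootstrap samples, as in bagging
        clf = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10).fit(X[:300], y[:300])
        print(clf.score(X[300:], y[300:]))               # held-out accuracy of the first stage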

  5. Optimization of Methods Verifying Volunteers' Ability to Provide Hospice Care.

    Science.gov (United States)

    Szeliga, Marta; Mirecka, Jadwiga

    2016-12-14

    The presented work attempts to optimize the methods used for verification of candidates for medical voluntary work in a hospice and to decrease the danger of a negative influence of an incompetent volunteer on a person in a terminal stage of a disease and his or her relatives. The study was carried out in St. Lazarus Hospice in Krakow, Poland, and included 154 adult participants in four consecutive editions of "A course for volunteers - a guardian of the sick" organized by the hospice. In order to improve the recruitment of these workers, the hitherto used methods of selection (an interview with the coordinator of volunteering and no less than 50% attendance in the classes of a preparatory course for volunteers) were expanded with additional instruments, namely tests whose usefulness was examined in practice. Candidates' knowledge was tested with a written examination which consisted of four open questions and an MCQ test comprising 31 questions. Practical abilities were checked by an Objective Structured Clinical Examination (OSCE). A reference point for the results of these tests was a hidden standardized long-term observation carried out during the subsequent work of the volunteers on the stationary ward of the hospice, using the Amsterdam Attitude and Communication Scale (AACS). Among the instruments used, the practical OSCE-type test had the greatest value (confirmed by quantitative and qualitative analysis) in predicting how a given person would cope with practical tasks and in contact with the sick and their relatives.

  6. Application of Taguchi method for cutting force optimization in rock ...

    Indian Academy of Sciences (India)

    In this paper, an optimization study was carried out for the cutting force (Fc) acting on circular diamond sawblades in rock sawing. The peripheral speed, traverse speed, cut depth and flow rate of cooling fluid were considered as operating variables and optimized by using Taguchi approach for the Fc. L16(4^4) orthogonal ...

  7. A modified harmony search based method for optimal rural radial ...

    African Journals Online (AJOL)

    In this work, a Harmony Search (HS) based optimization approach is developed to solve the radial line planning problem. Furthermore, some modifications to the HS are presented for improving the computational efficiency of optimization problems with strongly interrelated mixed variables. A sample system is served for ...

  8. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, on the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first uses multivariate extreme value distributions of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare the results of the two approaches with the two methods. To be able to use both the Monte Carlo simulation and the design contours methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.
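
    The Monte Carlo approach can be sketched generically: draw joint samples of surge and wave height, add an empirical set-up term, and read the quantile corresponding to the desired return period. All distributions, the set-up formula and the event frequency below are illustrative assumptions, not the study's fitted models:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000                                         # simulated storm events

        surge = rng.gumbel(loc=0.6, scale=0.25, size=n)     # still-water surge (m), synthetic
        hs = 2.0 + 1.5 * surge + rng.exponential(0.8, n)    # wave height correlated with surge

        level = surge + 0.2 * hs                            # total level with a toy set-up formula

        events_per_year = 20.0                              # assumed storm frequency
        for T in (10, 100):
            p = 1.0 - 1.0 / (T * events_per_year)           # per-event non-exceedance probability
            print(T, round(np.quantile(level, p), 2))       # T-year sea level with set-up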

  9. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of the extreme response and failure probability of structures subjected to ultimate design loads is essential for the structural design of wind turbines according to the new standard IEC61400-1. This task is the focus of the present paper, by virtue of the probability density evolution method (PDEM).

  10. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    extrapolation techniques: the Weibull, Gumbel and Pareto distributions and a double-exponential asymptotic extreme value function based on the ACER method. For the successful implementation of a fully automated extrapolation process, we have developed a procedure for automatic identification of tail threshold...

  11. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Preisler, Jan [Iowa State Univ., Ames, IA (United States)

    1997-01-01

    In this work, we use lasers to enhance the possibilities of laser desorption methods and to optimize the coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan, via absorption, plumes desorbed at atmospheric pressure. All absorbing species, including neutral molecules, are monitored. Interesting features are observed, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot. The total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals a negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of the insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary, excited by a 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared with a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p

  12. Homotopy method for optimization of variable-specific-impulse low-thrust trajectories

    Science.gov (United States)

    Chi, Zhemin; Yang, Hongwei; Chen, Shiyu; Li, Junfeng

    2017-11-01

    The homotopy method has been used as a useful tool for solving fuel-optimal trajectories with constant-specific-impulse low thrust. However, the specific impulse is variable for many practical solar electric power-limited thrusters. This paper investigates the application of the homotopy method to the optimization of variable-specific-impulse low-thrust trajectories. Difficulties arise when the two commonly used homotopy functions are employed for trajectory optimization: the optimal power throttle level and the optimal specific impulse are coupled under the commonly used quadratic and logarithmic homotopy functions. To overcome these difficulties, a modified logarithmic homotopy function is proposed to serve as a gateway for trajectory optimization, leading to decoupled expressions for both the optimal power throttle level and the optimal specific impulse. A homotopy method based on this homotopy function is then proposed. Numerical simulations validate the feasibility and high efficiency of the proposed method.
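
    The general shape of a homotopy (continuation) method is easy to show in code: solve an easy problem first, then sweep the homotopy parameter toward the target problem, warm-starting each solve with the previous solution. The sketch below uses a toy root-finding system, not the paper's modified logarithmic homotopy or the optimal control equations:

        import numpy as np
        from scipy.optimize import fsolve

        def residual(x, eps):
            # homotopy between an easy system (eps = 0) and the target system (eps = 1)
            easy = x - 1.0
            hard = np.array([x[0]**3 + x[1] - 3.0, x[1]**3 - x[0] - 1.0])
            return (1.0 - eps) * easy + eps * hard

        x = np.array([1.0, 1.0])                      # exact solution of the easy system
        for eps in np.linspace(0.0, 1.0, 11):
            x = fsolve(residual, x, args=(eps,))      # warm-start from the previous step
        print(x, residual(x, 1.0))                    # solution of the target system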

  13. The Multiobjective Trajectory Optimization for Hypersonic Glide Vehicle Based on Normal Boundary Intersection Method

    Directory of Open Access Journals (Sweden)

    Zhengnan Li

    2016-01-01

    To solve the multiobjective optimization problem of hypersonic glide vehicle trajectory design subject to complex constraints, this paper proposes a multiobjective trajectory optimization method that combines the normal boundary intersection method and the pseudospectral method. The multiobjective trajectory optimization problem (MTOP) is established based on an analysis of the features of hypersonic glide vehicle trajectories. The MTOP is translated into a set of general optimization subproblems by using the normal boundary intersection method and the pseudospectral method, and the subproblems are solved by a nonlinear programming algorithm. In this method, the solution already obtained is employed as the initial guess for the next subproblem, so that the time consumed by the entire multiobjective trajectory optimization is reduced. The maximal range and minimal peak heat problem is solved by the proposed method. The numerical results demonstrate that the proposed method can obtain the Pareto front of the optimal trajectory, which can provide a reference for the trajectory design of hypersonic glide vehicles.

  14. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient...... and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case...... the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bound coupling with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising....

  15. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated ... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative permeability dielectric metamaterial design problem. The optimization algorithm uses the Finite Element Method (FEM) for solving the equilibrium and adjoint equations, and design problems are formulated for both two- and three-dimensional cases. First, the level set-based topology optimization method...

  16. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal...

  17. Determination of optimal whole body vibration amplitude and frequency parameters with plyometric exercise and its influence on closed-chain lower extremity acute power output and EMG activity in resistance trained males

    Science.gov (United States)

    Hughes, Nikki J.

    The optimal combination of whole body vibration (WBV) amplitude and frequency has not been established. Purpose: to determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods: resistance trained males (n = 13) completed the following testing sessions: on day 1, power spectrum testing of the bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 or 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatment and after sham controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on day 1. Results: no significant differences were found between pre- and sham-control values for MP and PP output or for EMG activity in RF, VL, BF and GA. A completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output or on peak and mean EMGrms amplitude and EMGrms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p power output.

  18. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
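
    A minimal real-coded genetic algorithm showing the three operators named above (reproduction, crossover, mutation) might look as follows; the objective is a toy function, not one of the structural problems from the paper:

        import numpy as np

        rng = np.random.default_rng(3)
        fitness = lambda x: -np.sum(x**2)                   # toy objective: maximize -||x||^2

        pop = rng.uniform(-5.0, 5.0, size=(30, 4))          # initial population
        for gen in range(100):
            scores = np.array([fitness(ind) for ind in pop])
            # reproduction: binary tournament selection
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
            parents = pop[winners]
            # crossover: uniform gene mixing between shuffled parent pairs
            mates = parents[rng.permutation(len(parents))]
            mask = rng.random(pop.shape) < 0.5
            pop = np.where(mask, parents, mates)
            # mutation: occasional small Gaussian perturbations
            pop = pop + rng.normal(0.0, 0.1, pop.shape) * (rng.random(pop.shape) < 0.1)
        print(pop[np.argmax([fitness(ind) for ind in pop])])  # best design found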

  19. Intelligent and nature inspired optimization methods in medicine

    DEFF Research Database (Denmark)

    Marinakis, Yannis; Marinaki, Magdalene; Dounias, Georgios

    2009-01-01

    , decrease noise and improve speed by the elimination of irrelevant or redundant features. The present paper deals with the optimization of nearest neighbour classifiers via intelligent and nature inspired algorithms for a very significant medical problem, the Pap smear cell classification problem....... The algorithms used include tabu search, genetic algorithms, particle swarm optimization and ant colony optimization. The proposed complete algorithmic scheme is tested on two sets of data. The first consists of 917 images of Pap smear cells and the second set consists of 500 images, classified carefully...

  20. Iron Pole Shape Optimization of IPM Motors Using an Integrated Method

    Directory of Open Access Journals (Sweden)

    JABBARI, A.

    2010-02-01

    An iron pole shape optimization method to reduce cogging torque in interior permanent magnet (IPM) motors is developed by using the reduced basis technique coupled with finite element and design of experiments methods. The objective function is defined as the minimum cogging torque. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the rotor pole shape optimization of a 4-pole/24-slot IPM motor.

  1. Optimal method to achieve consistently low defibrillation energy requirements.

    Science.gov (United States)

    Winter, J; Zimmermann, N; Lidolt, H; Dees, H; Perings, C; Vester, E G; Poll, L; Schipke, J D; Contzen, K; Gams, E

    2000-11-02

    Reduction of the defibrillation energy requirement offers the opportunity to decrease implantable cardioverter defibrillator (ICD) size and to increase device longevity. Therefore, the purpose of this prospective study was to obtain confirmed defibrillation thresholds (DFTs) with the generator (TRIAD lead system: RV- --> SVC+ + CAN+). Based on our previous clinical and experimental studies, we tried to lower DFTs that were > 15 J by repositioning the distal coil of the endocardial lead system in the right ventricle. A total of 190 consecutive patients requiring ICDs for ventricular fibrillation and/or recurrent ventricular tachycardia were investigated at the time of ICD implantation (42 women, 148 men; mean age 61.9 +/- 12.0 years; mean left ventricular ejection fraction 42.7 +/- 16.6%). Coronary artery disease was present in 139 patients, nonischemic dilated cardiomyopathy in 34 patients, and other etiologies in 17 patients; 47 patients had undergone previous cardiac surgery. Regardless of optimal pacing and sensing parameters, for patients having DFTs > 15 J we repositioned the distal coil of the endocardial lead system toward the intraventricular septum to include this part of both ventricles within the electrical defibrillating field. In 177 of 190 patients, induced ventricular fibrillation was successfully terminated with 15 J (group II). In all patients, repositioning was successful within a 15 J energy level (100% success). The mean DFT was 7.3 +/- 3.5 J (group I) and 11.0 +/- 4.5 J (group II). Repositioning thus proved a simple and effective method to reduce intraoperatively high DFTs. As a result of this procedure, ICDs with a 20 J output should be sufficient for the vast majority (87%) of our patients. Furthermore, we were able to avoid additional subcutaneous or epicardial electrodes in all patients.

  2. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    Science.gov (United States)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.

  3. Solving Nonlinear Optimization Problems of Real Functions in Complex Variables by Complex-Valued Iterative Methods.

    Science.gov (United States)

    Zhang, Songchuan; Xia, Youshen

    2018-01-01

    Much research has been devoted to complex-variable optimization problems due to their engineering applications. However, complex-valued optimization methods for solving complex-variable optimization problems are still an active research area. This paper proposes two efficient complex-valued optimization methods for solving constrained nonlinear optimization problems of real functions in complex variables. One solves the complex-valued nonlinear programming problem with linear equality constraints; the other solves the complex-valued nonlinear programming problem with both linear equality constraints and an -norm constraint. Theoretically, we prove the global convergence of the two proposed complex-valued optimization algorithms under mild conditions. The two proposed algorithms can solve the complex-valued optimization problem completely in the complex domain and significantly extend existing complex-valued optimization algorithms. Numerical results further show that the two proposed algorithms are faster than several conventional real-valued optimization algorithms.

  4. A new method to optimize natural convection heat sinks

    Science.gov (United States)

    Lampio, K.; Karvinen, R.

    2017-08-01

    The performance of a heat sink cooled by natural convection is strongly affected by its geometry, because buoyancy creates flow. Our model utilizes analytical results of forced flow and convection, and only conduction in a solid, i.e., the base plate and fins, is solved numerically. Sufficient accuracy for calculating maximum temperatures in practical applications is proved by comparing the results of our model with some simple analytical and computational fluid dynamics (CFD) solutions. An essential advantage of our model is that it cuts down on calculation CPU time by many orders of magnitude compared with CFD. The shorter calculation time makes our model well suited for multi-objective optimization, which is the best choice for improving heat sink geometry, because many geometrical parameters with opposite effects influence the thermal behavior. In multi-objective optimization, optimal locations of components and optimal dimensions of the fin array can be found by simultaneously minimizing the heat sink maximum temperature, size, and mass. This paper presents the principles of the particle swarm optimization (PSO) algorithm and applies it as a basis for optimizing existing heat sinks.
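
    For reference, the core PSO update is only a few lines. The sketch below minimizes a toy stand-in for the heat sink objective; it is not the paper's thermal model, and the coefficients are commonly used default PSO parameters:

        import numpy as np

        def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, size=(n, len(lo)))            # particle positions
            v = np.zeros_like(x)                                  # particle velocities
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pbest_f)]                         # global best position
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f                             # update personal bests
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        # toy "maximum temperature" as a function of two fin dimensions
        f = lambda p: (p[0] - 2.0)**2 + (p[1] - 0.5)**2
        print(pso(f, np.array([0.0, 0.0]), np.array([5.0, 2.0])))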

  5. Facile and innovative method for bioglass surface modification: Optimization studies.

    Science.gov (United States)

    Lopes, João Henrique; Fonseca, Emanuella Maria Barreto; Mazali, Italo O; Magalhães, Alviclér; Landers, Richard; Bertran, Celso Aparecido

    2017-03-01

    In this work, a facile and novel method for modification of the bioglass surface is presented, based on Ca2+(molten salt bath)|Na+(glass) ion exchange by immersion in a molten salt bath. This method makes it possible to selectively change the chemical composition of a surface layer of the glass, creating a new and more reactive bioglass in a shell that surrounds the unchanged bulk of the original BG45S5 bioglass (a core-shell type system). The modified bioglass conserves the non-crystalline structure of BG45S5 and presents a significant increase of surface reactivity in comparison with BG45S5. Melt-derived bioactive glasses BG45S5 with the nominal composition 46.1 mol% SiO2, 24.4 mol% Na2O, 26.9 mol% CaO, and 2.6 mol% P2O5 were subjected to ion exchange at 480°C in a molten mixture of Ca(NO3)2 and NaNO3 with a molar ratio of 70:30 for different time periods ranging from 0 to 60 min. The optimization studies using XRF and XRD showed that an ion exchange time of 30 min is enough to achieve the largest changes on the glass surface without altering its non-crystalline structure. The chemical composition, morphology and structure of BG45S5 and of the bioglass with modified surface were studied using several analytical techniques. FTIR and O 1s XPS results showed that the modification of the glass surface favors the formation of Si-O(NBO) groups at the expense of Si-O(BO)-Si bonds. 29Si MAS-NMR studies showed that the connectivity of the Si Qn species decreases from cross-linked Q3 units to chain-like Q2 units and finally to depolymerized Q1 and Q0 units after ion exchange. This result is consistent with a chemical model based on the enrichment of the bioglass surface with calcium ions, such that the excess of positive charges is balanced by depolymerization of the silicate network. The pH changes in the early steps of reaction of the bioactive glasses BG45S5 and BG45Ca30, in deionized water or in solutions buffered with HEPES, were also investigated. The BG45Ca30 bioactive glass exhibited a

  6. Development of Combinatorial Methods for Alloy Design and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Pharr, George M.; George, Easo P.; Santella, Michael L

    2005-07-01

    The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface -- an "alloy library" -- and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of the conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat and corrosion resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate and then alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very

  7. A novel metaheuristic method for solving constrained engineering optimization problems: Drone Squadron Optimization

    OpenAIRE

    de Melo, Vinícius Veloso

    2017-01-01

    Several constrained optimization problems have been adequately solved over the years thanks to advances in the metaheuristics area. In this paper, we evaluate a novel self-adaptive and auto-constructive metaheuristic called Drone Squadron Optimization (DSO) in solving constrained engineering design problems. This paper evaluates DSO with death penalty on three widely tested engineering design problems. Results show that the proposed approach is competitive with some very popular metaheuristics.

  8. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc.; we use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, the software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and the prospects for its development.

  9. Application of multi-stage Monte Carlo method for solving machining optimization problems

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2014-08-01

    Enhancing overall machining performance implies optimization of the machining processes, i.e. determination of the optimal combination of machining parameters. Optimization of machining processes is an active field of research where different optimization methods are being used to determine an optimal combination of different machining parameters. In this paper, the multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes: drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. The optimization solutions obtained by the multi-stage MC method were compared with the solutions of past researchers obtained using meta-heuristic optimization methods, e.g. the genetic algorithm, the simulated annealing algorithm, the artificial bee colony algorithm and the teaching-learning-based optimization algorithm. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.
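
    The multi-stage idea is simple to express in code: each stage samples uniformly, then the next stage narrows the sampling box around the best point found so far. The machining cost model below is a hypothetical placeholder, not one of the six case studies:

        import numpy as np

        def multistage_mc(f, lo, hi, stages=4, samples=2000, shrink=0.25, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            best_x, best_f = None, np.inf
            for _ in range(stages):
                x = rng.uniform(lo, hi, size=(samples, len(lo)))   # uniform sampling of the box
                fx = np.apply_along_axis(f, 1, x)
                i = np.argmin(fx)
                if fx[i] < best_f:
                    best_x, best_f = x[i], fx[i]
                span = (hi - lo) * shrink                          # shrink the box around the best
                lo = np.maximum(lo, best_x - span / 2.0)
                hi = np.minimum(hi, best_x + span / 2.0)
            return best_x, best_f

        # hypothetical machining cost vs. cutting speed (m/min) and feed (mm/rev)
        cost = lambda p: (p[0] - 180.0)**2 / 100.0 + 500.0 * (p[1] - 0.22)**2
        print(multistage_mc(cost, [50.0, 0.05], [300.0, 0.50]))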

  10. Approximate Analytical Solutions of the Regularized Long Wave Equation Using the Optimal Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    Constantin Bota

    2014-01-01

    The paper presents the optimal homotopy perturbation method, a new method for finding approximate analytical solutions of nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method exhibits accelerated convergence compared to the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results.

  11. The Strain Index: a proposed method to analyze jobs for risk of distal upper extremity disorders.

    Science.gov (United States)

    Moore, J S; Garg, A

    1995-05-01

    Based on existing knowledge and theory of the physiology, biomechanics, and epidemiology of distal upper extremity disorders, a semiquantitative job analysis methodology was developed. The methodology involves the measurement or estimation of six task variables (intensity of exertion, duration of exertion per cycle, efforts per minute, wrist posture, speed of exertion, and duration of task per day); assignment of an ordinal rating for each variable according to exposure data; then assignment of a multiplier value for each variable. The Strain Index is the product of these six multipliers. Preliminary testing suggests that the methodology accurately identifies jobs associated with distal upper extremity disorders versus jobs that are not; however, large-scale studies are needed to validate and update the proposed methodology.
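
    Since the index is simply the product of the six multipliers, the scoring step reduces to one line once the multipliers have been read from the published lookup tables; the values below are illustrative placeholders, not the published table entries:

        def strain_index(intensity, duration, efforts, posture, speed, daily_duration):
            # each argument is the multiplier already looked up for its task-variable rating
            return intensity * duration * efforts * posture * speed * daily_duration

        # multipliers assumed to come from the published tables (values are illustrative)
        print(strain_index(3.0, 1.0, 1.5, 1.0, 1.0, 0.5))  # higher scores flag riskier jobs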

  12. Application of Numerical Optimization Methods to Perform Molecular Docking on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. A. Farkov

    2014-01-01

    An analysis of numerical optimization methods for solving the molecular docking problem has been performed. Additional requirements on optimization methods arising from GPU architecture features were specified, and a promising method for implementation on GPU was selected. Its implementation is described, and performance and accuracy tests were performed.

  13. Information method of optimization parameters in the diagnosis of gas turbine engines

    Directory of Open Access Journals (Sweden)

    G. S. Zontov

    2015-01-01

    This article describes an algorithm parameter optimization method for the diagnosis of gas turbine engines, aimed at dividing zones of efficiency. The author focuses on the rational combination of methods of mathematical analysis and statistics and on developing an algorithm that optimizes the use of mathematical methods through longitudinal data collection and machine learning.

  14. Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods

    DEFF Research Database (Denmark)

    Okulov, Valery; Mikkelsen, Robert Flemming; Litvinov, I. V.

    2015-01-01

    The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, the Glauert optimization by the pulsed method is used, which is applied independently for each individual blade cross section. This method remains the mai...

  15. An Improved Optimization Method for the Relevance Voxel Machine

    DEFF Research Database (Denmark)

    Ganz, Melanie; Sabuncu, M. R.; Van Leemput, Koen

    2013-01-01

    In this paper, we will re-visit the Relevance Voxel Machine (RVoxM), a recently developed sparse Bayesian framework used for predicting biological markers, e.g., presence of disease, from high-dimensional image data, e.g., brain MRI volumes. The proposed improvement, called IRVoxM, mitigates...... the shortcomings of the greedy optimization scheme of the original RVoxM algorithm by exploiting the form of the marginal likelihood function. In addition, it allows voxels to be added and deleted from the model during the optimization. In our experiments we show that IRVoxM outperforms RVoxM on synthetic data...

  16. Improving programming skills of Mechanical Engineering students by teaching in C# multi-objective optimizations methods

    National Research Council Canada - National Science Library

    Adrian Florea; Ileana Ioana Cofaru

    2017-01-01

    .... This paper represents a software development guide for designers of suspension systems with limited programming skills that will enable them to implement their own optimization methods that improve...

  17. A method of validating climate models in climate research with a view to extreme events; Eine Methode zur Validierung von Klimamodellen fuer die Klimawirkungsforschung hinsichtlich der Wiedergabe extremer Ereignisse

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, U.

    2000-08-01

    A method is presented for validating climate models with respect to extreme events relevant to risk assessment in impact modeling. The algorithm is intended to complement conventional techniques, which mainly compare simulation results with reference data for one or only a few climatic variables at a time, with respect to how well a model reproduces the known physical processes of the atmosphere. Such investigations are often based on seasonal or annual mean values. For impact research, however, extreme climatic conditions with shorter typical time scales are generally more interesting. Furthermore, such extreme events are frequently characterized by combinations of individual extremes, which require a multivariate approach. The validation method presented here basically consists of a combination of several well-known statistical techniques, completed by a newly developed diagnosis module to quantify model deficiencies. First, critical threshold values of key climatic variables for impact research are derived, serving as criteria to define extreme conditions for a specific activity. Unlike in other techniques, the simulation results to be validated are interpolated to the reference data sampling points in the initial step of this new technique. Besides providing the same spatial representation in both data sets for the subsequent diagnostic steps, this procedure also makes it possible to leave the reference basis unchanged for any type of model output and to perform the validation on a real orography. To simultaneously identify the spatial characteristics of a given situation with respect to all considered extreme value criteria, a multivariate cluster analysis method for pattern recognition is applied separately to the simulation results and the reference data. Afterwards, various distribution-free statistical tests are applied, depending on the specific situation, to detect statistically significant

  18. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    non-convex and can, in their natural formulations, normally not be solved to global optimality. Hence, most of the articles in this thesis rely on equivalent problem reformulations with certain desirable properties in combination with developments of advanced special purpose global optimization ... Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal shape and the topology of the structure. In some cases also the optimal material properties can be determined. Optimal structural design problems are modeled...

  19. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Directory of Open Access Journals (Sweden)

    Feng Du

    2017-11-01

    This paper presents a global optimization method for structural design optimization which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates structural design optimization as a series of unconstrained optimization sub-problems using the DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated on four classical truss sizing problems. The results are compared with those reported in the literature and show remarkable statistical performance based on 30 independent runs.
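
    The DALMM outer loop follows the standard augmented Lagrangian pattern: solve a sequence of unconstrained subproblems and update the multiplier from the constraint violation. The sketch below is generic; the dynamic parameter updates and the SSO inner solver are not reproduced, and a scipy local minimizer stands in for illustration:

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(f, g, x0, rho=10.0, iters=10):
            # minimize f(x) subject to g(x) <= 0 via unconstrained subproblems
            lam, x = 0.0, np.asarray(x0, float)
            for _ in range(iters):
                def L(z):
                    viol = max(0.0, g(z) + lam / rho)          # shifted constraint term
                    return f(z) + 0.5 * rho * viol**2 - lam**2 / (2.0 * rho)
                x = minimize(L, x).x                           # inner unconstrained solve
                lam = max(0.0, lam + rho * g(x))               # multiplier update
            return x, lam

        # toy sizing problem: minimize a weight-like objective with one constraint
        f = lambda x: x[0]**2 + x[1]**2
        g = lambda x: 1.0 - x[0] - x[1]                        # feasible when x0 + x1 >= 1
        print(augmented_lagrangian(f, g, [2.0, 0.0]))          # converges near (0.5, 0.5)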

  20. Optimization of porthole die geometrical variables by Taguchi method

    Science.gov (United States)

    Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.

    2017-10-01

    Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters must be selected properly to achieve the desired product quality. In this study, numerical 3D simulations have been created and run to investigate the role of various geometrical variables on the punch load and the maximum pressure inside the welding chamber. These are important outputs, affecting, respectively, the necessary capacity of the extrusion press and the quality of the welding lines. The Taguchi technique has been used to reduce the number of numerical simulations necessary for considering the influence of twelve different geometric variables. Moreover, analysis of variance (ANOVA) has been implemented to analyze the effect of each input parameter on the two responses individually. The methodology has then been utilized to determine the optimal process configuration, optimizing the two investigated process outputs individually. Finally, the responses at the optimized parameters have been verified through finite element simulations, which approximate the predicted values closely. This study shows the feasibility of the Taguchi technique for predicting performance, for optimization and therefore for improving the design of a porthole extrusion process.
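
    The Taguchi analysis step reduces each factor level to a signal-to-noise ratio; for punch load, a "smaller is better" formulation applies. The replicate values below are hypothetical, not the simulation outputs of the study:

        import numpy as np

        def sn_smaller_is_better(y):
            # Taguchi S/N ratio for a "smaller is better" response
            y = np.asarray(y, float)
            return -10.0 * np.log10(np.mean(y**2))

        # punch-load replicates (kN, hypothetical) for three levels of one die variable
        levels = {"low": [412, 405, 418], "mid": [378, 391, 384], "high": [401, 399, 407]}
        for name, y in levels.items():
            print(name, round(sn_smaller_is_better(y), 2))  # choose the level with highest S/N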

  1. A topology optimization method for design of negative permeability metamaterials

    DEFF Research Database (Denmark)

    Diaz, A. R.; Sigmund, Ole

    2010-01-01

    A methodology based on topology optimization for the design of metamaterials with negative permeability is presented. The formulation is based on the design of a thin layer of copper printed on a dielectric, rectangular plate of fixed dimensions. An effective media theory is used to estimate...

  2. Use of Simplex Method in Determination of Optimal Rational ...

    African Journals Online (AJOL)

    ... of application was indicated. The optimal rational composition was found to be: Nsu Clay = 47.8%, quartz = 33.7% and CaCO3 = 18.5%. The other clay from Ukpor was found unsuitable at the firing temperature (1000°C) used. It showed bending strength lower than the standard requirement for all compositions studied.

  3. Optimization and Development of a Human Scent Collection Method

    Science.gov (United States)

    2007-06-04

    individual from New Mexico, the individual's new place of residence. For the previous seven years the individual had lived in the same residence in... Moreno, Infant Food From Quality Protein Maize and Chickpea: Optimization for Preparing and Nutritional Properties. Int J Food Sci Nutr, 2005. 56(4

  4. Workload Indicators Of Staffing Need Method in determining optimal ...

    African Journals Online (AJOL)

    ... available working hours, category and individual allowances, annual workloads from the previous year's statistics and optimal departmental establishment of workers. Results: There was initial resentment to the exercise because of the notion that it was aimed at retrenching workers. The team was given autonomy by the ...

  5. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used...... as optimization problems and solved by numerical methods. The objective function in the problem often models the weight or stiffness of the structure. The functions defining the feasible set of the problem limit the structural response under loading. The constraint functions often model displacements, strains...... such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal...

  6. POSSIBILITIES OF USING MONTE CARLO METHOD FOR SOLVING MACHINING OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2014-04-01

    Full Text Available Companies operating in today's machining environment are focused on improving product quality and decreasing manufacturing cost and time. In their attempts to meet these objectives, optimization of machining processes is of prime importance. Beyond the traditional optimization methods, in recent years modern meta-heuristic algorithms have been increasingly applied to solving machining optimization problems. Despite the numerous capabilities of the Monte Carlo method, its application to solving machining optimization problems has been given less attention by researchers and practitioners. The aim of this paper is to investigate the applicability of the Monte Carlo method for solving single-objective machining optimization problems and to analyze its efficiency by comparing the optimization solutions to those obtained by past researchers using meta-heuristic algorithms. For this purpose, five machining optimization case studies taken from the literature are considered and discussed.
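
As a concrete illustration of the idea, the sketch below applies plain Monte Carlo search to a toy single-objective machining model; the cost function and its coefficients are invented for illustration and are not taken from the five case studies.

```python
import random

# Hypothetical machining cost model: minimize production cost as a function
# of cutting speed v [m/min] and feed f [mm/rev]; coefficients are made up.
def production_cost(v, f):
    machining_time = 1000.0 / (v * f)        # time term falls with v and f
    tool_wear = 1e-5 * v**2.5 * f**0.75      # wear term rises with v and f
    return machining_time + tool_wear

def monte_carlo_minimize(cost, bounds, n_samples=100_000, seed=0):
    """Plain Monte Carlo search: sample uniformly inside the bounds and
    keep the best point seen so far."""
    rng = random.Random(seed)
    best_x, best_c = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        c = cost(*x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

x_opt, c_opt = monte_carlo_minimize(production_cost, [(50, 400), (0.1, 0.8)])
print(f"best (v, f) = {x_opt}, cost = {c_opt:.4f}")
```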

  7. General method for automatic on-line beamline optimization based on genetic algorithm.

    Science.gov (United States)

    Xi, Shibo; Borgna, Lucas Santiago; Du, Yonghua

    2015-05-01

    It is essential but inconvenient to perform high-quality on-line optimization for synchrotron radiation beamlines. Usually, synchrotron radiation beamlines are optimized manually, which is time-consuming and makes it difficult to obtain a global optimum over all optical elements of the beamline. In this contribution a general method based on the genetic algorithm for automatic beamline optimization is introduced. This method can optimize all optical components of any beamline simultaneously and efficiently. To test this method, a program developed using LabVIEW is examined at the XAFCA beamline of the Singapore Synchrotron Light Source to optimize the beam flux at the sample position. The results demonstrate that the beamline can be optimized within 17 generations even when the initial flux is as low as 4% of its maximum value.
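
A minimal sketch of the kind of real-coded genetic loop such a system could run is shown below; the `flux` surrogate is a stand-in for the hardware measurement the LabVIEW program performs, and the operator choices (truncation selection, one-point crossover, Gaussian mutation) are our assumptions, not details from the paper.

```python
import random

# Toy stand-in for the measured flux at the sample position as a function of
# motor positions; a real setup would query the beamline hardware here.
def flux(settings):
    return -sum((s - 0.3) ** 2 for s in settings)  # peak when every motor is at 0.3

def genetic_optimize(fitness, n_dims, pop_size=40, generations=17,
                     mutation_sigma=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness, best first
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_dims)         # one-point crossover (n_dims >= 2)
            children.append([g + rng.gauss(0.0, mutation_sigma)  # Gaussian mutation
                             for g in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_optimize(flux, n_dims=4)
print(best, flux(best))
```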

  8. The Hybrid BFGS-CG Method in Solving Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Mohd Asrul Hery Ibrahim

    2014-01-01

    Full Text Available In solving large scale problems, the quasi-Newton method is known as one of the most efficient methods for unconstrained optimization. Hence, a new hybrid method, known as the BFGS-CG method, has been created by combining the search directions of conjugate gradient methods and quasi-Newton methods. In comparison to standard BFGS methods and conjugate gradient methods, the BFGS-CG method shows significant improvement in the total number of iterations and CPU time required to solve large scale unconstrained optimization problems. We also prove that the hybrid method is globally convergent.
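
The hybrid itself is not reproduced here, but SciPy exposes both ingredient methods, which makes it easy to run the kind of iteration-count comparison the abstract refers to (a baseline comparison only, not the authors' BFGS-CG):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Compare the two ingredients of the hybrid, quasi-Newton BFGS and nonlinear
# conjugate gradients, on the 100-dimensional Rosenbrock function.
x0 = np.full(100, -1.2)
for method in ("BFGS", "CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"maxiter": 10_000})
    print(f"{method:5s}  iterations={res.nit:5d}  f*={res.fun:.3e}")
```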

  9. Methods for Improving Long-Range Wireless Communication Between Extreme Terrain Vehicles

    Science.gov (United States)

    Johnson, Paul; Zarzhitsky, Dimitri

    2012-01-01

    Axel is an extreme terrain, two-wheeled rover designed to traverse rocky surface and sub-surface landscapes in order to conduct remote science experiments in hard-to-reach locations. The rover's design meets many requirements for a mobile research platform capable of reaching water seeps on Martian cliff sides. Axel was developed by the Mobility and Robotic Systems section at the Caltech Jet Propulsion Laboratory. Unique design criteria associated with extreme terrain mobility led to a unique rover solution, consisting of a central module, which provides long-term energy storage and space for large-scale science payloads, and two Axel rovers that can detach and explore extreme terrain locations that are inaccessible to conventional rovers. The envisioned mission could involve a four-wheeled configuration of Axel called 'DuAxel' that is able to traverse the benign, flattened terrain of a landing site and approach the edge of the targeted crater or cave, where it would deploy anchoring legs and detach one of the Axel rovers [1]. A tether provides a secure link between the Axel rover and the central module, acting as an anchor to allow Axel to descend along steep crater walls to collect data from the scientifically relevant sites along the water seeps or crater ledges. After completing its scientific mission Axel would hoist itself up to the central module and dock autonomously (using its on-board stereo cameras), allowing the once-again recombined DuAxel to travel to another location to repeat data collection.

  10. Optimizing APS ceramic coatings using response surface methods

    Energy Technology Data Exchange (ETDEWEB)

    Varacalle, D.J. Jr.; Wilson, G.C. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Steeper, T.J. [Savannah River Lab., Aiken, SC (United States); Nerz, J.E. [Metco/Perkin-Elmer, Westbury, NY (United States); Riggs, W.L. II [TubalCain Co., Loveland, OH (United States)

    1994-12-31

    This paper presents a statistical design of experiment study of air plasma-sprayed (APS) alumina-titania powder. In this study a prior coating design has been further optimized for the effects of horizontal speed, rotational speed, and powder feed rate. The analysis was conducted using response surface methodologies. This alumina-titania powder system is being used to fabricate heater tubes that emulate nuclear fuel tubes for use in thermal-hydraulic testing. The study investigated a substantial range of plasma processing conditions and their effect on the resultant coatings. The coatings were characterized by hardness tests, electrical tests, and optical metallography (including image analysis). Coating qualities are discussed with respect to dielectric strength, hardness, porosity, surface roughness, and microstructure. Attributes of the coatings are correlated with the changes in operating parameters. The study determined an optimized coating design for this specific application.

  11. Autonomous guided vehicles methods and models for optimal path planning

    CERN Document Server

    Fazlollahtabar, Hamed

    2015-01-01

      This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

  12. Scalable Optimization Methods for Distribution Networks With High PV Integration

    Energy Technology Data Exchange (ETDEWEB)

    Guggilam, Swaroop S.; Dall' Anese, Emiliano; Chen, Yu Christine; Dhople, Sairaj V.; Giannakis, Georgios B.

    2016-07-01

    This paper proposes a suite of algorithms to determine the active- and reactive-power setpoints for photovoltaic (PV) inverters in distribution networks. The objective is to optimize the operation of the distribution feeder according to a variety of performance objectives and ensure voltage regulation. In general, these algorithms take the form of the widely studied ac optimal power flow (OPF) problem. For the envisioned application domain, nonlinear power-flow constraints render pertinent OPF problems nonconvex and computationally intensive for large systems. To address these concerns, we formulate a quadratically constrained quadratic program (QCQP) by leveraging a linear approximation of the algebraic power-flow equations. Furthermore, simplification from QCQP to a linearly constrained quadratic program is provided under certain conditions. The merits of the proposed approach are demonstrated with simulation results that utilize realistic PV-generation and load-profile data for illustrative distribution-system test feeders.

  13. Mesh Adaptive Direct Search Methods for Constrained Nonsmooth Optimization

    Science.gov (United States)

    2012-02-24

    presence will extend our collaboration circle to mechanical engineering researchers. • We have initiated a new collaboration with A.D. Pelton from chemi... Published: 1. A.E. Gheribi, C. Audet, S. Le Digabel, E. Bélisle, C.W. Bale and A.D. Pelton. Calculating optimal conditions for alloy and process... Gheribi, C. Robelin, S. Le Digabel, C. Audet and A.D. Pelton. Calculating All Local Minima on Liquidus Surfaces Using the FactSage Software and Databases

  14. Portfolio Methods for Optimal Planning: an Empirical Analysis

    OpenAIRE

    Rizzini, Mattia; Fawcett, Chris; Vallati, Mauro; Gerevini, Alfonso Emilio; Hoos, Holger

    2015-01-01

    Combining the complementary strengths of several algorithms through portfolio approaches has been demonstrated to be effective in solving a wide range of AI problems. Notably, portfolio techniques have been prominently applied to suboptimal (satisficing) AI planning. Here, we consider the construction of sequential planner portfolios for (domain-independent) optimal planning. Specifically, we introduce four techniques (three of which are dynamic) for per-instance planner schedule generation ...

  15. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2015-01-01

    Full Text Available Unsupervised data classification (or clustering analysis) is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity and is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the results were compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

  16. Blade pitch optimization methods for vertical-axis wind turbines

    Science.gov (United States)

    Kozak, Peter

    Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and the basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output by 40%-100%, depending on the optimization technique, improving the turbine's competitiveness when compared with a commercially-available horizontal-axis turbine.

  17. An Optimal Power Flow (OPF) Method with Improved Power System Stability

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    This paper proposes an optimal power flow (OPF) method taking into account small signal stability as additional constraints. The particle swarm optimization (PSO) algorithm is adopted to realize the OPF process. The method is programmed in MATLAB and implemented on a nine-bus test power system which has large-scale wind power integration. The results show the ability of the proposed method to find optimal (or near-optimal) operating points in different cases. Based on these results, an analysis of the impacts of wind power integration on the system small signal stability has been conducted.

  18. Bionic optimization in structural design stochastically based methods to improve the performance of parts and assemblies

    CERN Document Server

    Gekeler, Simon

    2016-01-01

    The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware. Bionic Optimization means finding the best solution to a problem using methods found in nature. As Evolutionary Strategies and Particle Swarm Optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them. A set of sample applications shows how Bionic Optimization works in practice. From academic studies on simple fra...

  19. Numerical solution of the state-delayed optimal control problems by a fast and accurate finite difference θ-method

    Science.gov (United States)

    Hajipour, Mojtaba; Jajarmi, Amin

    2018-02-01

    Using Pontryagin's maximum principle for a time-delayed optimal control problem results in a system of coupled two-point boundary-value problems (BVPs) involving both time-advance and time-delay arguments. The analytical solution of this advance-delay two-point BVP is extremely difficult, if not impossible. This paper provides a discrete general form of the numerical solution for the derived advance-delay system by applying a finite difference θ-method. This method is also implemented for the infinite-time horizon time-delayed optimal control problems by using a piecewise version of the θ-method. A matrix formulation and the error analysis of the suggested technique are provided. The new scheme is accurate, fast and very effective for the optimal control of linear and nonlinear time-delay systems. Various types of finite- and infinite-time horizon problems are included to demonstrate the accuracy, validity and applicability of the new technique.

  20. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    Science.gov (United States)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.

  1. Interior Point Method Evaluation for Reactive Power Flow Optimization in the Power System

    Directory of Open Access Journals (Sweden)

    Zbigniew Lubośny

    2013-03-01

    Full Text Available The paper verifies the performance of an interior point method for reactive power flow optimization in the power system. The study was conducted on a 28-node CIGRE system, using the interior point method optimization procedures implemented in the Power Factory software.

  2. Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre

    DEFF Research Database (Denmark)

    Steensgaard, Randi; Dahl Hoffmann, Dorte

    “Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre”, The Nordic Spinal Cord Society (NoSCoS) Meeting, Trondheim.

  3. Damage approach : A new method for topology optimization with local stress constraints

    NARCIS (Netherlands)

    Verbart, A.; Langelaar, M.; Van Keulen, A.

    2015-01-01

    In this paper, we propose a new method for topology optimization with local stress constraints. In this method, material in which a stress constraint is violated is considered as damaged. Since damaged material will contribute less to the overall performance of the structure, the optimizer will promote designs in which no material is damaged, that is, designs that satisfy the stress constraints.

  4. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion of so...

  5. An intelligent scheduling method based on improved particle swarm optimization algorithm for drainage pipe network

    Science.gov (United States)

    Luo, Yaqi; Zeng, Bi

    2017-08-01

    This paper studies the drainage routing problem in drainage pipe networks and proposes an intelligent scheduling method. The method involves the design of an improved particle swarm optimization algorithm, the establishment of a corresponding model of the pipe network, and the use of the improved algorithm to find the optimum drainage route in the current environment.

  6. Improving programming skills of Mechanical Engineering students by teaching in C# multi-objective optimizations methods

    Directory of Open Access Journals (Sweden)

    Florea Adrian

    2017-01-01

    Full Text Available Designing an optimized suspension system that meets the main functions of comfort, safety and handling on poor quality roads is a goal for researchers. This paper presents a software development guide for designers of suspension systems with less programming skills that will enable them to implement their own optimization methods, improving on traditional methods by using their domain knowledge.

  7. Vascular blood flow reconstruction from tomographic projections with the adjoint method and receding optimal control strategy

    Science.gov (United States)

    Sixou, B.; Boissel, L.; Sigovan, M.

    2017-10-01

    In this work, we study the measurement of blood velocity with contrast enhanced computed tomography. The inverse problem is formulated as an optimal control problem with the transport equation as constraint. The velocity field is reconstructed with a receding optimal control strategy and the adjoint method. The convergence of the method is fast.

  8. Method for computing the optimal signal distribution and channel capacity.

    Science.gov (United States)

    Shapiro, E G; Shapiro, D A; Turitsyn, S K

    2015-06-15

    An iterative method for computing the channel capacity of both discrete and continuous input, continuous output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut–Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining advantages of both the Blahut–Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
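
The classical Blahut–Arimoto iteration for a discrete memoryless channel is short enough to sketch; this is the textbook baseline the abstract compares against, not the authors' new method.

```python
import numpy as np

def mutual_information_bits(r, P):
    """I(X;Y) in bits for input law r and channel matrix P[x, y] = p(y|x)."""
    py = r @ P                                   # output law p(y)
    ratio = np.where(P > 0, P / py, 1.0)         # p(y|x)/p(y); 1 gives log 0
    return float((r[:, None] * P * np.log2(ratio)).sum())

def blahut_arimoto(P, tol=1e-12, max_iter=10_000):
    """Blahut-Arimoto iteration for the capacity of a discrete memoryless
    channel.  Returns (capacity in bits, optimal input distribution)."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                      # start from the uniform input
    for _ in range(max_iter):
        q = r[:, None] * P
        q /= q.sum(axis=0, keepdims=True)        # posterior q(x|y)
        logq = np.where(P > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r_new = np.exp((P * logq).sum(axis=1))   # multiplicative BA update
        r_new /= r_new.sum()
        done = np.abs(r_new - r).max() < tol
        r = r_new
        if done:
            break
    return mutual_information_bits(r, P), r

# Binary symmetric channel, crossover 0.1: capacity = 1 - H2(0.1) ≈ 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
C, r_opt = blahut_arimoto(bsc)
print(f"capacity ≈ {C:.6f} bits, optimal input = {r_opt}")
```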

  9. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available This study considers the numerical sensitivity calculation for discontinuous gradient-only optimization problems using the complex-step method. The complex-step method was initially introduced to differentiate analytical functions in the late 1960s...
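
The complex-step trick itself is one line: for a real-analytic f, f'(x) ≈ Im f(x + ih)/h, with no subtractive cancellation, so h can be taken extremely small. A minimal sketch:

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """First derivative via the complex step: f'(x) ≈ Im f(x + ih) / h.
    No subtraction of nearly equal numbers occurs, so h may be tiny."""
    return f(complex(x, h)).imag / h

# Any composition of complex-analytic operations works unchanged:
f = lambda x: cmath.exp(x) / cmath.sqrt(x**3 + 1)
print(complex_step_derivative(f, 1.5))   # near machine-precision f'(1.5)
```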

  10. An optimal adaptive finite element method for the Stokes problem

    NARCIS (Netherlands)

    Kondratyuk, Y.; Stevenson, R.

    2008-01-01

    A new adaptive finite element method for solving the Stokes equations is developed, which is shown to converge with the best possible rate. The method consists of 3 nested loops. The outermost loop consists of an adaptive finite element method for solving the pressure from the (elliptic) Schur

  11. Trip optimization system and method for a train

    Science.gov (United States)

    Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung

    2017-08-15

    A system for operating a train having one or more locomotive consists with each locomotive consist comprising one or more locomotives, the system including a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterization element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.

  12. Novel Computational Iterative Methods with Optimal Order for Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    F. Soleymani

    2011-01-01

    Full Text Available This paper contributes a very general class of two-point iterative methods without memory for solving nonlinear equations. The class of methods is developed using the weight function approach. Per iteration, each method of the class includes two evaluations of the function and one of its first-order derivative. The analytical study of the main theorem is presented in detail to show the fourth order of convergence. Furthermore, it is shown that many of the existing fourth-order methods without memory are members of this developed class. Finally, numerical examples are taken into account to manifest the accuracy of the derived methods.
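
A classical member of this two-point family, with exactly two function evaluations and one derivative evaluation per iteration, is Ostrowski's fourth-order method; a sketch follows (the example polynomial is ours):

```python
def ostrowski_step(f, df, x):
    """One step of Ostrowski's fourth-order method: two evaluations of f
    and one of f' per iteration, the same count as the class above."""
    dfx = df(x)                       # the single derivative evaluation
    fx = f(x)                         # first function evaluation
    y = x - fx / dfx                  # Newton predictor
    fy = f(y)                         # second function evaluation
    return y - (fx / (fx - 2.0 * fy)) * (fy / dfx)

# Example: the real root of x^3 - 2x - 5, near 2.0945514...
f = lambda x: x**3 - 2.0 * x - 5.0
df = lambda x: 3.0 * x**2 - 2.0
x = 2.0
for _ in range(4):
    x = ostrowski_step(f, df, x)
print(x, f(x))
```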

  13. A primal-dual interior point method for large-scale free material optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large-scale problems. The number of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...

  14. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. This thesis consists of an introduction, which is divided into five chapters, followed by 14 scientific articles of which 12 are published in international scientific journals and two are submitted. The first chapter in the introduction presents a brief overview of structural topology optimization and motivates ... methods. The methods are often based on the concept of divide-and-conquer. Despite the proposed theoretical and numerical advances, this thesis clearly indicates that solving large-scale structural topology optimization problems with discrete design variables to proven global optimality is currently...

  15. Methods and apparatus for use with extreme ultraviolet light having contamination protection

    Energy Technology Data Exchange (ETDEWEB)

    Chilese, Francis C.; Torczynski, John R.; Garcia, Rudy; Klebanoff, Leonard E.; Delgado, Gildardo R.; Rader, Daniel J.; Geller, Anthony S.; Gallis, Michail A.

    2016-07-12

    An apparatus for use with extreme ultraviolet (EUV) light comprising A) a duct having a first end opening, a second end opening, and an intermediate opening between the first end opening and the second end opening, B) an optical component disposed to receive EUV light from the second end opening or to send light through the second end opening, and C) a source of low pressure gas at a first pressure to flow through the duct, the gas having a high transmission of EUV light, fluidly coupled to the intermediate opening. In addition to, or instead of, the gas flow, the apparatus may have A) a low pressure gas with a heat control unit thermally coupled to at least one of the duct and the optical component and/or B) a voltage device to generate voltage between a first portion and a second portion of the duct with a grounded insulative portion therebetween.

  16. A comparison of tracking methods for extreme cyclones in the Arctic basin

    Directory of Open Access Journals (Sweden)

    Ian Simmonds

    2014-09-01

    Full Text Available Dramatic climate changes have occurred in recent decades over the Arctic region, and very noticeably in near-surface warming and reductions in sea ice extent. In a climatological sense, Arctic cyclone behaviour is linked to the distributions of lower troposphere temperature and sea ice, and hence the monitoring of storms can be seen as an important component of the analysis of Arctic climate. The analysis of cyclone behaviour, however, is not without ambiguity, and different cyclone identification algorithms can lead to divergent conclusions. Here we analyse a subset of Arctic cyclones with 10 state-of-the-art cyclone identification schemes applied to the ERA-Interim reanalysis. The subset is comprised of the five most intense (defined in terms of central pressure) Arctic cyclones for each of the 12 calendar months over the 30-yr period from 1 January 1979 to 31 March 2009. There is a considerable difference, typically 5–10 hPa, between the central pressures diagnosed by the algorithms. By contrast, there is substantial agreement as to the location of the centre of these extreme storms. The cyclone tracking algorithms also display some differences in the evolution and life cycle of these storms, while overall finding them to be quite long-lived. For all but six of the 60 storms an intense tropopause polar vortex is identified within 555 km of the surface system. The results presented here highlight some significant differences between the outputs of the algorithms, and hence point to the value of using multiple identification schemes in the study of cyclone behaviour. Overall, however, the algorithms reached a very robust consensus on most aspects of the behaviour of these very extreme cyclones in the Arctic basin.

  17. A Pareto-optimal refinement method for protein design scaffolds.

    Directory of Open Access Journals (Sweden)

    Lucas Gregorio Nivón

    Full Text Available Computational design of protein function involves a search for amino acids with the lowest energy subject to a set of constraints specifying function. In many cases a set of natural protein backbone structures, or "scaffolds", are searched to find regions where functional sites (an enzyme active site, ligand binding pocket, protein-protein interaction region, etc.) can be placed, and the identities of the surrounding amino acids are optimized to satisfy functional constraints. Input native protein structures almost invariably have regions that score very poorly with the design force field, and any design based on these unmodified structures may result in mutations away from the native sequence solely as a result of the energetic strain. Because the input structure is already a stable protein, it is desirable to keep the total number of mutations to a minimum and to avoid mutations resulting from poorly-scoring input structures. Here we describe a protocol using cycles of minimization with combined backbone/sidechain restraints that is Pareto-optimal with respect to RMSD to the native structure and energetic strain reduction. The protocol should be broadly useful in the preparation of scaffold libraries for functional site design.

  18. A Pareto-optimal refinement method for protein design scaffolds.

    Science.gov (United States)

    Nivón, Lucas Gregorio; Moretti, Rocco; Baker, David

    2013-01-01

    Computational design of protein function involves a search for amino acids with the lowest energy subject to a set of constraints specifying function. In many cases a set of natural protein backbone structures, or "scaffolds", are searched to find regions where functional sites (an enzyme active site, ligand binding pocket, protein-protein interaction region, etc.) can be placed, and the identities of the surrounding amino acids are optimized to satisfy functional constraints. Input native protein structures almost invariably have regions that score very poorly with the design force field, and any design based on these unmodified structures may result in mutations away from the native sequence solely as a result of the energetic strain. Because the input structure is already a stable protein, it is desirable to keep the total number of mutations to a minimum and to avoid mutations resulting from poorly-scoring input structures. Here we describe a protocol using cycles of minimization with combined backbone/sidechain restraints that is Pareto-optimal with respect to RMSD to the native structure and energetic strain reduction. The protocol should be broadly useful in the preparation of scaffold libraries for functional site design.

  19. METHOD FOR OPTIMAL RESOLUTION OF MULTI-AIRCRAFT CONFLICTS IN THREE-DIMENSIONAL SPACE

    Directory of Open Access Journals (Sweden)

    Denys Vasyliev

    2017-03-01

    Full Text Available Purpose: The risk of critical proximities of several aircraft and the appearance of multi-aircraft conflicts increase under current conditions of high dynamics and density of air traffic. A pressing problem is the development of methods for optimal multi-aircraft conflict resolution that provide the synthesis of conflict-free trajectories in three-dimensional space. Methods: A method for optimal resolution of multi-aircraft conflicts using heading, speed and altitude change maneuvers has been developed. Optimality criteria are flight regularity, flight economy and the complexity of maneuvering. The method provides the sequential synthesis of the Pareto-optimal set of combinations of conflict-free flight trajectories using multi-objective dynamic programming and the selection of the optimal combination using a convolution of the optimality criteria. Within the described method the following are defined: the procedure for determining the combinations of conflict-free aircraft states that define the combinations of Pareto-optimal trajectories; and the limitations on discretization of the conflict resolution process for ensuring the absence of unobservable separation violations. Results: The analysis of the proposed method is performed using computer simulation, the results of which show that the synthesized combination of conflict-free trajectories ensures multi-aircraft conflict avoidance and complies with the defined optimality criteria. Discussion: The proposed method can be used for the development of new automated air traffic control systems, airborne collision avoidance systems, intelligent air traffic control simulators, and for research activities.

  20. Synthetic lepidocrocite for phosphorous removal from reclaimed water: optimization using convex optimization method and successive adsorption in fixed bed column.

    Science.gov (United States)

    Wang, Qin; Zhang, Bo; Wang, Muhua; Wu, Jiang; Li, Yuyou; Gao, Yingxin; Li, Weicheng; Jin, Yong

    2016-11-01

    The batch and column experimental studies on the adsorption of phosphate onto synthetic lepidocrocite from reclaimed water are presented. A second-order polynomial model in the batch study is successfully applied to describe phosphate immobilization performance using the response surface methodology. The proposed model is further linked with the convex optimization method to determine the optimal variables for maximum phosphate uptake, since convex optimization is a global optimization method. Consequently, under the optimal parameters, determined as pH of 3.88, an initial P concentration of 0.66 mg/L, and a dosage of 0.15 g, the corresponding phosphate removal efficiency can reach up to 97.4%. Adsorption behavior is further revealed by X-ray photoelectron spectroscopy observation and FTIR spectra. A comparative column study indicates that co-existing competing anions in artificial reclaimed water do not significantly interfere with P adsorption under the neutral condition. The experimental results highlight that synthetic lepidocrocite is an excellent adsorbent for sustainable P removal from reclaimed water.

  1. First-order Convex Optimization Methods for Signal and Image Processing

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm

    2012-01-01

    In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques which can be used with first-order methods, such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...

  2. Control and optimization system and method for chemical looping processes

    Science.gov (United States)

    Lou, Xinsheng; Joshi, Abhinaya; Lei, Hao

    2015-02-17

    A control system for optimizing a chemical loop system includes one or more sensors for measuring one or more parameters in a chemical loop. The sensors are disposed on or in a conduit positioned in the chemical loop. The sensors generate one or more data signals representative of an amount of solids in the conduit. The control system includes a data acquisition system in communication with the sensors and a controller in communication with the data acquisition system. The data acquisition system receives the data signals and the controller generates the control signals. The controller is in communication with one or more valves positioned in the chemical loop. The valves are configured to regulate a flow of the solids through the chemical loop.

  3. Optimal Recognition Method of Human Activities Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Oniga Stefan

    2015-12-01

    Full Text Available The aim of this research is an exhaustive analysis of the various factors that may influence the recognition rate of human activity using wearable sensor data. We made a total of 1674 simulations on a publicly released human activity database by a group of researchers from the University of California at Berkeley. In previous research, we analyzed the influence of the number of sensors and their placement. In the present research we have examined the influence of the number of sensor nodes, the type of sensor node, preprocessing algorithms, the type of classifier and its parameters. The final purpose is to find the optimal setup for the best recognition rates with the lowest hardware and software costs.

  4. Mathematical foundation of the optimization-based fluid animation method

    DEFF Research Database (Denmark)

    Erleben, Kenny; Misztal, Marek Krzysztof; Bærentzen, Jakob Andreas

    2011-01-01

    We present the mathematical foundation of a fluid animation method for unstructured meshes. Key contributions not previously treated are the extension to include diffusion forces and higher order terms of non-linear force approximations. In our discretization we apply a fractional step method to handle non-linear force terms such as surface tension.

  5. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    Determination of the types and amounts of clay minerals in soil is important in environmental, agricultural, and geological investigations. Many reliable methods have been established to identify clay mineral types. However, no reliable method for quantitative analysis of clay minerals has been established so far. In this study ...
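
One common way to cast quantitative mineral analysis as an optimization problem (our illustration, not necessarily this study's procedure) is nonnegative least-squares unmixing of a measured pattern against reference patterns of candidate clay minerals:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative unmixing: each column of A is a reference pattern of one
# candidate clay mineral (synthetic here), b is the measured soil pattern,
# and the nonnegative weights x estimate the mineral fractions.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(200, 3)))             # fake reference patterns
true_fractions = np.array([0.6, 0.3, 0.1])
b = A @ true_fractions + 0.01 * rng.normal(size=200)

x, residual_norm = nnls(A, b)                     # solve min ||Ax - b||, x >= 0
x /= x.sum()                                      # normalize to fractions
print("estimated fractions:", np.round(x, 3))
```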

  6. Using Global Optimization Methods for Acoustic Source Localization

    NARCIS (Netherlands)

    Malgoezar, A.M.N.; Snellen, M.; Simons, D.G.; Sijtsma, P.

    2016-01-01

    Conventional beamforming is a common method to localize sound sources with a microphone array. The method, which is based on the delay-and-sum beamforming, provides an estimate value for the source strength at a given spatial position. It suffers from low spatial resolution at low frequencies, high

  7. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Hundecha, Y.; Lawrence, D.

    ... be directly used in hydrological models. Hence, statistical downscaling is necessary to address climate change impacts at the catchment scale. This study has been carried out within working group 4 in the FloodFreq COST Action. It compares eight statistical downscaling methods often used in climate change impact studies. Four methods are based on change factors and four are bias correction methods. The change factor methods perturb the observations according to changes in precipitation properties estimated from the Regional Climate Models (RCMs). The bias correction methods correct the output from the RCMs. The eight methods are used to downscale precipitation output from fifteen RCMs from the ENSEMBLES project for eleven catchments in Europe. The performance of the bias correction methods depends on the catchment, but in all cases they represent an improvement compared to RCM output. The overall...

  8. An Intelligent Optimization Method for Vortex-Induced Vibration Reducing and Performance Improving in a Large Francis Turbine

    Directory of Open Access Journals (Sweden)

    Xuanlin Peng

    2017-11-01

    Full Text Available In this paper, a new methodology is proposed to reduce the vortex-induced vibration (VIV) and improve the performance of the stay vane in a 200-MW Francis turbine. The process can be divided into two parts. Firstly, a diagnosis method for stay vane vibration based on field experiments and a finite element method (FEM) is presented. It is found that the resonance between the Kármán vortex and the stay vane is the main cause of the undesired vibration. Then, we focus on establishing an intelligent optimization model of the stay vane's trailing edge profile. To this end, an approach combining factorial experiments, extreme learning machine (ELM) and particle swarm optimization (PSO) is implemented. Three kinds of improved profiles of the stay vane are proposed and compared. Finally, the profile with a Donaldson trailing edge is adopted as the best solution for the stay vane, and verifications such as computational fluid dynamics (CFD) simulations, structural analysis and fatigue analysis are performed to validate the optimized geometry.

  9. The Application of PSO-AFSA Method in Parameter Optimization for Underactuated Autonomous Underwater Vehicle Control

    Directory of Open Access Journals (Sweden)

    Chunmeng Jiang

    2017-01-01

    Full Text Available In consideration of the difficulty in determining the parameters of underactuated autonomous underwater vehicles in multi-degree-of-freedom motion control, a hybrid method that combines particle swarm optimization (PSO) with the artificial fish school algorithm (AFSA) is proposed in this paper. The optimization process of the PSO-AFSA method is firstly introduced. With the control simulation models in the horizontal plane and vertical plane, the PSO-AFSA method is elaborated when applied in control parameter optimization for an underactuated autonomous underwater vehicle. Both simulation tests and field trials were carried out to prove the efficiency of the PSO-AFSA method in underactuated autonomous underwater vehicle control parameter optimization. The optimized control parameters showed admirable control quality by enabling the underactuated autonomous underwater vehicle to reach the desired states with fast convergence.
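
For orientation, a canonical global-best PSO core is sketched below; the AFSA moves that the hybrid adds when particles stagnate are omitted, and the two-gain quadratic objective is only a placeholder for the AUV control-quality criterion used in the paper.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO; hybrids such as PSO-AFSA replace the
    stagnation behaviour with fish-school moves, omitted here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep particles in bounds
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Tune two controller gains on a toy quadratic error surface.
gains, err = pso(lambda k: (k[0] - 2.0)**2 + (k[1] - 0.5)**2,
                 bounds=[(0, 10), (0, 5)])
print(gains, err)
```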

  10. A simplified method for the cultivation of extreme anaerobic Archaea based on the use of sodium sulfite as reducing agent.

    Science.gov (United States)

    Rothe, O; Thomm, M

    2000-08-01

    The extreme sensitivity of many Archaea to oxygen is a major obstacle for their cultivation in the laboratory and the development of archaeal genetic exchange systems. The technique of Balch and Wolfe (1976) is suitable for the cultivation of anaerobic Archaea but involves time-consuming procedures such as the use of air locks and glove boxes. We describe here a procedure for the cultivation of anaerobic Archaea that is more convenient and faster and allows the preparation of liquid media without the use of an anaerobic chamber. When the reducing agent sodium sulfide (Na2S) was replaced by sodium sulfite (Na2SO3), anaerobic media could be prepared without protection from oxygen outside an anaerobic chamber. Exchange of the headspace of serum bottles by appropriate gases was sufficient to maintain anaerobic conditions in the culture media. Organisms that were unable to utilize sulfite as a source for cellular sulfur were supplemented with hydrogen sulfide. H2S was simply added to the headspace of serum bottles by a syringe. The use of H2S as a source for sulfur minimized the precipitation of cations by sulfide. Representatives of 12 genera of anaerobic Archaea studied here were able to grow in media prepared by this procedure. For the extremely oxygen-sensitive organism Methanococcus thermolithotrophicus, we show that plates could be prepared outside an anaerobic chamber when sulfite was used as reducing agent. The application of this method may facilitate the cultivation and handling of extreme anaerobic Archaea considerably.

  11. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    Science.gov (United States)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that resemble well the results from previous 2D or density-based studies.

  12. Towards an Optimized Method of Olive Tree Crown Volume Measurement

    Directory of Open Access Journals (Sweden)

    Antonio Miranda-Fuentes

    2015-02-01

    Full Text Available Accurate crown characterization of large isolated olive trees is vital for adjusting spray doses in three-dimensional crop agriculture. Among the many methodologies available, laser sensors have proved to be the most reliable and accurate. However, their operation is time consuming and requires specialist knowledge, and so a simpler crown characterization method is required. To this end, three methods were evaluated and compared with LiDAR measurements to determine their accuracy: the Vertical Crown Projected Area method (VCPA), the Ellipsoid Volume method (VE) and the Tree Silhouette Volume method (VTS). Trials were performed in three different kinds of olive tree plantations: intensive, adapted one-trunked traditional and traditional. In total, 55 trees were characterized. Results show that all three methods are appropriate for estimating the crown volume, reaching high coefficients of determination: R2 = 0.783, 0.843 and 0.824 for VCPA, VE and VTS, respectively. However, discrepancies arise when evaluating tree plantations separately, especially for traditional trees. Here, correlations between LiDAR volume and other parameters showed that the Mean Vector calculated for the VCPA method had the highest correlation for traditional trees, and thus its use in traditional plantations is highly recommended.

  13. Towards an Optimized Method of Olive Tree Crown Volume Measurement

    Science.gov (United States)

    Miranda-Fuentes, Antonio; Llorens, Jordi; Gamarra-Diezma, Juan L.; Gil-Ribes, Jesús A.; Gil, Emilio

    2015-01-01

    Accurate crown characterization of large isolated olive trees is vital for adjusting spray doses in three-dimensional crop agriculture. Among the many methodologies available, laser sensors have proved to be the most reliable and accurate. However, their operation is time consuming and requires specialist knowledge and so a simpler crown characterization method is required. To this end, three methods were evaluated and compared with LiDAR measurements to determine their accuracy: Vertical Crown Projected Area method (VCPA), Ellipsoid Volume method (VE) and Tree Silhouette Volume method (VTS). Trials were performed in three different kinds of olive tree plantations: intensive, adapted one-trunked traditional and traditional. In total, 55 trees were characterized. Results show that all three methods are appropriate to estimate the crown volume, reaching high coefficients of determination: R2 = 0.783, 0.843 and 0.824 for VCPA, VE and VTS, respectively. However, discrepancies arise when evaluating tree plantations separately, especially for traditional trees. Here, correlations between LiDAR volume and other parameters showed that the Mean Vector calculated for VCPA method showed the highest correlation for traditional trees, thus its use in traditional plantations is highly recommended. PMID:25658396

  14. Optimization of axial enrichment distribution for BWR fuels using scoping libraries and block coordinate descent method

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2017-03-15

    Highlights: • An optimization method for axial enrichment distribution in a BWR fuel was developed. • Block coordinate descent method is employed to search for optimal solution. • Scoping libraries are used to reduce computational effort. • Optimization search space consists of enrichment difference parameters. • Capability of the method to find optimal solution is demonstrated. - Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated by using SIMULATE-3, and the cross section data required for the SIMULATE-3 calculation are generated by using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and core critical requirement through the use of an objective function. The use of scoping libraries effectively reduced the number of CASMO-4 calculations, from 85 to 24, in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of...
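
A generic block coordinate descent loop of the kind described, sketched on a toy continuous objective; the grid-per-block search stands in for the discrete enrichment-difference search space, and the objective is a placeholder for the SIMULATE-3 based score.

```python
import numpy as np

def block_coordinate_descent(f, x0, block_grids, max_sweeps=50):
    """Optimize one coordinate block at a time over its candidate grid,
    holding the others fixed, until a full sweep yields no improvement."""
    x = list(x0)
    for _ in range(max_sweeps):
        improved = False
        for i, grid in enumerate(block_grids):   # one block per coordinate here
            best_v, best_f = x[i], f(x)
            for v in grid:
                x_try = x.copy()
                x_try[i] = v
                f_try = f(x_try)
                if f_try < best_f:
                    best_v, best_f, improved = v, f_try, True
            x[i] = best_v
        if not improved:
            break
    return x, f(x)

# Toy 3-parameter objective standing in for the core-quality score.
f = lambda x: (x[0] - 0.4)**2 + (x[1] + 0.2)**2 + (x[2] - 1.0)**2
grids = [np.linspace(-1, 1, 21)] * 3
print(block_coordinate_descent(f, [0.0, 0.0, 0.0], grids))
```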

  15. Simultaneous modeling and optimization of nonlinear simulated moving bed chromatography by the prediction-correction method.

    Science.gov (United States)

    Bentley, Jason; Sloan, Charlotte; Kawajiri, Yoshiaki

    2013-03-08

    This work demonstrates a systematic prediction-correction (PC) method for simultaneously modeling and optimizing nonlinear simulated moving bed (SMB) chromatography. The PC method uses model-based optimization, SMB startup data, isotherm model selection, and parameter estimation to iteratively refine model parameters and find optimal operating conditions in a matter of hours, ensuring that high purity constraints are met and optimal productivity is achieved. The PC algorithm proceeds until the SMB process is optimized, without manual tuning. In case studies, it is shown that a nonlinear isotherm model and parameter values are determined reliably using SMB startup data. In one case study, a nonlinear SMB system is optimized after only two changes of operating conditions following the PC algorithm. The refined isotherm models are validated by frontal analysis and perturbation analysis.

  16. Natural frequency optimization of structures using a soft-kill BESO method

    Science.gov (United States)

    Huang, Xiaodong; Xie, Yi Min

    2010-06-01

    Frequency optimization is of great importance in the design of machines and structures subjected to dynamic loading. When the natural frequencies of considered structures are maximized using the Solid Isotropic Material with Penalization (SIMP) model, artificial localized modes may occur in areas where elements are assigned with low density values. In this paper, a modified SIMP model is developed to effectively avoid the artificial modes. Based on this model, a new bi-directional evolutionary structural optimization (BESO) method combined with rigorous optimality criteria is developed for topology frequency optimization problems. Numerical results show that the proposed BESO method is efficient and convergent and solid-void or bi-material optimal solutions can be achieved for a variety of frequency optimization problems of continuum structures.

  17. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have become increasingly popular in the spatial econometrics community. A limiting factor in their use, however, is computational complexity: both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and a shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
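
    The constant-memory idea can be illustrated with a small sketch: instead of materializing an n-by-n distance matrix, pairwise distances are streamed into a fixed number of histogram bins, which is the form in which kernel-smoothed indices such as the D&O-Index consume them. The bin count and the 2-D Euclidean setting are assumptions for the example, and this naive version remains quadratic in time; the paper's algorithm additionally shortens the running time.

        import numpy as np

        def distance_histogram(points, n_bins=500, d_max=None):
            """Accumulate all pairwise distances of 2-D points into n_bins fixed bins."""
            points = np.asarray(points, dtype=float)
            if d_max is None:
                span = points.max(axis=0) - points.min(axis=0)
                d_max = float(np.hypot(span[0], span[1]))      # upper bound on any pairwise distance
            counts = np.zeros(n_bins)                          # fixed-size accumulator, independent of n
            for i in range(len(points) - 1):
                d = np.hypot(*(points[i + 1:] - points[i]).T)  # distances from point i to later points
                idx = np.minimum((d / d_max * n_bins).astype(int), n_bins - 1)
                np.add.at(counts, idx, 1)                      # stream into the histogram
            return counts, d_max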

  18. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    We present a comparison of several feature ranking methods on two real datasets. Six ranking methods are considered, falling into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models: IB1, Naive Bayes, the C4.5 decision tree and the RBF network. We show that the choice of ranking method can be important for classification accuracy: in our experiments, different ranking methods combined with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, to be confident that the subset of features giving the highest accuracy has been selected, the use of several different indices is recommended.
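
    As a concrete illustration of how rankers from the two families can disagree, the sketch below scores the same dataset with one statistical and one entropy-based ranker; the dataset and the two particular rankers are assumptions for the example, not the six methods used in the study.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.feature_selection import chi2, mutual_info_classif
        from sklearn.preprocessing import MinMaxScaler

        X, y = load_breast_cancer(return_X_y=True)
        X = MinMaxScaler().fit_transform(X)                      # chi2 requires non-negative features

        chi2_scores, _ = chi2(X, y)                              # statistical ranker
        mi_scores = mutual_info_classif(X, y, random_state=0)    # entropy-based ranker

        # The two top-5 lists can differ substantially, which is why consulting
        # several indices before fixing a feature subset is advisable.
        print(np.argsort(chi2_scores)[::-1][:5])                 # top-5 by chi-squared
        print(np.argsort(mi_scores)[::-1][:5])                   # top-5 by mutual information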

  19. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization

    OpenAIRE

    San-Yang Liu; Yuan-Yuan Huang

    2014-01-01

    This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfy the descent condition $g_k^T d_k \le -(1 - 1/(4\theta_k))\|g_k\|^2$ (with $\theta_k > 1/4$) and which are strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization ...
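
    A minimal sketch of a conjugate gradient loop that enforces a sufficient descent condition of this form is shown below. The PR+ update, the restart rule, and the Armijo backtracking line search (used here in place of the weak Wolfe search for brevity) are illustrative choices, not the specific methods proposed in the paper; theta mirrors the $\theta_k$ of the condition.

        import numpy as np

        def armijo(f, x, d, g, t=1.0, c=1e-4, rho=0.5):
            while f(x + t * d) > f(x) + c * t * (g @ d):      # Armijo sufficient-decrease test
                t *= rho
            return t

        def descent_cg(f, grad, x, theta=1.0, tol=1e-6, max_iter=1000):
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d > -(1 - 1 / (4 * theta)) * (g @ g):  # descent condition violated:
                    d = -g                                    # restart with steepest descent
                t = armijo(f, x, d, g)
                x_new = x + t * d
                g_new = grad(x_new)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ conjugacy coefficient
                x, g, d = x_new, g_new, -g_new + beta * d
            return x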

  20. Efficient Optimization Methods for Communication Network Planning and Assessment

    OpenAIRE

    Kiese, Moritz

    2010-01-01

    In this work, we develop efficient mathematical planning methods to design communication networks. First, we examine future technologies for optical backbone networks. As new, more intelligent nodes cause higher dynamics in the transport networks, fast planning methods are required. To this end, we develop a heuristic planning algorithm. The evaluation of the cost-efficiency of new, adaptive transmission techniques comprises the second topic of this section. In the second part of this work, ...

  1. MATHEMATICAL OPTIMIZATION METHODS TO ESTABLISH ACTIVE PHASES ON HETEROGENEOUS CATALYSIS: CASE OF BULK TRANSITION METAL SULPHIDES

    Directory of Open Access Journals (Sweden)

    Iván Machín

    2015-03-01

    This paper presents a set of procedures based on mathematical optimization methods to establish optimal active sulphide phases with higher HDS activity. It proposes a list of active phases as a guide for orienting experimental work in the search for new catalysts to optimize the HDS process. The studies in this paper establish that the Co-S, Cr-S, Nb-S and Ni-S systems have the greatest potential to improve HDS activity.

  2. METHODS AND PRINCIPLES OF OPTIMIZATION SPECIFIC TO THE DOMAIN OF EQUIPMENT AND MANUFACTURING PROCESSES

    Directory of Open Access Journals (Sweden)

    Radu Virgil GRIGORIU

    2011-11-01

    The objectives of industrial product manufacturers are generally to manufacture products of a high quality level, in less time and with maximum economic efficiency. These objectives can generally be achieved by optimizing the parameters of the manufacturing processes and the technological equipment. Optimizing these parameters requires applying a series of optimization methods and principles that allow the best solution to be identified and established from a variety of alternatives.

  3. Iterative computation of the optimal H(infinity) norm by using two-Riccati-equation method

    Science.gov (United States)

    Chang, B. C.; Li, X. P.; Yeh, H. H.; Banda, S. S.

    1990-01-01

    The two-Riccati-equation solution to a standard H(infinity) control problem can be used to characterize all possible stabilizing optimal or suboptimal H(infinity) controllers, provided the optimal or suboptimal H(infinity) norm is available. An iterative algorithm for computing the optimal H(infinity) norm is proposed. The algorithm employs fixed-point, double-secant and bisection steps to guarantee superlinear convergence.
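
    For intuition, the sketch below bisects on gamma using the bounded real lemma: for a stable system (A, B, C) with D = 0, gamma exceeds the H-infinity norm exactly when the associated Hamiltonian matrix has no purely imaginary eigenvalues. This computes the norm of a fixed plant rather than the optimal closed-loop norm of the synthesis problem, and it omits the paper's fixed-point and double-secant accelerations, so it is only a schematic of the bisection ingredient.

        import numpy as np

        def hinf_norm(A, B, C, lo=1e-6, hi=1e6, tol=1e-4):
            """Bisection on gamma for a stable (A, B, C) system with D = 0."""
            def gamma_is_upper_bound(gamma):
                H = np.block([[A, (B @ B.T) / gamma**2],
                              [-C.T @ C, -A.T]])
                # gamma bounds the norm iff no eigenvalue sits on the imaginary axis
                return np.all(np.abs(np.linalg.eigvals(H).real) > 1e-8)
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if gamma_is_upper_bound(mid):
                    hi = mid                 # shrink the upper bound
                else:
                    lo = mid                 # gamma was below the norm
            return hi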

  4. Optimal design of a DC MHD pump by simulated annealing method

    Directory of Open Access Journals (Sweden)

    Bouali Khadidja

    2014-01-01

    In this paper a design methodology for a magnetohydrodynamic (MHD) pump is proposed. The methodology is based on a direct interpretation of the design problem as an optimization problem. The simulated annealing method is used for the optimal design of a DC MHD pump. The optimization procedure uses an objective function, here the minimization of the pump's mass; the constraints are both geometric and electromagnetic in type. The obtained results are reported.
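
    A generic simulated annealing loop of the kind employed above is sketched here; the mass objective and the variable bounds are placeholders, and electromagnetic constraints would in practice be folded into the objective as penalty terms.

        import math, random

        def simulated_annealing(mass, lower, upper, n_iter=5000, t0=1.0, alpha=0.999):
            """Minimize mass(x) over box bounds with a Metropolis acceptance rule."""
            x = [random.uniform(l, u) for l, u in zip(lower, upper)]
            fx = mass(x)
            best, f_best, t = list(x), fx, t0
            for _ in range(n_iter):
                i = random.randrange(len(x))               # perturb one design variable
                cand = list(x)
                step = random.gauss(0, 0.1 * (upper[i] - lower[i]))
                cand[i] = min(max(cand[i] + step, lower[i]), upper[i])   # respect geometric bounds
                f_cand = mass(cand)
                if f_cand < fx or random.random() < math.exp((fx - f_cand) / t):  # Metropolis rule
                    x, fx = cand, f_cand
                    if fx < f_best:
                        best, f_best = list(x), fx
                t *= alpha                                 # geometric cooling schedule
            return best, f_best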

  5. A new method for determining the optimal lagged ensemble

    Science.gov (United States)

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error (MSE). The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from the Climate Forecast System version 2 (CFSv2). For leads greater than a week, little improvement in MJO forecast skill is found when lagged ensembles extending more than 5 days back, or initializations more frequent than 4 times per day, are used. We find that if initialization is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
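
    Under the stated result that the optimal weights depend only on the cross-lead error covariance matrix, and assuming unbiased members with a sum-to-one constraint, the minimum-MSE weights take the familiar form w = C^{-1}1 / (1'C^{-1}1). The sketch below uses a synthetic covariance, not CFSv2 data.

        import numpy as np

        def optimal_lag_weights(C):
            """Minimum-variance weights for combining lagged members with error covariance C."""
            ones = np.ones(C.shape[0])
            Cinv_ones = np.linalg.solve(C, ones)
            return Cinv_ones / (ones @ Cinv_ones)    # w = C^{-1} 1 / (1' C^{-1} 1)

        # Older lags have larger error variance, so they receive smaller weights:
        C = np.diag([1.0, 1.3, 1.8, 2.5]) + 0.4      # 4 lagged members, correlated errors
        print(optimal_lag_weights(C))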

  6. GMI Instrument Spin Balance Method, Optimization, Calibration, and Test

    Science.gov (United States)

    Ayari, Laoucet; Kubitschek, Michael; Ashton, Gunnar; Johnston, Steve; Debevec, Dave; Newell, David; Pellicciotti, Joseph

    2014-01-01

    The Global Microwave Imager (GMI) instrument must spin at a constant rate of 32 rpm continuously for the 3-year mission life. GMI must therefore be very precisely balanced about its spin axis and center of gravity (CG) to maintain stable scan pointing and to minimize disturbances imparted to the spacecraft and its attitude control on orbit. The GMI instrument is part of the core Global Precipitation Measurement (GPM) spacecraft and is used to make calibrated radiometric measurements at multiple microwave frequencies and polarizations. The GPM mission is an international effort managed by the National Aeronautics and Space Administration (NASA) to improve climate, weather, and hydro-meteorological predictions through more accurate and frequent precipitation measurements. Ball Aerospace and Technologies Corporation (BATC) was selected by NASA Goddard Space Flight Center to design, build, and test the GMI instrument. The GMI design had to meet a challenging set of spin balance requirements and had to be brought into simultaneous static and dynamic spin balance after the entire instrument was already assembled and before environmental tests began. The focus of this contribution is on the analytical and test activities undertaken to meet these requirements. The novel process of measuring the residual static and dynamic imbalances with a very high level of accuracy and precision is presented, together with the prediction of the optimal balance masses and their locations.

  7. Optimizing methods and dodging pitfalls in microbiome research.

    Science.gov (United States)

    Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle

    2017-05-05

    Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.

  8. Topology optimization analysis based on the direct coupling of the boundary element method and the level set method

    Science.gov (United States)

    Vitório, Paulo Cezar; Leonel, Edson Denner

    2017-10-01

    The structural design must ensure suitable working conditions by meeting safety and economy criteria. However, the optimal solution is not easily found, because these conditions depend on the body's dimensions, material strength and structural system configuration. In this regard, topology optimization aims at the optimal structural geometry, i.e. the shape that requires the least material while respecting constraints on the stress state at each material point. The present study applies an evolutionary approach for determining the optimal geometry of 2D structures using the coupling of the boundary element method (BEM) and the level set method (LSM). The proposed algorithm consists of mechanical modelling, a topology optimization approach and structural reconstruction. The mechanical model is composed of singular and hyper-singular BEM algebraic equations. The topology optimization is performed through the LSM: internal and external geometries are evolved by the level set function evaluated at its zero level. The reconstruction process concerns remeshing: because the structural boundary moves at each iteration, the body's geometry changes and, consequently, a new mesh has to be defined. The proposed algorithm, based on the direct coupling of these approaches, introduces internal cavities automatically during the optimization process according to the intensity of the Von Mises stress. The developed optimization model was applied to two benchmarks available in the literature. Good agreement was observed among the results, which demonstrates its efficiency and accuracy.
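
    The stress-driven level set update at the heart of such a scheme can be sketched as follows; the BEM solve is abstracted behind a callable, phi > 0 marks material, and the threshold defining "low stress" is an illustrative parameter rather than the paper's criterion.

        import numpy as np

        def level_set_step(phi, von_mises, dt=0.5, kappa=0.4):
            """phi > 0 marks material; shrink it where the Von Mises stress is low."""
            speed = von_mises - kappa * von_mises.max()    # negative in low-stress regions
            return phi + dt * speed / (np.abs(speed).max() + 1e-12)

        def optimize(phi, solve_stress, n_steps=50):
            for _ in range(n_steps):
                sigma = solve_stress(phi)                  # e.g. the singular/hyper-singular BEM solve
                phi = level_set_step(phi, sigma)           # cavities appear where phi turns negative
            return phi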

  9. Promoting lower extremity strength in elite volleyball players: effects of two combined training methods.

    Science.gov (United States)

    Voelzke, Mathias; Stutzig, Norman; Thorhauer, Hans-Alexander; Granacher, Urs

    2012-09-01

    To compare the impact of short-term resistance plus plyometric training (RT+P) and electromyostimulation plus plyometric training (EMS+P) on explosive force production in elite volleyball players. Sixteen elite volleyball players of the first German division participated in a training study. The participants were randomly assigned to either the RT+P training group (n=8) or the EMS+P training group (n=8). Both groups completed a 5-week lower extremity exercise program. Pre and post tests included squat jumps (SJ), countermovement jumps (CMJ), and drop jumps (DJ) on a force plate. The three-step reach height (RH) was assessed using a custom-made Vertec apparatus. Fifteen-meter straight and lateral sprints (S15s and S15l) were assessed using photoelectric cells with split times at 5 m and 10 m. RT+P training resulted in significant improvements in SJ (+2.3%) and RH (+0.4%) performance. The EMS+P training group showed significant increases in CMJ (+3.8%), DJ (+6.4%), RH (+1.6%) and S15l (-3.8%) performance, and after 5 m and 10 m of the S15s (-2.6%; -0.5%). The comparison of training-induced changes between the two intervention groups revealed significant differences for the SJ (p=0.023) in favor of RT+P and for the S15s after 5 m (p=0.006) in favor of EMS+P. The results indicate that RT+P training is effective in promoting jump performance, while EMS+P training increases jump, speed and agility performance in elite volleyball players. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  10. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    ... computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different ... of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists ...

  11. OPTIMIZATION OF I-SECTION PROFILE DESIGN BY THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    Patryk Różyło

    2016-03-01

    This paper discusses the problem of design optimization for an I-section profile. The optimization process was performed using the Abaqus program. The numerical analysis of a strictly static problem was based on the finite element method. The scope of the analysis involved both the determination of stresses and displacements in the profile and structure topology optimization. The main focus of the numerical analysis was on reducing the profile volume while maintaining the same load and similar stresses before and after optimization. The solution of this optimization problem is just one example of the potential of combining topology optimization with the finite element method in the Abaqus environment. Numerical analysis is nowadays an effective, cost-reducing alternative to experimental tests, enabling structures to be examined by computer.

  12. Box-Behnken design: an alternative for the optimization of analytical methods.

    Science.gov (United States)

    Ferreira, S L C; Bruns, R E; Ferreira, H S; Matos, G D; David, J M; Brandão, G C; da Silva, E G P; Portugal, L A; dos Reis, P S; Souza, A S; dos Santos, W N L

    2007-08-06

    The present paper describes the fundamentals, advantages and limitations of the Box-Behnken design (BBD) for the optimization of analytical methods. It also compares this design with the central composite, three-level full factorial and Doehlert designs. A detailed study of the factors and responses involved in the optimization of analytical systems is also presented. Functions developed for the calculation of multiple responses are discussed, including the desirability function proposed by Derringer and Suich in 1980. The concept and evaluation of the robustness of analytical methods are also discussed. Finally, applications of this technique for the optimization of analytical methods are described.
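
    A Box-Behnken design can be constructed directly from its definition: every pair of factors takes all four +/-1 combinations while the remaining factors sit at their center level, plus replicated center points. The three-factor example below reproduces the classic 15-run design; the number of center points shown is a common choice, not a fixed rule.

        import itertools
        import numpy as np

        def box_behnken(k, n_center=3):
            """Coded (-1, 0, +1) Box-Behnken design matrix for k >= 3 factors."""
            rows = []
            for i, j in itertools.combinations(range(k), 2):       # each pair of factors
                for a, b in itertools.product((-1, 1), repeat=2):  # full 2^2 factorial on the pair
                    row = [0] * k
                    row[i], row[j] = a, b
                    rows.append(row)
            rows += [[0] * k] * n_center                           # replicated center points
            return np.array(rows)

        print(box_behnken(3).shape)   # (15, 3): the classic 15-run, 3-factor design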

  13. A new three-dimensional topology optimization method based on moving morphable components (MMCs)

    Science.gov (United States)

    Zhang, Weisheng; Li, Dong; Yuan, Jie; Song, Junfu; Guo, Xu

    2017-04-01

    In the present paper, a new method for solving the three-dimensional topology optimization problem is proposed. This method is constructed under the so-called moving morphable components based solution framework. The novel aspect of the proposed method is that a set of structural components is introduced to describe the topology of a three-dimensional structure, and the optimal structural topology is found by optimizing the layout of the components explicitly. The standard finite element method with ersatz material is adopted for structural response analysis, and the shape sensitivity analysis only needs to be carried out along the structural boundary. Compared to existing methods, the description of the structural topology is totally independent of the finite element/finite difference resolution in the proposed solution framework, and therefore the number of design variables can be reduced substantially. Several widely investigated benchmark examples in three-dimensional topology optimization are presented to demonstrate the effectiveness of the proposed approach.
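
    The explicit geometry description used by MMC-type methods can be illustrated with a small sketch: each component carries a topology description function (TDF) that is positive inside it, and the structure is the union of components, i.e. the pointwise maximum of the TDFs. A 2-D bar-shaped component with a p-norm TDF is shown for brevity; the paper's setting is three-dimensional, and the parameterization here is an assumption for illustration.

        import numpy as np

        def component_tdf(X, Y, x0, y0, half_len, half_thk, angle, p=6):
            """TDF of one bar-like component: positive inside, negative outside."""
            c, s = np.cos(angle), np.sin(angle)
            xl = c * (X - x0) + s * (Y - y0)               # component-local coordinates
            yl = -s * (X - x0) + c * (Y - y0)
            return 1.0 - (np.abs(xl) / half_len) ** p - (np.abs(yl) / half_thk) ** p

        X, Y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
        components = [(0.35, 0.5, 0.3, 0.05, 0.6), (0.65, 0.5, 0.3, 0.05, -0.6)]
        phi = np.max([component_tdf(X, Y, *c) for c in components], axis=0)  # union of components
        material = phi > 0    # the design variables are only the few component parameters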

  14. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the method and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  15. A novel optimized LCL-filter designing method for grid connected converter

    DEFF Research Database (Denmark)

    Guohong, Zeng; Rasmussen, Tonny Wederberg; Teodorescu, Remus

    2010-01-01

    This paper presents a new optimized LCL-filter design method for grid-connected voltage source converters. The method is based on an analysis of the converter output voltage components and the inherent relations among the LCL-filter parameters. By introducing an optimizing index, the equivalent total capacity of all filter components, with the clear physical meaning of minimum cost and volume, a set of optimal values of the attenuation ratio and inductance split ratio is obtained for deciding all LCL-filter parameters. With this method, the filter's overall capacity can be minimized while the grid limit on switching-frequency distortion is fulfilled. Compared to existing methods, the proposed method contains only four steps, without a trial-and-error process, so it is efficient and easy to implement. Simulation results of a 50kVA grid-connected inverter with two sets of LCL-filter parameters under different optimizing ...
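
    One standard ingredient of such design procedures is a resonance-frequency check on the candidate parameters: the LCL resonance should lie well above the grid frequency and well below the switching frequency. The sketch below encodes the common "10 f_grid < f_res < 0.5 f_sw" guideline with placeholder component values; the paper's optimizing index and parameter relations are not reproduced here.

        import math

        def lcl_resonance(L1, L2, Cf):
            """Resonance frequency of an LCL filter in Hz."""
            return math.sqrt((L1 + L2) / (L1 * L2 * Cf)) / (2 * math.pi)

        L1, L2, Cf = 2.0e-3, 0.5e-3, 10e-6          # converter-side L, grid-side L, filter C
        f_res = lcl_resonance(L1, L2, Cf)
        f_grid, f_sw = 50.0, 10e3
        assert 10 * f_grid < f_res < 0.5 * f_sw, f_res   # common design guideline
        print(f"resonance: {f_res:.0f} Hz")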

  16. Optimization of statistical methods impact on quantitative proteomics data

    NARCIS (Netherlands)

    Pursiheimo, A.; Vehmas, A.P.; Afzal, S.; Suomi, T.; Chand, T.; Strauss, L.; Poutanen, M.; Rokka, A.; Corthals, G.L.; Elo, L.L.

    2015-01-01

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data using both controlled

  17. Advanced Control Methods for Optimization of Arc Welding

    DEFF Research Database (Denmark)

    Thomsen, J. S.

    Gas Metal Arc Welding (GMAW) is a process used for joining pieces of metal. The GMAW process is probably the most successful and widely used welding method in industry today. A key issue in welding is the quality of the welds produced. The quality of a weld is influenced by several factors in ...

  19. Kernel method based human model for enhancing interactive evolutionary optimization.

    Science.gov (United States)

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model exactly, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is built on this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be represented linearly in the original discrete search space. The resulting human model predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that the proposed model and method can enhance IEC search significantly.
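
    A minimal sketch of the idea, under assumptions about the data: the user's bivalent (like/dislike) feedback is learned by a kernel classifier, which is linear in the induced feature space, and its decision value is then used as a surrogate fitness to pre-screen candidates before they are shown to the user.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(40, 5))             # individuals already rated by the user
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in for the user's bivalent ratings

        human_model = SVC(kernel="rbf").fit(X, y)        # linear in feature space, as above

        candidates = rng.uniform(-1, 1, size=(200, 5))
        scores = human_model.decision_function(candidates)
        shortlist = candidates[np.argsort(scores)[-8:]]  # show only promising candidates to the user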

  20. OPTIMIZING THE PAKS METHOD FOR MEASURING AIRBORNE ACROLEIN

    Science.gov (United States)

    Airborne acrolein is produced from the combustion of fuel and tobacco and is of concern due to its potential for respiratory tract irritation and other adverse health effects. DNPH active-sampling is a method widely used for sampling airborne aldehydes and ketones (carbonyls); ...