WorldWideScience

Sample records for extremal optimization methods

  1. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4.
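
    The update rule just described is compact enough to sketch directly. The following minimal sketch (ours, not code from the paper) illustrates the widely used τ-EO variant on an arbitrary problem: the caller supplies a per-variable fitness function and a mutation rule, the variables are ranked from worst to best, and the variable to be re-randomized is drawn with a probability that decays as a power law of its rank.

      import numpy as np

      def extremal_optimization(local_fitness, mutate, n_vars, steps=10000, tau=1.3, seed=0):
          """Minimal tau-EO sketch: repeatedly re-randomize poorly performing variables.

          local_fitness(state) -> array of per-variable fitness values (higher = better)
          mutate(state, i)     -> new state with variable i assigned a new random value
          """
          rng = np.random.default_rng(seed)
          state = rng.integers(0, 2, size=n_vars)          # arbitrary binary starting solution
          best_state, best_cost = state.copy(), -local_fitness(state).sum()

          ranks = np.arange(1, n_vars + 1, dtype=float)
          probs = ranks ** (-tau)                          # power-law rank selection, P(k) ~ k^(-tau)
          probs /= probs.sum()

          for _ in range(steps):
              fit = local_fitness(state)
              order = np.argsort(fit)                      # worst variable first
              k = rng.choice(n_vars, p=probs)              # biased toward the worst ranks
              state = mutate(state, order[k])              # unconditional move, no acceptance test
              cost = -local_fitness(state).sum()
              if cost < best_cost:                         # keep track of the best solution seen
                  best_state, best_cost = state.copy(), cost
          return best_state, best_cost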

  2. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with dimension N=30 show that IRPEO is competitive with, or even better than, various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
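
    A rough reading of the operations listed above can be turned into a short loop. In the sketch below (illustrative only: the per-component "badness" measure is our stand-in, namely distance from the current population best, and all parameter values are arbitrary), each individual has one badly performing component selected via a power-law distribution over ranks and reset by uniform random mutation, and the new population is accepted unconditionally.

      import numpy as np

      def population_eo(f, lower, upper, pop_size=20, generations=500, tau=1.5, seed=0):
          """Sketch of a real-coded population-based EO loop (not the authors' reference code)."""
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          dim = lower.size
          pop = rng.uniform(lower, upper, size=(pop_size, dim))     # real-coded random initial population

          ranks = np.arange(1, dim + 1, dtype=float)
          probs = ranks ** (-tau)                                   # power-law selection weights
          probs /= probs.sum()

          for _ in range(generations):
              fitness = np.apply_along_axis(f, 1, pop)              # individual fitness
              best = pop[np.argmin(fitness)]
              new_pop = pop.copy()
              for ind in new_pop:
                  order = np.argsort(-np.abs(ind - best))           # "worst" components first (stand-in measure)
                  j = order[rng.choice(dim, p=probs)]               # pick a bad component via power law
                  ind[j] = rng.uniform(lower[j], upper[j])          # uniform random mutation
              pop = new_pop                                         # accept the new population unconditionally

          fitness = np.apply_along_axis(f, 1, pop)
          i = np.argmin(fitness)
          return pop[i], fitness[i]

      # Example: 30-dimensional sphere function
      # x_best, f_best = population_eo(lambda x: np.sum(x**2), [-5.12]*30, [5.12]*30)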

  3. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    We show how composites with extremal or unusual thermal expansion coefficients can be designed using a numerical topology optimization method. The composites are composed of two different material phases and void. The optimization method is illustrated by designing materials having maximum therma...

  4. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    Composites with extremal or unusual thermal expansion coefficients are designed using a three-phase topology optimization method. The composites are made of two different material phases and a void phase. The topology optimization method consists in finding the distribution of material phases...... materials having maximum directional thermal expansion (thermal actuators), zero isotropic thermal expansion, and negative isotropic thermal expansion. It is shown that materials with effective negative thermal expansion coefficients can be obtained by mixing two phases with positive thermal expansion...

  5. Design of a Fractional Order Frequency PID Controller for an Islanded Microgrid: A Multi-Objective Extremal Optimization Method

    Directory of Open Access Journals (Sweden)

    Huan Wang

    2017-10-01

    Full Text Available Fractional order proportional-integral-derivative (FOPID) controllers have attracted increasing attention recently due to their better control performance than traditional integer-order proportional-integral-derivative (PID) controllers. However, there are only a few studies concerning the fractional order control of microgrids based on evolutionary algorithms. From the perspective of multi-objective optimization, this paper presents an effective FOPID-based frequency controller design method, called MOEO-FOPID, for an islanded microgrid. It uses a multi-objective extremal optimization (MOEO) algorithm to minimize frequency deviation and controller output signal simultaneously, ultimately improving the efficient operation of distributed generations and energy storage devices. Its superiority to nondominated sorting genetic algorithm-II (NSGA-II) based FOPID/PID controllers and other recently reported single-objective evolutionary algorithms, such as Kriging-based surrogate modeling and real-coded population extremal optimization-based FOPID controllers, is demonstrated by simulation studies on a typical islanded microgrid in terms of control performance, including frequency deviation, deficit grid power, controller output signal and robustness.
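
    For reference, the fractional order PID controller generalizes the integer-order PID by allowing non-integer integration and differentiation orders. In the Laplace domain it is commonly written (standard textbook form, not specific to the cited paper) as

      C(s) = K_p + K_i / s^λ + K_d · s^μ,   with λ, μ > 0,

    which reduces to the conventional PID controller when λ = μ = 1. A design method such as MOEO-FOPID therefore tunes five parameters (K_p, K_i, K_d, λ, μ) rather than the usual three.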

  6. Extreme Trust Region Policy Optimization for Active Object Recognition.

    Science.gov (United States)

    Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

    In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is represented by an extreme learning machine and therefore leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.

  7. Multiobjective optimization of an extremal evolution model

    International Nuclear Information System (INIS)

    Elettreby, M.F.

    2004-09-01

    We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system self-organizes into a critical state. The distribution of the distances between subsequent mutations, as well as the distribution of avalanche sizes, follows a power law. (author)

  8. Adaptive extremal optimization by detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Hamacher, K.

    2007-01-01

    Global optimization is one of the key challenges in computational physics, as several problems, e.g. protein structure prediction, the low-energy landscape of atomic clusters, detection of community structures in networks, or model-parameter fitting, can be formulated as global optimization problems. Extremal optimization (EO) has in recent years become a particularly successful approach to the global optimization problem. As with almost all other global optimization approaches, EO is driven by an internal dynamics that depends crucially on one or more parameters. Recently, the existence of an optimal scheme for this internal parameter of EO was proven, so as to maximize the performance of the algorithm. However, this proof was not constructive, that is, one cannot use it to deduce the optimal parameter itself a priori. In this study we analyze the dynamics of EO for a test problem (spin glasses). Based on the results, we propose an online measure of the performance of EO and a way to use this insight to reformulate the EO algorithm in order to construct optimal values of the internal parameter online, without any input by the user. This approach will ultimately allow us to make EO parameter free and thus its application in general global optimization problems much more efficient.
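
    As background, detrended fluctuation analysis (DFA) quantifies correlations in a time series, here for example the sequence of energies visited by the EO dynamics. A standard DFA implementation (textbook form; the paper's specific online scheme for adapting the EO parameter is not reproduced) looks like this:

      import numpy as np

      def dfa_exponent(x, window_sizes):
          """Plain detrended fluctuation analysis: returns the scaling exponent alpha."""
          x = np.asarray(x, float)
          profile = np.cumsum(x - x.mean())                 # integrated, mean-subtracted series
          flucts = []
          for n in window_sizes:
              n_windows = len(profile) // n
              segs = profile[:n_windows * n].reshape(n_windows, n)
              t = np.arange(n)
              f2 = []
              for seg in segs:
                  coeffs = np.polyfit(t, seg, 1)            # local linear trend in each window
                  detrended = seg - np.polyval(coeffs, t)
                  f2.append(np.mean(detrended ** 2))
              flucts.append(np.sqrt(np.mean(f2)))           # RMS fluctuation F(n)
          # scaling exponent alpha: slope of log F(n) versus log n
          alpha = np.polyfit(np.log(window_sizes), np.log(flucts), 1)[0]
          return alpha

      # e.g. alpha = dfa_exponent(energy_trace, [4, 8, 16, 32, 64])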

  9. Optimal security investments and extreme risk.

    Science.gov (United States)

    Mohtadi, Hamid; Agiwal, Swati

    2012-08-01

    In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. © 2012 Society for Risk Analysis.

  10. Discrete optimization in architecture: extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book comprises two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  11. Methods of mathematical optimization

    Science.gov (United States)

    Vanderplaats, G. N.

    The fundamental principles of numerical optimization methods are reviewed, with an emphasis on potential engineering applications. The basic optimization process is described; unconstrained and constrained minimization problems are defined; a general approach to the design of optimization software programs is outlined; and drawings and diagrams are shown for examples involving (1) the conceptual design of an aircraft, (2) the aerodynamic optimization of an airfoil, (3) the design of an automotive-engine connecting rod, and (4) the optimization of a 'ski-jump' to assist aircraft in taking off from a very short ship deck.

  12. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    Science.gov (United States)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue has persisted. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results besides being fast.
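
    For readers unfamiliar with the first half of the hybrid, a minimal extreme learning machine regressor amounts to a random hidden layer followed by a single least-squares solve for the output weights (generic textbook form, not the authors' exact model of the turning process); PSO is then wrapped around such a model to search the cutting-parameter space.

      import numpy as np

      def elm_train(X, y, n_hidden=50, seed=0):
          """Basic ELM regression: random, fixed hidden layer; output weights by least squares."""
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights (never trained)
          b = rng.normal(size=n_hidden)                     # random biases (never trained)
          H = np.tanh(X @ W + b)                            # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights, one linear solve
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Usage: W, b, beta = elm_train(X_train, y_train); y_hat = elm_predict(X_test, W, b, beta)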

  13. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  14. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    Full Text Available The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined in most cases on a finite set. While in combinatorial optimization the point is in developing efficient algorithms and heuristics for solving specified types of problems, the extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  15. Practical methods of optimization

    CERN Document Server

    Fletcher, R

    2013-01-01

    Fully describes optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. Also provides theoretical background which provides insights into how methods are derived. This edition offers rev

  16. Spatial planning via extremal optimization enhanced by cell-based local search

    International Nuclear Information System (INIS)

    Sidiropoulos, Epaminondas

    2014-01-01

    A new treatment is presented for land use planning problems by means of extremal optimization in conjunction with cell-based neighborhood local search. Extremal optimization, inspired by self-organized critical models of evolution, has been applied mainly to the solution of classical combinatorial optimization problems. Cell-based local search has been employed by the author elsewhere in problems of spatial resource allocation, in combination with genetic algorithms and simulated annealing. In this paper it complements extremal optimization in order to enhance its capacity for a spatial optimization problem. The hybrid method thus formed is compared to methods from the literature on a specific characteristic problem. It yields better results both in terms of objective function values and in terms of compactness. The latter is an important quantity for spatial planning. The present treatment yields significant compactness values as emergent results.

  17. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2008-01-01

    Optimization problems arising in practice involve random model parameters. This book features many illustrations, several examples, and applications to concrete problems from engineering and operations research.

  18. Portfolio optimization for heavy-tailed assets: Extreme Risk Index vs. Markowitz

    OpenAIRE

    Mainik, Georg; Mitov, Georgi; Rüschendorf, Ludger

    2015-01-01

    Using daily returns of the S&P 500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study seems to be the first application of extreme value techniques in portfolio management on a large scale. The primary aim of our investigation is the potential of ERI in p...

  19. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness in evaluating the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth in the amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’—Extreme Entropy Machine and Extremely Randomized Trees—for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were found not only to improve the efficiency of the classification of bioactive compounds, but also to be less computationally complex, requiring fewer steps to perform an optimization procedure.
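
    Of the two "extreme" variants tested, Extremely Randomized Trees is available off the shelf in scikit-learn, so the comparison with a random forest is easy to reproduce. The toy snippet below uses synthetic data purely for illustration (the study used compound-activity data, and Extreme Entropy Machine is not a scikit-learn estimator, so it is omitted):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # synthetic stand-in for a fingerprint/activity data set
      X, y = make_classification(n_samples=2000, n_features=100, n_informative=20, random_state=0)

      for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
                  ExtraTreesClassifier(n_estimators=200, random_state=0)):
          scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
          print(type(clf).__name__, scores.mean().round(3))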

  20. Optimal calibration of variable biofuel blend dual-injection engines using sparse Bayesian extreme learning machine and metaheuristic optimization

    International Nuclear Information System (INIS)

    Wong, Ka In; Wong, Pak Kin

    2017-01-01

    Highlights: • A new calibration method is proposed for dual-injection engines under biofuel blends. • Sparse Bayesian extreme learning machine and the flower pollination algorithm are employed in the proposed method. • An SI engine is retrofitted for operating under a dual-injection strategy. • The proposed method is verified experimentally under two idle speed conditions. • Comparison with other machine learning methods and optimization algorithms is conducted. - Abstract: Although many combinations of biofuel blends are available in the market, it is more beneficial to vary the ratio of biofuel blends at different engine operating conditions for optimal engine performance. Dual-injection engines have the potential to implement such a function. However, while optimal engine calibration is critical for achieving high performance, the use of two injection systems, together with other modern engine technologies, makes the calibration of dual-injection engines a very complicated task. The traditional trial-and-error-based calibration approach can no longer be adopted, as it would be time-, fuel- and labor-consuming. Therefore, a new and fast calibration method based on sparse Bayesian extreme learning machine (SBELM) and metaheuristic optimization is proposed to optimize dual-injection engines operating with biofuels. A dual-injection spark-ignition engine fueled with ethanol and gasoline is employed for demonstration purposes. The engine response for various parameters is first acquired, and an engine model is then constructed using SBELM. With the engine model, the optimal engine settings are determined using recently proposed metaheuristic optimization methods. Experimental results validate the optimal settings obtained with the proposed methodology, indicating that the use of machine learning and metaheuristic optimization for dual-injection engine calibration is effective and promising.

  1. Analytical methods of optimization

    CERN Document Server

    Lawden, D F

    2006-01-01

    Suitable for advanced undergraduates and graduate students, this text surveys the classical theory of the calculus of variations. It takes the approach most appropriate for applications to problems of optimizing the behavior of engineering systems. Two of these problem areas have strongly influenced this presentation: the design of the control systems and the choice of rocket trajectories to be followed by terrestrial and extraterrestrial vehicles.Topics include static systems, control systems, additional constraints, the Hamilton-Jacobi equation, and the accessory optimization problem. Prereq

  2. Interactive Nonlinear Multiobjective Optimization Methods

    OpenAIRE

    Miettinen, Kaisa; Hakanen, Jussi; Podkopaev, Dmitry

    2016-01-01

    An overview of interactive methods for solving nonlinear multiobjective optimization problems is given. In interactive methods, the decision maker progressively provides preference information so that the most satisfactory Pareto optimal solution can be found for him or her. The basic features of several methods are introduced and some theoretical results are provided. In addition, references to modifications and applications as well as to other methods are indicated. As the...

  3. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.

  4. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

    Merging logic and mathematics in deductive inference-an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though ""solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks. . . it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs."" Presenting powerful, proven optimization techniques for logic in

  5. Optimized Extreme Learning Machine for Power System Transient Stability Prediction Using Synchrophasors

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2015-01-01

    Full Text Available A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model, and the parameters of the model are optimized by using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Finally, based on the test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.

  6. Optimization methods in structural design

    CERN Document Server

    Rothwell, Alan

    2017-01-01

    This book offers an introduction to numerical optimization methods in structural design. Employing a readily accessible and compact format, the book presents an overview of optimization methods, and equips readers to properly set up optimization problems and interpret the results. A ‘how-to-do-it’ approach is followed throughout, with less emphasis at this stage on mathematical derivations. The book features spreadsheet programs provided in Microsoft Excel, which allow readers to experience optimization ‘hands-on.’ Examples covered include truss structures, columns, beams, reinforced shell structures, stiffened panels and composite laminates. For the last three, a review of relevant analysis methods is included. Exercises, with solutions where appropriate, are also included with each chapter. The book offers a valuable resource for engineering students at the upper undergraduate and postgraduate level, as well as others in the industry and elsewhere who are new to these highly practical techniques.Whi...

  7. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    Science.gov (United States)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.

  8. A new method of lower extremity immobilization in radiotherapy

    International Nuclear Information System (INIS)

    Zheng, Xuhai; Dai, Tangzhi; Shu, Xiaochuan; Pu, Yuanxue; Feng, Gang; Li, Xuesong; Liao, Dongbiao; Du, Xiaobo

    2012-01-01

    We developed a new method for immobilization of the lower extremities using a thermoplastic mask, a carbon fiber base plate, a customized headrest, and an adjustable angle holder. The lower extremities of 11 patients with lower extremity tumors were immobilized by this method. CT simulation was performed for each patient. For all 11 patients, the device fit was suitable and comfortable and had good reproducibility, which was proven in daily radiotherapy.

  9. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Directory of Open Access Journals (Sweden)

    Xiguang Li

    2017-01-01

    Full Text Available Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm appears much superior to the other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, and the effect is considerable. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers can achieve higher accuracy and better stability to some extent.

  10. Optimization of Medical Teaching Methods

    Directory of Open Access Journals (Sweden)

    Wang Fei

    2015-12-01

    Full Text Available In order to achieve the goals of medical education and adapt to changes in the way doctors work, medical teaching methods must be reformed in step with the rapid development of modern science and technology. Based on the current status of teaching in medical colleges, this paper analyzes the formation, development and characteristics of medical teaching methods and discusses how to achieve optimal teaching methods, providing a theoretical basis for teachers and administrators in medical education to comprehensively and thoroughly change their teaching ideas and concepts.

  11. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  12. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
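
    The linear quadratic regulator at the heart of this material comes down to solving one algebraic Riccati equation. A minimal numerical illustration (a generic double-integrator example with assumed weighting matrices, not taken from the book) using SciPy:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Continuous-time LQR: minimize J = integral of (x'Qx + u'Ru) dt subject to xdot = Ax + Bu
      A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator dynamics
      B = np.array([[0.0], [1.0]])
      Q = np.diag([1.0, 1.0])                   # state weighting (assumed for the example)
      R = np.array([[1.0]])                     # control weighting (assumed for the example)

      P = solve_continuous_are(A, B, Q, R)      # solves the algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
      print(K)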

  13. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, which means that the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Unlike topology optimization methods that use an Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case where mixing is dominated completely by convection with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  14. Aero Engine Component Fault Diagnosis Using Multi-Hidden-Layer Extreme Learning Machine with Optimized Structure

    Directory of Open Access Journals (Sweden)

    Shan Pang

    2016-01-01

    Full Text Available A new aero gas turbine engine gas path component fault diagnosis method based on a multi-hidden-layer extreme learning machine with optimized structure (OM-ELM) is proposed. OM-ELM employs quantum-behaved particle swarm optimization to automatically obtain the optimal network structure according to both the root mean square error on the training data set and the norm of the output weights. The proposed method is applied to a handwritten recognition data set and a gas turbine engine diagnostic application, and is compared with the basic ELM, the multi-hidden-layer ELM, and two state-of-the-art deep learning algorithms: the deep belief network and the stacked denoising autoencoder. Results show that, with an optimized network structure, OM-ELM obtains better test accuracy in both applications and is more robust to sensor noise. Meanwhile, it controls model complexity and needs far fewer hidden nodes than the multi-hidden-layer ELM, thus saving computer memory and making it more efficient to implement. All these advantages make our method an effective and reliable tool for engine component fault diagnosis.

  15. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, through analysis of the actual scene, it is determined that pressure prediction at each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the prediction model based on PSOSVM in speed and accuracy and superior to the KELM prediction model in accuracy.

  16. Adaptive scalarization methods in multiobjective optimization

    CERN Document Server

    Eichfelder, Gabriele

    2008-01-01

    This book presents adaptive solution methods for multiobjective optimization problems based on parameter dependent scalarization approaches. Readers will benefit from the new adaptive methods and ideas for solving multiobjective optimization.

  17. Optimal adaptation to extreme rainfalls in current and future climate

    DEFF Research Database (Denmark)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate...

  18. Optimal adaptation to extreme rainfalls under climate change

    Science.gov (United States)

    Rosbjerg, Dan

    2017-04-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.
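
    The optimization described here can be illustrated in a few lines. In the sketch below the cost coefficients are invented for the example; only the log-linear cost forms and the idea of minimizing expected annual damage plus annual adaptation cost over the design return period T follow the abstract.

      import numpy as np

      # Illustrative cost model with assumed coefficients (not the paper's calibrated values).
      T = np.linspace(2, 500, 2000)                 # candidate design return periods, years

      damage_per_event = 10.0 + 3.0 * np.log(T)     # log-linear damage of an above-design flood
      expected_annual_damage = damage_per_event / T # exceeded on average once every T years
      adaptation_cost = 1.0 + 0.8 * np.log(T)       # annual capital + operational cost of protection

      total = expected_annual_damage + adaptation_cost
      T_opt = T[np.argmin(total)]                   # return period minimizing total annual cost
      print(f"optimal design return period ~ {T_opt:.0f} years")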

  19. Optimal adaptation to extreme rainfalls in current and future climate

    Science.gov (United States)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.

  20. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    Science.gov (United States)

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event

  1. The selective dynamical downscaling method for extreme-wind atlases

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    A selective dynamical downscaling method is developed to obtain extreme-wind atlases for large areas. The method is general, efficient and flexible. The method consists of three steps: (i) identifying storm episodes for a particular area, (ii) downscaling of the storms using mesoscale modelling and (iii) post-processing. The post-processing generalizes the winds from the mesoscale modelling to standard conditions, i.e. 10-m height over a homogeneous surface with roughness length of 5 cm. The generalized winds are then used to calculate the 50-year wind using the annual maximum method for each mesoscale grid point. The generalization of the mesoscale winds through the post-processing provides a framework for data validation and for applying further the mesoscale extreme winds at specific places using microscale modelling. The results are compared with measurements from two areas with different...
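
    The annual maximum method mentioned in step (iii) is, in its simplest form, a Gumbel fit to a series of annual maxima followed by evaluation at the 50-year return level. An illustrative sketch with stand-in data (the atlas work applies this to generalized winds at every mesoscale grid point):

      import numpy as np
      from scipy.stats import gumbel_r

      # Stand-in annual maximum winds (m/s) generated for illustration only.
      annual_max_wind = gumbel_r.rvs(loc=22.0, scale=3.0, size=30, random_state=1)

      loc, scale = gumbel_r.fit(annual_max_wind)                   # fit Gumbel to annual maxima
      u50 = gumbel_r.ppf(1.0 - 1.0 / 50.0, loc=loc, scale=scale)   # 50-year return wind
      print(round(u50, 1), "m/s")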

  2. Game theory and extremal optimization for community detection in complex dynamic networks.

    Science.gov (United States)

    Lung, Rodica Ioana; Chira, Camelia; Andreica, Anca

    2014-01-01

    The detection of evolving communities in dynamic complex networks is a challenging problem that recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic communities detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.

  3. OPTIMIZATION METHODS AND SEO TOOLS

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2014-06-01

    Full Text Available SEO is the activity of optimizing Web pages or whole sites in order to make them more search engine friendly, thus getting higher positions in search results. Search engine optimization (SEO involves designing, writing, and coding a website in a way that helps to improve the volume and quality of traffic to your website from people using search engines. While Search Engine Optimization is the focus of this booklet, keep in mind that it is one of many marketing techniques. A brief overview of other marketing techniques is provided at the end of this booklet.

  4. Clinical application of lower extremity CTA and lower extremity perfusion CT as a method of diagnostic for lower extremity atherosclerotic obliterans

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Il Bong; Dong, Kyung Rae [Dept. Radiological Technology, Gwangju Health University, Gwangju (Korea, Republic of); Goo, Eun Hoe [Dept. Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2016-11-15

    The purpose of this study was to assess the clinical application of lower extremity CTA and lower extremity perfusion CT as diagnostic methods for lower extremity atherosclerotic obliterans. From January to July 2016, 30 patients (mean age, 68) were studied with lower extremity CTA and lower extremity perfusion CT. 128-channel multi-detector row CT scans were acquired with a CT scanner (SOMATOM Definition Flash, Siemens Medical Solutions, Germany) for lower extremity perfusion CT and lower extremity CTA. Acquired images were reconstructed with a 3D workstation (Leonardo, Siemens, Germany). Lower extremity arterial occlusive and stenotic lesions were detected in the superficial femoral artery (36.6%), popliteal artery (23.4%), external iliac artery (16.7%), common femoral artery (13.3%) and peroneal artery (10%). The mean total DLP was 650 mGy-cm for lower extremity perfusion CT and 675 mGy-cm for lower extremity CTA. Lower extremity perfusion CT and lower extremity CTA never depicted exactly the same lesions. Future development of lower extremity perfusion CT software programs suggests possible clinical applications.

  5. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
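
    For a single stiff equation dy/dt = F_plus - k*y, the explicit asymptotic update referred to above replaces the forward-Euler step with an algebraically stabilized one. A toy illustration (our own, with arbitrary constants; the partial-equilibrium machinery for full reaction networks is not shown):

      # Toy stiff problem: dy/dt = F_plus - k*y with F_plus = 1, k = 1000.
      F_plus, k = 1.0, 1000.0
      dt, steps = 0.01, 100                              # dt*k = 10: plain explicit Euler would blow up

      y_asym = 0.0
      for _ in range(steps):
          # explicit asymptotic update: treat the destruction term implicitly, algebraically
          y_asym = (y_asym + dt * F_plus) / (1.0 + dt * k)

      print(y_asym, F_plus / k)                          # converges stably toward the equilibrium value 0.001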

  6. Biologically inspired optimization methods an introduction

    CERN Document Server

    Wahde, M

    2008-01-01

    The advent of rapid, reliable and cheap computing power over the last decades has transformed many, if not most, fields of science and engineering. The multidisciplinary field of optimization is no exception. First of all, with fast computers, researchers and engineers can apply classical optimization methods to problems of larger and larger size. In addition, however, researchers have developed a host of new optimization algorithms that operate in a rather different way than the classical ones, and that allow practitioners to attack optimization problems where the classical methods are either not applicable or simply too costly (in terms of time and other resources) to apply. This book is intended as a course book for introductory courses in stochastic optimization algorithms (in this book, the terms optimization method and optimization algorithm will be used interchangeably), and it has grown from a set of lecture notes used in courses, taught by the author, at the international master programme Complex Ada...

  7. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.

  8. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    OpenAIRE

    Fu Yu; Mu Jiong; Duan Xu Liang

    2016-01-01

    By means of a model of the extreme learning machine based upon DE optimization, this article centers on the optimization thinking behind such a model as well as its application to the classification of listed companies' financial positions. Comparison proves that the improved extreme learning machine algorithm based upon DE optimization eclipses the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research...

  9. Using qualimetric engineering and extremal analysis to optimize a proton exchange membrane fuel cell stack

    International Nuclear Information System (INIS)

    Besseris, George J.

    2014-01-01

    Highlights: • We consider the optimal configuration of a PEMFC stack. • We utilize qualimetric engineering tools (Taguchi screening, regression analysis). • We achieve an analytical solution on a restructured power-law fitting. • We discuss the Pt-cost involvement in the unit and area minimization scope. - Abstract: The optimal configuration of the proton exchange membrane fuel-cell (PEMFC) stack has received attention recently because of its potential use as an isolated energy distributor for household needs. In this work, the original complex problem of generating an optimal PEMFC stack, based on the number of cell units connected in series and parallel arrangements as well as on the cell area, is revisited. A qualimetric engineering strategy is formulated, based on quickly profiling the PEMFC stack voltage response. Stochastic screening is initiated by employing an L9(3^3) Taguchi-type OA to partition numerically the deterministic expression of the output PEMFC stack voltage, so as to facilitate sizing the magnitude of the individual effects. The power and current household specifications for the stack system are maintained at the typical settings of 200 W at 12 V, respectively. The minimization of the stack total-area requirement becomes explicit in this work. The relationship of cell voltage against cell area is cast into a power-law model by regression fitting, which achieves a coefficient of determination value of 99.99%. Thus, the theoretical formulation simplifies into a non-linear extremal problem with a constrained solution due to a singularity, which is solved analytically. The optimal solution requires 22 cell units connected in series, where each unit is designed with an area of 151.4 cm². It is also demonstrated how to visualize the optimal solution using the graphical method of operating lines. The total area of 3270.24 cm² becomes a new benchmark for the optimal design of the studied PEMFC stack configuration. It is

  10. Bootstrapping conformal field theories with the extremal functional method.

    Science.gov (United States)

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.

  11. Tax optimization methods of international companies

    OpenAIRE

    Černá, Kateřina

    2015-01-01

    This thesis focuses on methods of tax optimization used by international companies. These international concerns endeavor to minimize their taxes. The disparity of tax systems gives these companies the possibility of shifting profits and tax bases. First, this thesis compares the differences between tax optimization, aggressive tax planning and tax evasion. Among the optimization methods described in this thesis are tax residency, dividends, royalty payments, tra...

  12. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  13. Intelligent structural optimization: Concept, Model and Methods

    International Nuclear Information System (INIS)

    Lu, Dagang; Wang, Guangyuan; Peng, Zhang

    2002-01-01

    Structural optimization has many characteristics of Soft Design, and so it is necessary to apply the experience of human experts to solving the uncertain and multidisciplinary optimization problems in large-scale and complex engineering systems. With the development of artificial intelligence (AI) and computational intelligence (CI), the theory of structural optimization is now developing in the direction of intelligent optimization. In this paper, a concept of Intelligent Structural Optimization (ISO) is proposed. Then, a design process model of ISO is put forward in which each design sub-process model is discussed. Finally, the design methods of ISO are presented

  14. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land-use change, or climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data has been lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses lacking bank stability criteria are inappropriate for use is shown to be false, since both MFN and 'ME and MEDR' lack such criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.

  15. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as for the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimation and prediction obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
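
    For readers unfamiliar with the technique, the core of ELM is a randomly drawn hidden layer whose output weights are obtained in closed form by least squares. The sketch below is a generic ELM regressor on synthetic stand-in inputs (thickness, K-value); the paper's EnergyPlus data, network size, and activation choice are not reproduced.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train a basic extreme learning machine for regression:
    random input weights, analytic least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, no iteration
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative stand-in for the paper's EnergyPlus simulations:
# inputs = [material thickness, K-value], output = energy use.
rng = np.random.default_rng(1)
X = rng.uniform([0.05, 0.02], [0.40, 0.10], size=(180, 2))
y = 120.0 - 150.0 * X[:, 0] + 800.0 * X[:, 1] + rng.normal(0, 2.0, 180)

W, b, beta = elm_fit(X[:150], y[:150])
pred = elm_predict(X[150:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean((pred - y[150:]) ** 2)))
```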

  16. Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier

    Directory of Open Access Journals (Sweden)

    C. V. Subbulakshmi

    2015-01-01

    Full Text Available Medical data classification is a prime data mining problem that has been discussed for about a decade and has attracted several researchers around the world. Most classifiers are designed to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm. This paradigm integrates the successful exploration mechanism called the self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden-layer feedforward neural network (FFNN), proven to be an excellent classifier with a large number of hidden-layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden-layer neurons, and it further improves the network generalization performance. The proposed method is tested on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.
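
    A minimal sketch of the PSO-plus-ELM idea follows: a plain global-best PSO tunes two ELM hyperparameters (hidden-layer scale and regularization strength) on synthetic stand-in data. The self-regulated learning variant, the UCI datasets, and the parameter ranges used in the paper are not reproduced here; all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a UCI medical dataset: 2 classes, 6 features.
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.2).astype(int)
T = np.eye(2)[y]                                   # one-hot targets
Xtr, Xte, Ttr, yte = X[:200], X[200:], T[:200], y[200:]

# Fixed random hidden layer; PSO tunes its scale and the regularization.
W0 = rng.normal(size=(6, 40))
b0 = rng.normal(size=40)

def fitness(p):
    """Accuracy of a regularized ELM for a (hidden scale, log10 C) pair."""
    scale, log_c = p
    H = np.tanh(scale * (Xtr @ W0 + b0))
    beta = np.linalg.solve(H.T @ H + np.eye(40) / 10.0 ** log_c, H.T @ Ttr)
    Hte = np.tanh(scale * (Xte @ W0 + b0))
    return np.mean(np.argmax(Hte @ beta, axis=1) == yte)

# Plain global-best PSO over (scale, log10 C).
n, iters = 15, 40
pos = rng.uniform([0.1, -3.0], [3.0, 3.0], size=(n, 2))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pval)]
for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([fitness(p) for p in pos])
    improved = val > pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pval)]
print("best (scale, log10 C):", gbest, "accuracy:", pval.max())
```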

  17. A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.

    Science.gov (United States)

    Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong

    2017-01-01

    It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images contain so much information that manual analysis is time-consuming and tedious. Among the various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain, and it is now widely applied in hospitals, because it is helpful in diagnosis, prognosis, and pre-surgical and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed a 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10×10-fold cross-validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection.
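
    The feature-extraction step can be sketched as below, assuming a 2-D DWT via PyWavelets and a Shannon entropy per subband; the wavelet family, decomposition level, and the bat-algorithm training stage are assumptions for illustration, and the random array merely stands in for an MR slice.

```python
import numpy as np
import pywt

def wavelet_entropy_features(image, wavelet="db4", level=2):
    """2-D DWT of a brain slice; return the Shannon entropy of every
    subband as a compact feature vector (a sketch of the paper's idea)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        p = np.abs(band).ravel()
        p = p / (p.sum() + 1e-12)                 # normalize to a distribution
        feats.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(feats)

# Illustrative use on a random array; real inputs would be MR slices.
img = np.random.default_rng(0).random((128, 128))
print(wavelet_entropy_features(img))
```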

  18. DESIGN OPTIMIZATION METHOD USED IN MECHANICAL ENGINEERING

    Directory of Open Access Journals (Sweden)

    SCURTU Iacob Liviu

    2016-11-01

    Full Text Available This paper presents an optimization study in mechanical engineering. The first part of the research describes the structural optimization method used, followed by a presentation of several optimization studies conducted in recent years. The second part of the paper presents the CAD modelling of an agricultural plough component. The beam of the plough is analysed using the finite element method. The plough component is meshed with solid elements, and a load case which mimics the working conditions of this agricultural equipment is created. After the FEA study is done, the model is prepared for finding the optimal structural design. Mass reduction of the part is the criterion applied in this optimization study. The end of this research presents the final results and the optimized shape of the model.

  19. Optimization and Modeling of Extreme Freshwater Discharge from Japanese First-Class River Basins to Coastal Oceans

    Science.gov (United States)

    Kuroki, R.; Yamashiki, Y. A.; Varlamov, S.; Miyazawa, Y.; Gupta, H. V.; Racault, M.; Troselj, J.

    2017-12-01

    We estimated the effects of extreme fluvial outflow events from river mouths on the salinity distribution in the Japanese coastal zones. The targeted extreme event was a typhoon from 06/09/2015 to 12/09/2015, and we generated a set of hourly simulated river outflow data of all Japanese first-class rivers from these basins to the Pacific Ocean and the Sea of Japan during this period by using our model "Cell Distributed Runoff Model Version 3.1.1 (CDRMV3.1.1)". The model simulated fresh water discharges for the case of the typhoon passage over Japan. We used these data with the coupled hydrological-oceanographic model JCOPE-T, developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to estimate the circulation and salinity distribution in Japanese coastal zones. Using the model, the coastal oceanic circulation was reproduced adequately, which was verified by satellite remote sensing. In addition, we successfully optimized five parameters (soil roughness coefficient, river roughness coefficient, effective porosity, saturated hydraulic conductivity, and effective rainfall) by using the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA), which is an optimization method for hydrological models. Increasing the accuracy of peak discharge prediction at river mouths during extreme typhoon events is essential for modeling continental-oceanic mutual interaction.

  20. Feature selection in wind speed prediction systems based on a hybrid coral reefs optimizationExtreme learning machine approach

    International Nuclear Information System (INIS)

    Salcedo-Sanz, S.; Pastor-Sánchez, A.; Prieto, L.; Blanco-Aguilera, A.; García-Herrera, R.

    2014-01-01

    Highlights: • A novel approach for short-term wind speed prediction is presented. • The system is formed by a coral reefs optimization algorithm and an extreme learning machine. • Feature selection is carried out with the CRO to improve the ELM performance. • The method is tested on real wind farm data in the USA, for the period 2007–2008. - Abstract: This paper presents a novel approach for short-term wind speed prediction based on a Coral Reefs Optimization algorithm (CRO) and an Extreme Learning Machine (ELM), using meteorological predictive variables from a physical model (the Weather Research and Forecast model, WRF). The approach is based on a Feature Selection Problem (FSP) carried out with the CRO, which must obtain a reduced number of predictive variables out of the total available from the WRF. This set of features is the input of an ELM, which finally provides the wind speed prediction. The CRO is a novel bio-inspired approach, based on the simulation of reef formation and coral reproduction, able to obtain excellent results in optimization problems. On the other hand, the ELM is a new paradigm in neural network training that provides robust and extremely fast training of the network. Together, these algorithms are able to successfully solve this problem of feature selection in short-term wind speed prediction. Experiments on a real wind farm in the USA show the excellent performance of the CRO–ELM approach in this FSP wind speed prediction problem

  1. OPTIMIZATION METHODS IN TRANSPORTATION OF FOREST PRODUCTS

    Directory of Open Access Journals (Sweden)

    Selçuk Gümüş

    2008-04-01

    Full Text Available Turkey has a total of 21.2 million ha (27 % of the country's area) of forest land. In this area, an average of 9 million m³ of logs and 5 million steres of fuel wood are produced annually by the government forest enterprises. The total annual production is approximately 13 million m³. Considering that the cost of transporting forest products was about 160 million TL in 2006, the importance of optimizing total transportation costs can be better understood. Today, there is no single optimization method used for all transportation problems; rather, decision makers select the most appropriate methods according to their aims. Understanding the features and capacities of optimization methods is important for selecting the most appropriate method. The evaluation of optimization methods that can be used in the transportation of forest products is the aim of this study.

  2. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    Directory of Open Access Journals (Sweden)

    Fu Yu

    2016-01-01

    Full Text Available By means of an extreme learning machine model based on DE optimization, this article centers on the optimization thinking behind such a model as well as its application in classifying the financial position of listed companies. A comparison shows that the improved extreme learning machine algorithm based on DE optimization outperforms the traditional extreme learning machine algorithm. Meanwhile, this article also introduces research thinking concerning the extreme learning machine into the economics classification area, so as to fulfill the purpose of computerizing the speedy but effective evaluation of the massive financial statements of listed companies pertaining to different classes
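
    For reference, a minimal differential-evolution (DE/rand/1/bin) sketch is given below, assuming that "DE" here denotes differential evolution, as is standard in this literature. The objective is a toy function standing in for the ELM's classification error on financial-statement features, and the population size and control parameters F and CR are generic choices, not the article's settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=0):
    """Minimal DE/rand/1/bin sketch; in the article's setting f would score
    an ELM classifier on financial-statement features."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct partners and build the mutant vector.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            c_trial = f(trial)
            if c_trial <= cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = np.argmin(cost)
    return pop[best], cost[best]

# Usage on a toy objective (stand-in for ELM classification error).
x, fx = differential_evolution(lambda v: np.sum(v ** 2), [(-5, 5)] * 4)
print(x, fx)
```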

  3. Engineering applications of heuristic multilevel optimization methods

    Science.gov (United States)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  4. Method optimization of ocular patches

    Directory of Open Access Journals (Sweden)

    Kamalesh Upreti

    2012-01-01

    Full Text Available The intraocular patches were prepared using gelatin as the polymer, by the solvent casting method. The patches were prepared for six formulations: GP1, GP2, GP3, GP4, GP5 and GP6. Petri dishes were used for casting the ocular patches. Gelatin was the polymer of choice, glutaraldehyde was used as the cross-linking agent, and dimethylsulfoxide (DMSO) was used as a solubility enhancer. The elasticity depends upon the concentration of gelatin; 400 mg of the polymer (gelatin) gave the required elasticity for the formulation.

  5. Extreme learning machine based optimal embedding location finder for image steganography.

    Directory of Open Access Journals (Sweden)

    Hayfaa Abdulzahra Atee

    Full Text Available In image steganography, determining the optimum location for embedding the secret message precisely, with minimum distortion of the host medium, remains a challenging issue. Yet an effective approach for selecting the best embedding location with the least deformation is far from being achieved. To attain this goal, we propose a novel high-performance approach for image steganography in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. This ELM is first trained on a part of an image or any host medium before being tested in the regression mode. This allows us to choose the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform the existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to existing state-of-the-art methods.

  6. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
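
    A rough numerical illustration of the two-stage queue idea, using the textbook mean-delay formulas for M/M/1 and M/D/1 stations (the latter from the Pollaczek-Khinchine result): the arrival and service rates below are invented, and the fuzziness and cost terms of the paper's model are omitted.

```python
def mm1_wait(lam, mu):
    """Mean waiting time in queue for an M/M/1 station (requires lam < mu)."""
    rho = lam / mu
    return rho / (mu - lam)

def md1_wait(lam, mu):
    """Mean waiting time in queue for an M/D/1 station: with deterministic
    service the Pollaczek-Khinchine delay is half the M/M/1 value."""
    rho = lam / mu
    return rho / (2.0 * mu * (1.0 - rho))

# Illustrative numbers (not from the paper): storm inflows arriving at
# 8 per hour into a catchment stage that drains 10 per hour, feeding a
# pipe stage that discharges a fixed batch 12 times per hour.
lam = 8.0
total_delay = mm1_wait(lam, mu=10.0) + md1_wait(lam, mu=12.0)
print(f"mean time spent waiting across the two stages: {total_delay:.3f} h")
```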

  7. Optimal bounds and extremal trajectories for time averages in dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.

  8. Evolutionary optimization methods for accelerator design

    Science.gov (United States)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as: ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained

  9. Intelligent fault diagnosis of photovoltaic arrays based on optimized kernel extreme learning machine and I-V characteristics

    International Nuclear Information System (INIS)

    Chen, Zhicong; Wu, Lijun; Cheng, Shuying; Lin, Peijie; Wu, Yue; Lin, Wencheng

    2017-01-01

    Highlights: •An improved Simulink-based modeling method is proposed for PV modules and arrays. •Key points of I-V curves and PV model parameters are used as the feature variables. •Kernel extreme learning machine (KELM) is explored for PV array fault diagnosis. •The parameters of the KELM algorithm are optimized by the Nelder-Mead simplex method. •The optimized KELM fault diagnosis model achieves high accuracy and reliability. -- Abstract: Fault diagnosis of photovoltaic (PV) arrays is important for improving the reliability, efficiency and safety of PV power stations, because PV arrays usually operate in a harsh outdoor environment and tend to suffer various faults. Due to the nonlinear output characteristics and varying operating environment of PV arrays, many machine learning based fault diagnosis methods have been proposed. However, some issues still exist: fault diagnosis performance is limited by insufficient monitored information; fault diagnosis models are not efficient to train and update; and labeled fault data samples are hard to obtain by field experiments. To address these issues, this paper makes contributions in the following three aspects: (1) based on the key points and model parameters extracted from monitored I-V characteristic curves and the environment condition, an effective and efficient feature vector of seven dimensions is proposed as the input of the fault diagnosis model; (2) the emerging kernel based extreme learning machine (KELM), which features extremely fast learning speed and good generalization performance, is utilized to automatically establish the fault diagnosis model. Moreover, the Nelder-Mead Simplex (NMS) optimization method is employed to optimize the KELM parameters which affect the classification performance; (3) an improved accurate Simulink based PV modeling approach is proposed for a laboratory PV array to facilitate the fault simulation and data sample acquisition. Intensive fault experiments are
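
    The KELM-plus-Nelder-Mead combination can be sketched compactly: a kernel ELM with an RBF kernel has a closed-form solution for its output weights, and a derivative-free simplex search tunes the kernel width and regularization. The data below are synthetic stand-ins for the seven-dimensional feature vectors, so this is an illustration of the idea rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Synthetic stand-in for the 7-dimensional I-V/environment feature vectors.
X = rng.normal(size=(240, 7))
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)
T = np.eye(2)[y]
Xtr, Xval, Ttr, yval = X[:160], X[160:], T[:160], y[160:]

def kelm_error(params):
    """Validation error of an RBF-kernel ELM for given log10(gamma), log10(C)."""
    gamma, C = 10.0 ** params[0], 10.0 ** params[1]
    K = np.exp(-gamma * cdist(Xtr, Xtr, "sqeuclidean"))
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / C, Ttr)   # output weights
    Kval = np.exp(-gamma * cdist(Xval, Xtr, "sqeuclidean"))
    pred = np.argmax(Kval @ alpha, axis=1)
    return np.mean(pred != yval)

# Derivative-free Nelder-Mead simplex search over the two kernel parameters.
res = minimize(kelm_error, x0=[0.0, 0.0], method="Nelder-Mead")
print("best log10(gamma), log10(C):", res.x, "validation error:", res.fun)
```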

  10. A topological derivative method for topology optimization

    DEFF Research Database (Denmark)

    Norato, J.; Bendsøe, Martin P.; Haber, RB

    2007-01-01

    We propose a fictitious domain method for topology optimization in which a level set of the topological derivative field for the cost function identifies the boundary of the optimal design. We describe a fixed-point iteration scheme that implements this optimality criterion subject to a volumetric resource constraint. A smooth and consistent projection of the region bounded by the level set onto the fictitious analysis domain simplifies the response analysis and enhances the convergence of the optimization algorithm. Moreover, the projection supports the reintroduction of solid material in void regions, a critical requirement for robust topology optimization. We present several numerical examples that demonstrate compliance minimization of fixed-volume, linearly elastic structures.

  11. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique

    Directory of Open Access Journals (Sweden)

    Manoja Kumar Behera

    2018-06-01

    Full Text Available Prediction of photovoltaic power is a significant research area that uses different forecasting techniques to mitigate the effects of the uncertainty of photovoltaic generation. Increasingly high penetration levels of photovoltaic (PV) generation arise in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and its penetration into the conventional power system introduce significant challenges to microgrid and smart grid energy management. It is therefore critical to forecast solar power/irradiance accurately in order to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. Here the model is associated with the incremental conductance (IC) maximum power point tracking (MPPT) technique, based on a proportional-integral (PI) controller, which is simulated in MATLAB/SIMULINK software. To train the single-layer feed-forward network (SLFN), the ELM algorithm is implemented, whose weights are updated by different particle swarm optimization (PSO) techniques, and their performance is compared with existing models such as the back propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network
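
    As a reminder of the tracking rule mentioned above, incremental-conductance MPPT compares dI/dV with -I/V to decide whether the voltage reference handed to the PI controller should be raised or lowered. The step size and operating points in this sketch are made-up values.

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_ref=0.5):
    """One update of the incremental-conductance MPPT rule (sketch):
    at the maximum power point dP/dV = 0, i.e. dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += dv_ref
        elif di < 0:
            v_ref -= dv_ref
    else:
        if di / dv > -i / v:        # left of the MPP: raise the voltage
            v_ref += dv_ref
        elif di / dv < -i / v:      # right of the MPP: lower the voltage
            v_ref -= dv_ref
    return v_ref                    # setpoint handed to the PI controller

# Illustrative use with made-up operating points.
print(inc_cond_step(v=30.0, i=7.8, v_prev=29.5, i_prev=8.0, v_ref=30.0))
```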

  12. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time current protection as an example; the reliability, selectivity, quick action and flexibility of the relay protection are selected as the four requirements for establishing the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and it is suitable for optimizing setting values in the relay protection of the whole power system.

  13. Topology optimization and lattice Boltzmann methods

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian Arlund

    This thesis demonstrates the application of the lattice Boltzmann method for topology optimization problems. Specifically, the focus is on problems in which time-dependent flow dynamics have significant impact on the performance of the devices to be optimized. The thesis introduces new topology... a discrete adjoint approach. To handle the complexity of the discrete adjoint approach more easily, a method for computing it based on automatic differentiation is introduced, which can be adapted to any lattice Boltzmann type method. For example, while it is derived in the context of an isothermal lattice... Boltzmann model, it is shown that the method can be easily extended to a thermal model as well. Finally, the predicted behavior of an optimized design is compared to the equivalent prediction from a commercial finite element solver. It is found that the weakly compressible nature of the lattice Boltzmann...

  14. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    Significant research has been conducted on collective communication operations, in particular on MPI broadcast, for distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of the extreme scale of future HPC platforms. It is based on a hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
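
    A minimal mpi4py sketch of the hierarchical idea follows: the flat communicator is split into groups, the payload is broadcast among group leaders first and then within each group. The group size, payload, and two-level structure are illustrative assumptions, not the optimized MPICH/Open MPI implementation evaluated in the paper.

```python
from mpi4py import MPI

# Split the flat communicator into groups of a tunable (made-up) size and
# perform the broadcast in two stages: leaders first, then within groups.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
group_size = 4

group = comm.Split(color=rank // group_size, key=rank)            # intra-group
leaders = comm.Split(color=0 if rank % group_size == 0 else 1, key=rank)

data = {"payload": list(range(8))} if rank == 0 else None

# Stage 1: the root broadcasts to the other group leaders.
if rank % group_size == 0:
    data = leaders.bcast(data, root=0)
# Stage 2: each leader broadcasts within its own group.
data = group.bcast(data, root=0)

print(rank, data)
```

    Run under an MPI launcher (for example `mpirun -n 16 python bcast_sketch.py`); every rank should print the same payload.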

  15. Optimizing How We Teach Research Methods

    Science.gov (United States)

    Cvancara, Kristen E.

    2017-01-01

    Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…

  16. Optimization of breeding methods when introducing multiple ...

    African Journals Online (AJOL)

    Optimization of breeding methods when introducing multiple resistance genes from American to Chinese wheat. JN Qi, X Zhang, C Yin, H Li, F Lin. Abstract. Stripe rust is one of the most destructive diseases of wheat worldwide. Growing resistant cultivars with resistance genes is the most effective method to control this ...

  17. A method optimization study for atomic absorption ...

    African Journals Online (AJOL)

    A sensitive, reliable and relatively fast method has been developed for the determination of total zinc in insulin by atomic absorption spectrophotometry. This study was designed to optimize the procedures of the existing methods. Spectrograms of both standard and sample solutions of zinc were recorded by measuring ...

  18. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    Science.gov (United States)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

    The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  19. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

  20. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design derivative of the system matrix K and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $c = u^{T} \tilde{K} u$, where $\tilde{K}$ is different from K... the well known Reuss lower bound. [1] Bendsøe, M.P.; Sigmund, O. 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, H. K.; W. Malalasekera 1995: An introduction to Computational Fluid Dynamics: the Finite Volume Method. London: Longman...

  1. Extremity exams optimization for computed radiography; Otimizacao de exames de extremidade para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana Luiza M.; Alves, Allan Felipe F.; Velo, Alexandre F.; Miranda, Jose Ricardo A., E-mail: analuiza@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2013-08-15

    Computed radiography (CR) has become the most widely used device for image acquisition since its introduction in the 1980s. Detection and early diagnosis through CR examinations are important for the successful treatment of diseases of the hand. However, the norms used to optimize these images are based on international protocols. It is therefore necessary to determine radiographic technique charts for the CR system that provide a safe medical diagnosis with doses as low as reasonably achievable. The objective of this work is to develop a homogeneous extremity phantom to be used in the calibration process of radiographic techniques. In constructing the simulator, a tissue-quantification algorithm was developed using Matlab®. In this process, the average thicknesses of bone and soft tissue in the hand region of an anthropomorphic simulator, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite), were quantified using a technique of mask application and removal of the Gaussian histogram corresponding to the tissues of interest. The homogeneous phantom was used to calibrate the x-ray beam. The techniques were implemented on a calibrated anthropomorphic hand phantom. The images were evaluated by radiology specialists using the VGA method. Skin entrance surface doses (SED) corresponding to each technique were estimated with the respective tube charges. The thicknesses of the simulator materials constituting the homogeneous phantom were determined in this study to be 19.01 mm of acrylic and 0.81 mm of aluminum. Better image quality with doses as low as reasonably achievable was obtained, decreasing dose and tube charge by around 53.35% and 37.78%, respectively, compared with those normally used in the routine clinical diagnostic radiology practice of HCFMB-UNESP. (author)

  2. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants, as well as their drawbacks. The authors show how the weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche

  3. Optimal boarding method for airline passengers

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab

    2008-02-01

    Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimum. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.

  4. Optimization methods applied to hybrid vehicle design

    Science.gov (United States)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  5. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Full Text Available Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques, a novel approach based on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing and three optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending the training data sets during practical testing and final tuning for the production release. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which is less prone to overfitting compared to related works. The accuracy achieved in this work is 86.89% for recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment, processing the Czech language, but the proposed methodology is general and can be applied to many different languages.

  6. Path optimization method for the sign problem

    Directory of Open Access Journals (Sweden)

    Ohnishi Akira

    2018-01-01

    Full Text Available We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When there are singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path, which is designed not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + i f(t) (f(t) ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
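
    The average-phase-factor objective can be illustrated on a simple one-variable integral. The sketch below assumes the toy action S(z) = z^2/2 + i*beta*z (chosen for illustration; it is not the model studied in the proceedings) and a constant imaginary shift f(t) = c; maximizing the average phase factor recovers the stationary shift c close to -beta.

```python
import numpy as np
from scipy.optimize import minimize_scalar

beta = 2.0                           # strength of the imaginary term (made up)
t = np.linspace(-8.0, 8.0, 4001)
dt = t[1] - t[0]

def avg_phase_factor(c):
    """Average phase factor for the shifted path z = t + i*c applied to the
    illustrative toy action S(z) = z**2/2 + i*beta*z (dz/dt = 1 here)."""
    z = t + 1j * c
    w = np.exp(-z ** 2 / 2.0 - 1j * beta * z)          # Boltzmann weight on the path
    return np.abs(np.sum(w) * dt) / (np.sum(np.abs(w)) * dt)

# Maximize the average phase factor over the constant shift c.
res = minimize_scalar(lambda c: -avg_phase_factor(c), bounds=(-5.0, 5.0),
                      method="bounded")
print("optimal shift c:", res.x, "average phase factor:", avg_phase_factor(res.x))
```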

  7. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train the extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility, so the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize the extreme learning machine. (c) The proposed approach has a great advantage over other previous hybrid models in terms of prediction accuracy.

  8. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro; Nochetto, Ricardo H.; Pauletti, Miguel S.; Verani, Marco

    2012-01-01

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  9. Topology optimization using the finite volume method

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Bendsøe, Martin P.; Sigmund, Ole

    2005-01-01

    in this presentation is focused on a prototype model for topology optimization of steady heat diffusion. This allows for a study of the basic ingredients in working with FVM methods when dealing with topology optimization problems. The FVM and FEM based formulations differ both in how one computes the design derivative of the system matrix $\mathbf{K}$ and in how one computes the discretized version of certain objective functions. Thus for a cost function for minimum dissipated energy (like minimum compliance for an elastic structure) one obtains an expression $c = \mathbf{u}^{\mathsf{T}} \tilde{\mathbf{K}} \mathbf{u}$... the arithmetic and harmonic average with the latter being the well known Reuss lower bound. [1] Bendsøe, MP and Sigmund, O 2004: Topology Optimization - Theory, Methods, and Applications. Berlin Heidelberg: Springer Verlag [2] Versteeg, HK and Malalasekera, W 1995: An introduction to Computational Fluid Dynamics...

  10. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro

    2012-01-16

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  11. Optimized method for manufacturing large aspheric surfaces

    Science.gov (United States)

    Zhou, Xusheng; Li, Shengyi; Dai, Yifan; Xie, Xuhui

    2007-12-01

    Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there are more pressing requirements for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received much attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and errors over the full frequency band are all controlled by this method. A smaller SSD depth can be obtained by using a low-hardness tool and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amendment model of the material removal function. For control of errors over the full frequency band, low-frequency errors can be corrected with the optimized material removal function, while medium-to-high-frequency errors are handled using a uniform removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

  12. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.

  13. A Gradient Taguchi Method for Engineering Optimization

    Science.gov (United States)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm consisting of the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method and vibration testing. For these problems, the proposed algorithm finds better elastic constants at less computational cost. Therefore, the proposed algorithm has good robustness and fast convergence speed compared to some hybrid genetic algorithms.

  14. Computerized method for rapid optimization of immunoassays

    International Nuclear Information System (INIS)

    Rousseau, F.; Forest, J.C.

    1990-01-01

    The authors have developed a one-step quantitative method for radioimmunoassay optimization. The method is rapid and requires only performing a series of saturation curves with different titres of the antiserum. After calculating the saturation point at several antiserum titres using the Scatchard plot, the authors have produced a table that predicts the main characteristics of the standard curve (Bo/T, Bo and T) that will prevail for any combination of antiserum titre and percentage of site saturation. The authors have developed a microcomputer program able to interpolate all the data needed to produce such a table from the results of the saturation curves. This program also permits prediction of the sensitivity of the assay under any experimental conditions if the antibody does not discriminate between the labeled and the unlabeled antigen. The authors have tested the accuracy of this optimization table with two in-house RIA systems: 17-β-estradiol and hLH. The results obtained experimentally, including sensitivity determinations, were concordant with those predicted from the optimization table. This method greatly accelerates and improves the process of optimizing radioimmunoassays [fr

  15. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of the classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  16. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of the classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  17. Optimizing image quality and dose for digital radiography of distal pediatric extremities using the contrast-to-noise ratio

    International Nuclear Information System (INIS)

    Hess, R.; Neitzel, U.

    2012-01-01

    Purpose: To investigate the influence of X-ray tube voltage and filtration on image quality in terms of contrast-to-noise ratio (CNR) and dose for digital radiography of distal pediatric extremities and to determine conditions that give the best balance of CNR and patient dose. Materials and Methods: In a phantom study simulating the absorption properties of distal extremities, the CNR and the related patient dose were determined as a function of tube voltage in the range 40 - 66 kV, both with and without additional filtration of 0.1 mm Cu/1 mm Al. The measured CNR was used as an indicator of image quality, while the mean absorbed dose (MAD) - determined by a combination of measurement and simulation - was used as an indicator of the patient dose. Results: The most favorable relation of CNR and dose was found for the lowest tube voltage investigated (40 kV) without additional filtration. Compared to a situation with 50 kV or 60 kV, the mean absorbed dose could be lowered by 24 % and 50 %, respectively, while keeping the image quality (CNR) at the same level. Conclusion: For digital radiography of distal pediatric extremities, further CNR and dose optimization appears to be possible using lower tube voltages. Further clinical investigation of the suggested parameters is necessary. (orig.)
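
    As a reminder of the figure of merit, a common definition takes the CNR as the absolute difference of the mean signals in a detail region and a background region divided by the background noise; the ROI geometry and synthetic image below are arbitrary, and the study's exact definition may differ.

```python
import numpy as np

def contrast_to_noise(image, roi_signal, roi_background):
    """CNR = |mean(signal ROI) - mean(background ROI)| / std(background ROI).
    ROIs are (row slice, column slice) pairs; a sketch of a common CNR
    figure of merit for phantom images."""
    sig = image[roi_signal]
    bg = image[roi_background]
    return np.abs(sig.mean() - bg.mean()) / bg.std()

# Illustrative use on a synthetic flat-field image with a contrast insert.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(256, 256))
img[100:140, 100:140] += 12.0                      # simulated contrast detail
cnr = contrast_to_noise(img, (slice(100, 140), slice(100, 140)),
                        (slice(10, 50), slice(10, 50)))
print(f"CNR = {cnr:.2f}")
```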

  18. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    International Nuclear Information System (INIS)

    Daily, Jeffrey A.

    2015-01-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore's law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or 'homologous') on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K

  19. Scalable Parallel Methods for Analyzing Metagenomics Data at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Daily, Jeffrey A. [Washington State Univ., Pullman, WA (United States)

    2015-05-01

    The field of bioinformatics and computational biology is currently experiencing a data revolution. The exciting prospect of making fundamental biological discoveries is fueling the rapid development and deployment of numerous cost-effective, high-throughput next-generation sequencing technologies. The result is that the DNA and protein sequence repositories are being bombarded with new sequence information. Databases are continuing to report a Moore’s law-like growth trajectory in their database sizes, roughly doubling every 18 months. In what seems to be a paradigm-shift, individual projects are now capable of generating billions of raw sequence data that need to be analyzed in the presence of already annotated sequence information. While it is clear that data-driven methods, such as sequencing homology detection, are becoming the mainstay in the field of computational life sciences, the algorithmic advancements essential for implementing complex data analytics at scale have mostly lagged behind. Sequence homology detection is central to a number of bioinformatics applications including genome sequencing and protein family characterization. Given millions of sequences, the goal is to identify all pairs of sequences that are highly similar (or “homologous”) on the basis of alignment criteria. While there are optimal alignment algorithms to compute pairwise homology, their deployment for large-scale is currently not feasible; instead, heuristic methods are used at the expense of quality. In this dissertation, we present the design and evaluation of a parallel implementation for conducting optimal homology detection on distributed memory supercomputers. Our approach uses a combination of techniques from asynchronous load balancing (viz. work stealing, dynamic task counters), data replication, and exact-matching filters to achieve homology detection at scale. Results for a collection of 2.56M sequences show parallel efficiencies of ~75-100% on up to 8K cores
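
    The exact-matching filters mentioned above are what keep optimal homology detection tractable at scale: cheap shared-k-mer lookups prune the quadratic number of candidate pairs before any expensive alignment is attempted. The toy sketch below illustrates only that filtering idea; it is not the dissertation's distributed, work-stealing implementation.

      from collections import defaultdict
      from itertools import combinations

      def kmer_prefilter(sequences, k=4):
          """Return candidate sequence pairs sharing at least one exact k-mer.

          Only the surviving pairs would be passed on to an (expensive) optimal
          pairwise alignment step.
          """
          index = defaultdict(set)
          for sid, seq in sequences.items():
              for i in range(len(seq) - k + 1):
                  index[seq[i:i + k]].add(sid)
          candidates = set()
          for ids in index.values():
              candidates.update(combinations(sorted(ids), 2))
          return candidates

      seqs = {
          "s1": "MKTAYIAKQR",
          "s2": "MKTAYLLKQR",
          "s3": "GGGGGGGGGG",
      }
      print(kmer_prefilter(seqs))  # {('s1', 's2')}: they share the exact 4-mer 'MKTA'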

  20. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Nataša Krejić

    2014-12-01

    Full Text Available This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
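
    As a concrete illustration of the stochastic-approximation idea surveyed above, the sketch below applies stochastic gradient descent with a diminishing step size to a noisy least-squares data-fitting problem; the data and the step-size schedule are illustrative choices, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Noisy linear data-fitting problem: minimize E[(a.x - b)^2] over x.
      n, dim = 1000, 3
      x_true = np.array([2.0, -1.0, 0.5])
      A = rng.normal(size=(n, dim))
      b = A @ x_true + 0.1 * rng.normal(size=n)

      x = np.zeros(dim)
      for t in range(1, 5001):
          i = rng.integers(n)                    # sample one observation
          g = 2.0 * (A[i] @ x - b[i]) * A[i]     # stochastic gradient estimate
          x -= (0.1 / np.sqrt(t)) * g            # diminishing (Robbins-Monro style) step

      print("estimate:", np.round(x, 3), " true:", x_true)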

  1. Methods for Distributed Optimal Energy Management

    DEFF Research Database (Denmark)

    Brehm, Robert

    The presented research deals with the fundamental underlying methods and concepts of how the growing number of distributed generation units based on renewable energy resources and distributed storage devices can be most efficiently integrated into the existing utility grid. In contrast to conventional centralised optimal energy flow management systems, the focus here is on how optimal energy management can be achieved in a decentralised, distributed architecture such as a multi-agent system. Distributed optimisation methods are introduced, targeting optimisation of energy flow in virtual......-consumption of renewable energy resources in low voltage grids. It can be shown that this method prevents mutual discharging of batteries and prevents peak loads, and a supervisory control instance can dictate the level of autarchy from the utility grid. Further, it is shown that the problem of optimal energy flow management...

  2. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  3. On the universal method to solve extremal problems

    NARCIS (Netherlands)

    J. Brinkhuis (Jan)

    2005-01-01

    Some applications of the theory of extremal problems to mathematics and economics are made more accessible to non-experts. 1. The following fundamental results are known to all users of mathematical techniques, such as economists, econometricians, engineers and ecologists: the fundamental

  4. Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2018-01-01

    Due to the restricted mathematical description of the uncertainty set, current two-stage robust optimization is usually over-conservative, which has drawn concerns from power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes the correlations into account and can adapt to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all possible scenarios if they hold in the selected extreme scenarios, which...
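
    The record does not spell out the selection algorithm, but one simple way to realize the idea of replacing an uncertainty set by a handful of data-driven extreme scenarios is to keep only the vertices of the convex hull of the historical samples: any constraint that is linear in the uncertainty and holds at every vertex then holds for every point inside the hull. The sketch below uses synthetic correlated forecast errors and is an assumption-laden illustration, not the paper's method.

      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(2)

      # Historical forecast-error samples for two correlated uncertain injections.
      cov = [[1.0, 0.6], [0.6, 1.0]]
      history = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=500)

      # Keep only the hull vertices as "extreme scenarios": linear constraints that
      # hold at every vertex hold for every point inside the hull.
      hull = ConvexHull(history)
      extreme_scenarios = history[hull.vertices]
      print(f"{len(extreme_scenarios)} extreme scenarios selected out of {len(history)}")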

  5. Layout optimization with algebraic multigrid methods

    Science.gov (United States)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods, based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.
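
    The core numerical kernel in relative placement is the solution of a large, sparse, symmetric positive definite linear system, and conjugate gradients is the classical baseline that AMG is compared against above. The sketch below solves such a system with SciPy's CG on a small stand-in connectivity matrix; the matrix is illustrative, not an actual netlist.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      # Sparse SPD system standing in for a quadratic placement problem
      # (a 1-D Laplacian-like connectivity matrix with a small anchoring shift).
      n = 10_000
      A = diags([-1.0, 2.1, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)

      x, info = cg(A, b)
      print("converged" if info == 0 else f"cg returned info={info}",
            " residual norm:", np.linalg.norm(A @ x - b))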

  6. Quality control methods in accelerometer data processing: identifying extreme counts.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available Accelerometers are designed to measure plausible human activity, however extremely high count values (EHCV have been recorded in large-scale studies. Using population data, we develop methodological principles for establishing an EHCV threshold, propose a threshold to define EHCV in the ActiGraph GT1M, determine occurrences of EHCV in a large-scale study, identify device-specific error values, and investigate the influence of varying EHCV thresholds on daily vigorous PA (VPA.We estimated quantiles to analyse the distribution of all accelerometer positive count values obtained from 9005 seven-year old children participating in the UK Millennium Cohort Study. A threshold to identify EHCV was derived by differentiating the quantile function. Data were screened for device-specific error count values and EHCV, and a sensitivity analysis conducted to compare daily VPA estimates using three approaches to accounting for EHCV.Using our proposed threshold of ≥ 11,715 counts/minute to identify EHCV, we found that only 0.7% of all non-zero counts measured in MCS children were EHCV; in 99.7% of these children, EHCV comprised < 1% of total non-zero counts. Only 11 MCS children (0.12% of sample returned accelerometers that contained negative counts; out of 237 such values, 211 counts were equal to -32,768 in one child. The medians of daily minutes spent in VPA obtained without excluding EHCV, and when using a higher threshold (≥19,442 counts/minute were, respectively, 6.2% and 4.6% higher than when using our threshold (6.5 minutes; p<0.0001.Quality control processes should be undertaken during accelerometer fieldwork and prior to analysing data to identify monitors recording error values and EHCV. The proposed threshold will improve the validity of VPA estimates in children's studies using the ActiGraph GT1M by ensuring only plausible data are analysed. These methods can be applied to define appropriate EHCV thresholds for different accelerometer models.

  7. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.
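
    For reference, the PSO velocity and position update at the heart of the approach described above can be written in a few lines. The sphere function below is a placeholder fitness; in the paper's setting it would be replaced by MSER-based stop-sign detection accuracy, and the paper's extra mechanisms (exponential velocity decay, the "execution best" particle, the attractive force term) are omitted.

      import numpy as np

      rng = np.random.default_rng(3)

      def sphere(x):
          """Toy fitness; MSER parameter tuning would plug in detection accuracy here."""
          return np.sum(x**2, axis=1)

      dim, swarm, iters = 5, 30, 200
      w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social weights

      pos = rng.uniform(-5, 5, size=(swarm, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), sphere(pos)
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos += vel
          val = sphere(pos)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], val[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best value found:", pbest_val.min())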

  8. Layout optimization using the homogenization method

    Science.gov (United States)

    Suzuki, Katsuyuki; Kikuchi, Noboru

    1993-01-01

    A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.

  9. Hydrothermal optimal power flow using continuation method

    International Nuclear Information System (INIS)

    Raoofat, M.; Seifi, H.

    2001-01-01

    The problem of optimal economic operation of hydrothermal electric power systems is solved using a powerful continuation method. While in the conventional approach fixed generation voltages are used to avoid convergence problems, in the proposed algorithm they are treated as variables so that better solutions can be obtained. The algorithm is tested on typical 5-bus and 17-bus New Zealand networks. Its capabilities and promising results are assessed

  10. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...A typical iteration can be partitioned so that ... where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  11. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Hai Shen

    2014-01-01

    Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though an individual organism dies, the species does not perish; furthermore, the species gains a stronger ability to adapt to the environment and achieves perfect evolution. LSO simulates the biological lifecycle process through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases to demonstrate optimal performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  12. Portfolio Optimization Based on Nonparametric Estimation Methods

    Directory of Open Access Journals (Sweden)

    Mahsa Ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is decision making about selecting an appropriate stock exchange for investing and selecting an optimal portfolio. This process is carried out through the assessment of risk and expected return. On the other hand, in the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as a risk measure. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a measure of risk in a nonparametric framework and, for a given expected return, offers the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
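
    For readers unfamiliar with the risk measure used above, the sketch below estimates the conditional value at risk of a fixed-weight portfolio nonparametrically from historical returns; the return series and weights are simulated placeholders rather than the Tehran Stock Exchange data, and the portfolio-selection step itself is omitted.

      import numpy as np

      def empirical_cvar(returns, alpha=0.95):
          """Conditional value at risk of a return series (loss = -return),
          estimated nonparametrically from historical data."""
          losses = -np.asarray(returns)
          var = np.quantile(losses, alpha)
          return losses[losses >= var].mean()

      rng = np.random.default_rng(4)
      monthly_returns = rng.normal(0.01, 0.05, size=(60, 3))   # 3 assets, 60 months
      weights = np.array([0.5, 0.3, 0.2])
      portfolio = monthly_returns @ weights

      print(f"expected return: {portfolio.mean():.4f}")
      print(f"95% CVaR:        {empirical_cvar(portfolio):.4f}")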

  13. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    example, this research proved the sustainability of the proposed integrated optimization parameters of transport systems. This approach can be applied not only to MTS, but also to other transport systems. Originality. The basis of the presented complex optimization of transport is a new system of universal scientific methods and approaches that ensure high accuracy and reliability of calculations when simulating transport systems and transport networks, taking into account the dynamics of their development. Practical value. The development of the theoretical and technological bases of complex transport optimization makes it possible to create a scientific tool that supports automated simulation and calculation of the technical and economic structure and operating technology of different transport objects, including their infrastructure.

  14. Optimal management strategies in variable environments: Stochastic optimal control methods

    Science.gov (United States)

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
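
    The optimization model described above encodes a finite-state, finite-action, infinite-horizon discounted Markov decision process, for which value iteration is a standard solver. In the sketch below the transition probabilities and harvest rewards are invented placeholders, not the simulation results for the three shrub species.

      import numpy as np

      # Tiny discounted MDP sketch: 3 range-condition states, 2 actions
      # (0 = rest, 1 = defoliate).  P[a, s, s'] and R[a, s] are illustrative.
      P = np.array([
          [[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.0, 0.3, 0.7]],   # rest
          [[0.6, 0.4, 0.0], [0.1, 0.6, 0.3], [0.0, 0.1, 0.9]],   # defoliate
      ])
      R = np.array([
          [0.0, 0.0, 0.0],     # resting yields no harvest
          [1.0, 3.0, 5.0],     # harvest yield grows with plant vigor
      ])
      gamma = 0.95

      V = np.zeros(3)
      for _ in range(500):                       # value iteration
          Q = R + gamma * P @ V                  # Q[a, s]
          V = Q.max(axis=0)

      policy = Q.argmax(axis=0)
      print("optimal value per state:", np.round(V, 2))
      print("optimal action per state (0=rest, 1=defoliate):", policy)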

  15. Global optimization methods for engineering design

    Science.gov (United States)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for the Problem P. Necessary and sufficient conditions are available for local optimality. However, global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution. This way the computational burden is reduced somewhat. It is concluded that zooming algorithm for global optimizations appears to be a good alternative to stochastic methods. More testing is needed; a general, robust, and efficient local minimizer is required. IDESIGN was used in all numerical calculations which is based on a sequential quadratic programming algorithm, and since feasible set keeps on shrinking, a good algorithm to find an initial feasible point is required. Such algorithms need to be developed and evaluated.

  16. Optimization and photomodification of extremely broadband optical response of plasmonic core-shell obscurants.

    Science.gov (United States)

    de Silva, Vashista C; Nyga, Piotr; Drachev, Vladimir P

    2016-12-15

    Plasmonic resonances of metallic shells depend on their nanostructure and the geometry of the core, which can be optimized for broadband extinction normalized by mass. Fractal nanostructures can provide a broadband extinction. They also allow laser photoburning of holes in the extinction spectra and, consequently, windows of transparency in a controlled manner. The studied core-shell microparticles, synthesized using colloidal chemistry, consist of gold fractal nanostructures grown on precipitated calcium carbonate (PCC) microparticles or silica (SiO2) microspheres. The optimization includes different core sizes and shapes, and shell nanostructures. It shows that the rich surface of the PCC flakes is the best core for the fractal shells, providing the highest mass-normalized extinction over an extremely broad spectral range. A mass-normalized extinction cross section of up to 3 m2/g has been demonstrated in the broad spectral range from the visible to the mid-infrared. Essentially, the broadband response is a characteristic feature of each core-shell microparticle, in contrast to a combination of several structures resonant at different wavelengths, for example nanorods with different aspect ratios. Photomodification at an IR wavelength creates the window of transparency on the longer-wavelength side. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.

  18. Hybrid intelligent optimization methods for engineering problems

    Science.gov (United States)

    Pehlivanoglu, Yasin Volkan

    The purpose of optimization is to obtain the best solution under certain conditions. There are numerous optimization methods because different problems need different solution methodologies; therefore, it is difficult to construct patterns. Also mathematical modeling of a natural phenomenon is almost based on differentials. Differential equations are constructed with relative increments among the factors related to yield. Therefore, the gradients of these increments are essential to search the yield space. However, the landscape of yield is not a simple one and mostly multi-modal. Another issue is differentiability. Engineering design problems are usually nonlinear and they sometimes exhibit discontinuous derivatives for the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) algorithms are popular, non-gradient based algorithms. Both are population-based search algorithms and have multiple points for initiation. A significant difference from a gradient-based method is the nature of the search methodologies. For example, randomness is essential for the search in GA or PSO. Hence, they are also called stochastic optimization methods. These algorithms are simple, robust, and have high fidelity. However, they suffer from similar defects, such as, premature convergence, less accuracy, or large computational time. The premature convergence is sometimes inevitable due to the lack of diversity. As the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators to provide the necessary variety within a population. After having a close scrutiny of the diversity concept based on qualification and

  19. The methods and applications of optimization of radiation protection

    International Nuclear Information System (INIS)

    Liu Hua

    2007-01-01

    Optimization is the most important principle in radiation protection. The present article briefs the concept and up-to-date progress of optimization of protection, introduces some methods used in current optimization analysis, and presents various applications of optimization of protection. The author emphasizes that optimization of protection is a forward-looking iterative process aimed at preventing exposures before they occur. (author)

  20. Extremely Efficient Design of Organic Thin Film Solar Cells via Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Mine Kaya

    2017-11-01

    Full Text Available Design of efficient thin film photovoltaic (PV) cells requires optical power absorption to be computed inside a nano-scale structure of photovoltaic, dielectric and plasmonic materials. Calculating power absorption requires Maxwell's electromagnetic equations, which are solved using numerical methods such as finite difference time domain (FDTD). The computational cost of thin film PV cell design and optimization is therefore cumbersome, due to successive FDTD simulations. This cost can be reduced using a surrogate-based optimization procedure. In this study, we deploy neural networks (NNs) to model optical absorption in organic PV structures. We use the corresponding surrogate-based optimization procedure to maximize light trapping inside thin film organic cells infused with metallic particles. Metallic particles are known to induce plasmonic effects at the metal–semiconductor interface, thus increasing absorption. However, a rigorous design procedure is required to achieve the best performance within known design guidelines. As a result of using NNs to model thin film solar absorption, the required time to complete the optimization is decreased by more than five times. The obtained NN model is found to be very reliable. The optimization procedure results in absorption enhancement greater than 200%. Furthermore, we demonstrate that once a reliable surrogate model such as the developed NN is available, it can be used for alternative analyses on the proposed design, such as uncertainty analysis (e.g., fabrication error).
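
    The surrogate-based workflow described above, namely running a limited number of expensive simulations, training a neural network on the results, and then optimizing over the cheap surrogate, can be sketched as follows. The closed-form "absorption" function below is a stand-in for an FDTD solver, and the network size and sampling budget are illustrative assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)

      # Stand-in for the expensive FDTD solver: absorption as a function of two
      # design variables (e.g., particle radius and spacing), purely illustrative.
      def expensive_absorption(x):
          return np.exp(-((x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2) / 0.05)

      # 1) Run a small number of "simulations" to build training data.
      X_train = rng.uniform(0, 1, size=(200, 2))
      y_train = expensive_absorption(X_train)

      # 2) Fit the neural-network surrogate.
      surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
      surrogate.fit(X_train, y_train)

      # 3) Optimize over the cheap surrogate instead of the solver.
      candidates = rng.uniform(0, 1, size=(100_000, 2))
      best = candidates[np.argmax(surrogate.predict(candidates))]
      print("surrogate optimum near:", np.round(best, 3), "(true optimum at [0.3, 0.7])")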

  1. Method for the protection of extreme ultraviolet lithography optics

    Science.gov (United States)

    Grunow, Philip A.; Clift, Wayne M.; Klebanoff, Leonard E.

    2010-06-22

    A coating for the protection of optical surfaces exposed to a high energy erosive plasma. A gas that can be decomposed by the high energy plasma, such as the xenon plasma used for extreme ultraviolet lithography (EUVL), is injected into the EUVL machine. The decomposition products coat the optical surfaces with a protective coating maintained at less than about 100 Å thick by periodic injections of the gas. Gases that can be used include hydrocarbon gases, particularly methane, PH3 and H2S. The use of PH3 and H2S is particularly advantageous since films of the plasma-induced decomposition products S and P cannot grow to greater than 10 Å thick in a vacuum atmosphere such as found in an EUVL machine.

  2. Circular SAR Optimization Imaging Method of Buildings

    Directory of Open Access Journals (Sweden)

    Wang Jian-feng

    2015-12-01

    Full Text Available The Circular Synthetic Aperture Radar (CSAR can obtain the entire scattering properties of targets because of its great ability of 360° observation. In this study, an optimal orientation of the CSAR imaging algorithm of buildings is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering modes and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results. On comparison, the optimal azimuth coherent accumulation angle of CSAR imaging of buildings is obtained. Practically, the scattering directions of buildings are unknown; therefore, we divide the 360° echo of CSAR into many overlapped and few angle echoes corresponding to the sub-aperture and then perform an imaging procedure on each sub-aperture. Sub-aperture imaging results are applied to obtain the all-around image using incoherent fusion techniques. The polarimetry decomposition method is used to decompose the all-around image and further retrieve the edge information of buildings successfully. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.

  3. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curriculum activities must be joined by every student in Malaysia, and these activities bring a lot of benefits to the students. By joining these activities, the students can learn about time management and they can develop many useful skills. This project focuses on the selection of co-curriculum activities in a secondary school using two optimization methods, the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the 3 chosen criteria, which are soft skills, interesting activities and performances. The weights were calculated by using AHP and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed by using LINGO Software version 15.0. There are two priorities to be considered. The first priority, which is to minimize the budget for the activities, is achieved since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, which is to select the co-curriculum activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for the activities selection problem.
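
    The AHP weighting step mentioned above reduces to computing the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio. The judgments in the sketch below are made-up placeholders for the three criteria, not the questionnaire results from the study.

      import numpy as np

      # Pairwise comparison matrix for three criteria (soft skills, interesting
      # activities, performance); the judgments are illustrative only.
      A = np.array([
          [1.0, 3.0, 5.0],
          [1/3, 1.0, 3.0],
          [1/5, 1/3, 1.0],
      ])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                   # AHP priority weights

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
      cr = ci / 0.58                                 # Saaty's random index for n = 3
      print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))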

  4. Optimization of offshore wind turbine support structures using analytical gradient-based method

    OpenAIRE

    Chew, Kok Hon; Tai, Kang; Ng, E.Y.K.; Muskulus, Michael

    2015-01-01

    Design optimization of the offshore wind turbine support structure is an expensive task; due to the highly-constrained, non-convex and non-linear nature of the design problem. This report presents an analytical gradient-based method to solve this problem in an efficient and effective way. The design sensitivities of the objective and constraint functions are evaluated analytically while the optimization of the structure is performed, subject to sizing, eigenfrequency, extreme load an...

  5. Verifying a computational method for predicting extreme ground motion

    Science.gov (United States)

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  6. On Best Practice Optimization Methods in R

    Directory of Open Access Journals (Sweden)

    John C. Nash

    2014-09-01

    Full Text Available R (R Core Team 2014 provides a powerful and flexible system for statistical computations. It has a default-install set of functionality that can be expanded by the use of several thousand add-in packages as well as user-written scripts. While R is itself a programming language, it has proven relatively easy to incorporate programs in other languages, particularly Fortran and C. Success, however, can lead to its own costs: • Users face a confusion of choice when trying to select packages in approaching a problem. • A need to maintain workable examples using early methods may mean some tools offered as a default may be dated. • In an open-source project like R, how to decide what tools offer "best practice" choices, and how to implement such a policy, present a serious challenge. We discuss these issues with reference to the tools in R for nonlinear parameter estimation (NLPE and optimization, though for the present article `optimization` will be limited to function minimization of essentially smooth functions with at most bounds constraints on the parameters. We will abbreviate this class of problems as NLPE. We believe that the concepts proposed are transferable to other classes of problems seen by R users.

  7. Technique optimization of orbital atherectomy in calcified peripheral lesions of the lower extremities: the CONFIRM series, a prospective multicenter registry.

    Science.gov (United States)

    Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L

    2014-01-01

    The purpose of CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA with unique versatility in modifying calcific lesions above and below-the-knee. Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88-35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.

  8. Inter-comparison of statistical downscaling methods for projection of extreme flow indices across Europe

    DEFF Research Database (Denmark)

    Hundecha, Yeshewatesfa; Sunyer Pinya, Maria Antonia; Lawrence, Deborah

    2016-01-01

    The effect of methods of statistical downscaling of daily precipitation on changes in extreme flow indices under a plausible future climate change scenario was investigated in 11 catchments selected from 9 countries in different parts of Europe. The catchments vary from 67 to 6171km2 in size...... catchments to simulate daily runoff. A set of flood indices were derived from daily flows and their changes have been evaluated by comparing their values derived from simulations corresponding to the current and future climate. Most of the implemented downscaling methods project an increase in the extreme...... flow indices in most of the catchments. The catchments where the extremes are expected to increase have a rainfall-dominated flood regime. In these catchments, the downscaling methods also project an increase in the extreme precipitation in the seasons when the extreme flows occur. In catchments where...

  9. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is designed to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
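
    For orientation, the mean sojourn times of the two queue stages named above follow from standard queueing formulas (the M/M/1 time in system and the Pollaczek-Khinchine mean wait for M/D/1); the arrival and service rates below are illustrative values, not parameters from the Shanghai example.

      # Mean sojourn times for the two queue stages, assuming illustrative
      # arrival rate lam and service rate mu (lam < mu for stability).
      lam, mu = 0.8, 1.0
      rho = lam / mu

      w_mm1 = 1.0 / (mu - lam)                       # M/M/1 mean time in system
      wq_md1 = rho / (2.0 * mu * (1.0 - rho))        # M/D/1 mean wait (Pollaczek-Khinchine)
      w_md1 = wq_md1 + 1.0 / mu                      # add the deterministic service time

      print(f"M/M/1 sojourn time: {w_mm1:.2f}")
      print(f"M/D/1 sojourn time: {w_md1:.2f}")
      print(f"two-stage total:    {w_mm1 + w_md1:.2f}")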

  10. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

    Power systems are vulnerable to extreme contingencies (like an outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages are seen within minutes following a major contingency, which may not be captured exclusively using dynamic simulation of the power system. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes is to mimic operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting corrective actions that utility operators can take during a major contingency and thus automates the hybrid dynamic-steady state cascading outage process. The improved DCAT framework with OPF based corrective actions is demonstrated on the IEEE 300 bus test system.

  11. Structural Optimization Design of Horizontal-Axis Wind Turbine Blades Using a Particle Swarm Optimization Algorithm and Finite Element Method

    Directory of Open Access Journals (Sweden)

    Pan Pan

    2012-11-01

    Full Text Available This paper presents an optimization method for the structural design of horizontal-axis wind turbine (HAWT) blades based on the particle swarm optimization algorithm (PSO) combined with the finite element method (FEM). The main goal is to create an optimization tool and to demonstrate the potential improvements that could be brought to the structural design of HAWT blades. A multi-criteria constrained optimization design model pursuing the minimum mass of the blade is developed. The number and location of layers in the spar cap and the positions of the shear webs are employed as the design variables, while the strain limit, blade/tower clearance limit and vibration limit are taken into account as the constraint conditions. The optimization of the design of a commercial 1.5 MW HAWT blade is carried out by combining the above method and design model under ultimate (extreme) flap-wise load conditions. The optimization results are described and compared with the original design. It shows that the method used in this study is efficient and produces improved designs.

  12. Numerical methods and optimization a consumer guide

    CERN Document Server

    Walter, Éric

    2014-01-01

    Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles, and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. Numerical Methods and Optimization – A Consumer Guide presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to: discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems; understand the principles behind recognized algorithms used in state-of-the-art numerical software; learn the advantag...

  13. Methods for the design and optimization of shaped tokamaks

    International Nuclear Information System (INIS)

    Haney, S.W.

    1988-05-01

    Two major questions associated with the design and optimization of shaped tokamaks are considered. How do physics and engineering constraints affect the design of shaped tokamaks? How can the process of designing shaped tokamaks be improved? The first question is addressed with the aid of a completely analytical procedure for optimizing the design of a resistive-magnet tokamak reactor. It is shown that physics constraints---particularly the MHD beta limits and the Murakami density limit---have an enormous, and sometimes, unexpected effect on the final design. The second question is addressed through the development of a series of computer models for calculating plasma equilibria, estimating poloidal field coil currents, and analyzing axisymmetric MHD stability in the presence of resistive conductors and feedback. The models offer potential advantages over conventional methods since they are characterized by extremely fast computer execution times, simplicity, and robustness. Furthermore, evidence is presented that suggests that very little loss of accuracy is required to achieve these desirable features. 94 refs., 66 figs., 14 tabs

  14. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
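
    A KELM with an RBF kernel is closely related to kernel ridge regression, and its two sensitive hyperparameters can be tuned with a simplex search much as described above. The sketch below uses scikit-learn's KernelRidge and SciPy's Nelder-Mead on synthetic calibration-style data; the coupled simulated annealing stage and the real sensor data are omitted, and everything shown is an illustrative assumption.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)

      # Synthetic calibration data: inputs stand in for (raw output, temperature,
      # static pressure); the target stands in for true differential pressure.
      X = rng.uniform(-1, 1, size=(300, 3))
      y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=300)

      def neg_cv_score(log_params):
          """Objective for the simplex search: negative CV R^2 of a kernel model."""
          alpha, gamma = np.exp(log_params)          # keep both parameters positive
          model = KernelRidge(kernel="rbf", alpha=alpha, gamma=gamma)
          return -cross_val_score(model, X, y, cv=5).mean()

      res = minimize(neg_cv_score, x0=np.log([1e-2, 1.0]), method="Nelder-Mead")
      print("best (alpha, gamma):", np.round(np.exp(res.x), 4), " CV R^2:", -res.fun)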

  15. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    International Nuclear Information System (INIS)

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20

  16. Optimization of bioethanol production from carbohydrate rich wastes by extreme thermophilic microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Tomas, A.F.

    2013-05-15

    Second-generation bioethanol is produced from residual biomass such as industrial and municipal waste or agricultural and forestry residues. However, Saccharomyces cerevisiae, the microorganism currently used in industrial first-generation bioethanol production, is not capable of converting all of the carbohydrates present in these complex substrates into ethanol. This is in particular true for pentose sugars such as xylose, generally the second major sugar present in lignocellulosic biomass. The transition of second-generation bioethanol production from pilot to industrial scale is hindered by the recalcitrance of the lignocellulosic biomass, and by the lack of a microorganism capable of converting this feedstock to bioethanol with high yield, efficiency and productivity. In this study, a new extreme thermophilic ethanologenic bacterium was isolated from household waste. When assessed for ethanol production from xylose, an ethanol yield of 1.39 mol mol-1 xylose was obtained. This represents 83 % of the theoretical ethanol yield from xylose and is to date the highest reported value for a native, not genetically modified microorganism. The bacterium was identified as a new member of the genus Thermoanaerobacter, named Thermoanaerobacter pentosaceus and was subsequently used to investigate some of the factors that influence secondgeneration bioethanol production, such as initial substrate concentration and sensitivity to inhibitors. Furthermore, T. pentosaceus was used to develop and optimize bioethanol production from lignocellulosic biomass using a range of different approaches, including combination with other microorganisms and immobilization of the cells. T. pentosaceus could produce ethanol from a wide range of substrates without the addition of nutrients such as yeast extract and vitamins to the medium. It was initially sensitive to concentrations of 10 g l-1 of xylose and 1 % (v/v) ethanol. However, long term repeated batch cultivation showed that the strain

  17. Models and Methods for Free Material Optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot

    Free Material Optimization (FMO) is a powerful approach for structural optimization in which the design parametrization allows the entire elastic stiffness tensor to vary freely at each point of the design domain. The only requirement imposed on the stiffness tensor lies on its mild necessary...

  18. Adjoint Optimization of a Wing Using the CSRT Method

    NARCIS (Netherlands)

    Straathof, M.H.; Van Tooren, M.J.L.

    2011-01-01

    This paper will demonstrate the potential of the Class-Shape-Refinement-Transformation (CSRT) method for aerodynamically optimizing three-dimensional surfaces. The CSRT method was coupled to an in-house Euler solver and this combination was used in an optimization framework to optimize the ONERA M6

  19. A new optimal seam method for seamless image stitching

    Science.gov (United States)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method, which aims to stitch images with overlapping areas more seamlessly, has been proposed. Considering that the traditional gradient-domain optimal seam method and the fusion algorithm result in poor color difference measurement and long processing time, respectively, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed methods exhibit good performance in eliminating the stitching seam compared with the traditional gradient optimal seam, and high efficiency compared with the multi-band blending algorithm.
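
    Independent of the particular energy function, an optimal seam is usually found by dynamic programming over an energy map, as in the generic sketch below; the random energy map stands in for the HSV-based energy described above and is not the paper's formulation.

      import numpy as np

      def optimal_seam(energy):
          """Minimum-cost vertical seam through an energy map via dynamic programming."""
          h, w = energy.shape
          cost = energy.copy()
          for r in range(1, h):
              left = np.r_[np.inf, cost[r - 1, :-1]]
              up = cost[r - 1]
              right = np.r_[cost[r - 1, 1:], np.inf]
              cost[r] += np.minimum(np.minimum(left, up), right)
          # Backtrack from the cheapest bottom-row pixel.
          seam = [int(np.argmin(cost[-1]))]
          for r in range(h - 2, -1, -1):
              c = seam[-1]
              lo, hi = max(c - 1, 0), min(c + 2, w)
              seam.append(lo + int(np.argmin(cost[r, lo:hi])))
          return seam[::-1]          # one column index per row, top to bottom

      energy = np.random.default_rng(7).random((6, 8))
      print(optimal_seam(energy))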

  20. A Review of Design Optimization Methods for Electrical Machines

    Directory of Open Access Journals (Sweden)

    Gang Lei

    2017-11-01

    Full Text Available Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physically and technologically but also environmentally. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model-based and multi-level optimization methods. In addition, two promising and challenging topics in both the academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design-for-six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including a smart design optimization method for the future intelligent design and production of electrical machines.

  1. Method of optimization onboard communication network

    Science.gov (United States)

    Platoshin, G. A.; Selvesuk, N. I.; Semenov, M. E.; Novikov, V. M.

    2018-02-01

    In this article, optimization levels for an onboard communication network (OCN) are proposed. We define the basic parameters necessary for evaluating and comparing modern OCNs and identify a set of initial data for modeling the OCN. We also propose a mathematical technique for implementing the OCN optimization procedure, based on the principles and ideas of binary programming. It is shown that the binary programming technique yields an inherently optimal solution for avionics tasks. An example of applying the proposed approach to the problem of device assignment in an OCN is considered.

  2. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine format to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed emergent CT findings at various levels of compression on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates that MPEG-4 technology is a reasonable means of achieving such compression.

  3. A method for aggregating external operating conditions in multi-generation system optimization models

    DEFF Research Database (Denmark)

    Lythcke-Jørgensen, Christoffer Ernst; Münster, Marie; Ensinas, Adriano Viana

    2016-01-01

    This paper presents a novel, simple method for reducing external operating condition datasets to be used in multi-generation system optimization models. The method, called the Characteristic Operating Pattern (CHOP) method, is a visually-based aggregation method that clusters reference data based...... on parameter values rather than time of occurrence, thereby preserving important information on short-term relations between the relevant operating parameters. This is opposed to commonly used methods where data are averaged over chronological periods (months or years), and extreme conditions are hidden...... in the averaged values. The CHOP method is tested in a case study where the operation of a fictive Danish combined heat and power plant is optimized over a historical 5-year period. The optimization model is solved using the full external operating condition dataset, a reduced dataset obtained using the CHOP...
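
    The CHOP idea of clustering on parameter values rather than on time of occurrence can be illustrated with a small sketch (Python/NumPy; the variable names and the quantile-binning rule are illustrative assumptions, not the published procedure): hourly data for two operating parameters are grouped into a few characteristic operating points, each weighted by the number of hours it represents.

        import numpy as np

        def chop_aggregate(price, heat_demand, n_bins=4):
            # Bin on parameter values (not on months/years) so extremes are not averaged away.
            p_edges = np.quantile(price, np.linspace(0, 1, n_bins + 1))
            d_edges = np.quantile(heat_demand, np.linspace(0, 1, n_bins + 1))
            p_idx = np.clip(np.digitize(price, p_edges[1:-1]), 0, n_bins - 1)
            d_idx = np.clip(np.digitize(heat_demand, d_edges[1:-1]), 0, n_bins - 1)
            points = []
            for i in range(n_bins):
                for j in range(n_bins):
                    mask = (p_idx == i) & (d_idx == j)
                    if mask.any():
                        # One characteristic operating point per populated cell, weighted in hours.
                        points.append((price[mask].mean(), heat_demand[mask].mean(), int(mask.sum())))
            return points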

  4. A simple method to optimize HMC performance

    CERN Document Server

    Bussone, Andrea; Drach, Vincent; Hansen, Martin; Hietanen, Ari; Rantaharju, Jarno; Pica, Claudio

    2016-01-01

    We present a practical strategy to optimize a set of Hybrid Monte Carlo parameters in simulations of QCD and QCD-like theories. We specialize to the case of mass-preconditioning, with multiple time-step Omelyan integrators. Starting from properties of the shadow Hamiltonian we show how the optimal setup for the integrator can be chosen once the forces and their variances are measured, assuming that those only depend on the mass-preconditioning parameter.

  5. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization-based investigation of complex systems rely on developing and updating mathematical models of those systems by solving the corresponding inverse problems. The input data required for a solution are obtained from the analysis of experimentally determined characteristics of a system or process. The sought quantities are causal characteristics, such as the coefficients of the equations of the mathematical model of the object, boundary conditions, etc. The optimization approach is one of the main ways to solve such inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnostics as well as in optimal control, computed tomography, image restoration, neural network training and other intelligent technologies. The increasingly complicated systems studied over the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extremal problems significantly more difficult. In many practical applications the problem conditions restrict modeling; as a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating derivatives is difficult and unreliable, which leads to the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly limited by their dependence on the dimension of the extremal problem. When the number of variables is large, stochastic global optimization algorithms are used; however, their high computational cost restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search

  6. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
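
    For readers unfamiliar with the algorithm itself, a minimal generic Harmony search loop is sketched below (Python; a plain continuous minimizer using the HMCR, PAR and BW parameters named in the abstract, not the topology-specific compliance formulation with filtering and harmony-rate updates).

        import random

        def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
            # Generic Harmony search minimizer (illustrative only).
            memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
            scores = [objective(x) for x in memory]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if random.random() < hmcr:                 # take a value from harmony memory
                        value = random.choice(memory)[d]
                        if random.random() < par:              # pitch adjustment within bandwidth
                            value += random.uniform(-bw, bw) * (hi - lo)
                    else:                                      # random re-initialization
                        value = random.uniform(lo, hi)
                    new.append(min(max(value, lo), hi))
                worst = max(range(hms), key=lambda i: scores[i])
                f = objective(new)
                if f < scores[worst]:                          # replace the worst harmony
                    memory[worst], scores[worst] = new, f
            best = min(range(hms), key=lambda i: scores[i])
            return memory[best], scores[best]

        # Example: minimize a simple sphere function in three variables.
        best_x, best_f = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)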

  7. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  8. Optimization of an on-board imaging system for extremely rapid radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Maxim, Peter G.; Loo, Billy W. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2015-11-15

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  9. Optimization of an on-board imaging system for extremely rapid radiation therapy

    International Nuclear Information System (INIS)

    Cherry Kemmerling, Erica M.; Wu, Meng; Yang, He; Fahrig, Rebecca; Maxim, Peter G.; Loo, Billy W.

    2015-01-01

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  10. Review of design optimization methods for turbomachinery aerodynamics

    Science.gov (United States)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and “greener” but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization for solving real-world aerodynamic problems, especially for compressors and turbines. This review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods, (2) stochastic optimization combined with blade parameterization methods and design-of-experiment methods, (3) gradient-based optimization methods for compressors and turbines, and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future optimization of turbomachinery designs.

  11. A Method for Determining Optimal Residential Energy Efficiency Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2011-04-01

    This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.
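
    A toy sketch of the underlying optimization concept is given below (Python; the measures, savings and costs are hypothetical numbers, and a real study would replace the additive model with building energy simulations): enumerate candidate retrofit packages and keep the least-cost package for each achievable level of energy savings.

        from itertools import combinations

        # Hypothetical measures: (name, annual energy saving in kWh, equivalent annual cost in $).
        MEASURES = [("attic insulation", 1800, 120), ("air sealing", 900, 60),
                    ("heat pump", 4200, 520), ("low-e windows", 1500, 310)]

        def package_metrics(package):
            # Crude additive model standing in for a building energy simulation.
            return sum(m[1] for m in package), sum(m[2] for m in package)

        def optimal_packages():
            # Keep only non-dominated packages: no other package saves more energy for less cost.
            candidates = []
            for r in range(1, len(MEASURES) + 1):
                for pack in combinations(MEASURES, r):
                    saving, cost = package_metrics(pack)
                    candidates.append((saving, cost, [m[0] for m in pack]))
            pareto, best_cost = [], float("inf")
            for saving, cost, names in sorted(candidates, key=lambda t: (-t[0], t[1])):
                if cost < best_cost:
                    pareto.append((saving, cost, names))
                    best_cost = cost
            return pareto[::-1]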

  12. An Optimization Method of Passenger Assignment for Customized Bus

    OpenAIRE

    Yang Cao; Jian Wang

    2017-01-01

    This study proposes an optimization method of passenger assignment on customized buses (CB). Our proposed method guarantees benefits to passengers by balancing the elements of travel time, waiting time, delay, and economic cost. The optimization problem was solved using a Branch and Bound (B&B) algorithm based on the shortest path for the selected stations. A simulation-based evaluation of the proposed optimization method was conducted. We find that a CB service can save 38.33% in average tra...

  13. Optimal reload and depletion method for pressurized water reactors

    International Nuclear Information System (INIS)

    Ahn, D.H.

    1984-01-01

    A new method has been developed to automatically reload and deplete a PWR so that both the enriched inventory requirements during the reactor cycle and the cost of reloading the core are minimized. This is achieved through four stepwise optimization calculations: 1) determination of the minimum fuel requirement for an equivalent three-region core model, 2) optimal selection and allocation of fuel assemblies for each of the three regions to minimize the cost of the fresh reload fuel, 3) optimal placement of fuel assemblies to conserve regionwise optimal conditions, and 4) optimal control through poison management to deplete individual fuel assemblies to maximize EOC k/sub eff/. Optimizing the fuel cost of reloading and depleting a PWR reactor cycle requires solutions to two separate optimization calculations. One of these minimizes the enriched fuel inventory in the core by optimizing the EOC k/sub eff/. The other minimizes the cost of the fresh reload fuel. Both of these optimization calculations have now been combined to provide a new method for performing an automatic optimal reload of PWRs. The new method differs from previous methods in that the optimization process performs all tasks required to reload and deplete a PWR.

  14. Augmented Lagrangian Method For Discretized Optimal Control ...

    African Journals Online (AJOL)

    In this paper, we are concerned with a one-dimensional, time-invariant optimal control problem whose objective function is quadratic and whose dynamical system is a differential equation with an initial condition. Since most real-life problems are nonlinear and their analytical solutions are not readily available, we resolve to ...

  15. METHOD FOR OPTIMIZING THE ENERGY OF PUMPS

    NARCIS (Netherlands)

    Skovmose Kallesøe, Carsten; De Persis, Claudio

    2013-01-01

    The device for energy optimization in the operation of several speed-controlled centrifugal pumps in a hydraulic installation begins by determining which pumps are assigned directly to a consumer as pilot pumps and which pumps are hydraulically connected in series upstream of

  16. State space Newton's method for topology optimization

    DEFF Research Database (Denmark)

    Evgrafov, Anton

    2014-01-01

    0/1-type constraints on the design field through penalties in many topology optimization approaches. We test the algorithm on the benchmark problems of dissipated power minimization for Stokes flows, and in all cases the algorithm outperforms the traditional first order reduced space/nested approaches...

  17. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Y.; Borland, Michael

    2017-06-25

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  18. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  19. Choice of optimal working fluid for binary power plants at extremely low temperature brine

    Science.gov (United States)

    Tomarov, G. V.; Shipkov, A. A.; Sorokina, E. V.

    2016-12-01

    Problems of geothermal energy development based on binary power plants utilizing low-potential geothermal resources are considered. It is shown that one possible way of increasing the efficiency of heat utilization of geothermal brine over a wide temperature range is the use of multistage power systems with series-connected binary power plants based on incremental primary energy conversion. Some practically significant results of design-analytical investigations of the physicochemical properties of various organic substances and their influence on the main parameters of the flowsheet and on the technical and operational characteristics of heat-mechanical and heat-exchange equipment for a binary power plant operating on extremely low temperature geothermal brine (70°C) are presented. The calculated values of specific geothermal brine flow rate, net capacity, and other operating characteristics of 2.5 MW binary power plants using various organic substances are of practical interest. It is shown that the working fluid selection significantly influences the flowsheet parameters and the operational characteristics of the binary power plant, and that selecting the working fluid amounts to finding a compromise among efficiency, safety, and ecological criteria of a binary power plant. For investigations of working fluid selection, it is proposed to plot multiaxis diagrams of relative parameters and characteristics of binary power plants. Examples of plotting and analyzing such diagrams for choosing the working fluid are given, with the efficiency of geothermal brine utilization taken as the main priority.

  20. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    Full Text Available In order to improve the optimization accuracy and convergence rate of trajectory optimization for the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover between midcourse and terminal guidance, and the optimization model was then built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  1. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Hundecha, Y.; Lawrence, D.

    2015-01-01

    Information on extreme precipitation for future climate is needed to assess the changes in the frequency and intensity of flooding. The primary source of information in climate change impact studies is climate model projections. However, due to the coarse resolution and biases of these models......), three are bias correction (BC) methods, and one is a perfect prognosis method. The eight methods are used to downscale precipitation output from 15 regional climate models (RCMs) from the ENSEMBLES project for 11 catchments in Europe. The overall results point to an increase in extreme precipitation...... that at least 30% and up to approximately half of the total variance is derived from the SDMs. This study illustrates the large variability in the expected changes in extreme precipitation and highlights the need for considering an ensemble of both SDMs and climate models. Recommendations are provided...

  2. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
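
    The linear algebraic machinery alluded to above is commonly realized as a truncated singular value decomposition of the orbit response matrix; the sketch below (Python/NumPy, with assumed array names, not the program described in the report) computes least-squares corrector kicks and lets the truncation level trade correction performance against corrector strength, i.e. effectiveness versus economy.

        import numpy as np

        def corrector_settings(response, orbit_error, keep=None):
            # response:    (n_bpms x n_correctors) orbit shift per unit corrector kick
            # orbit_error: (n_bpms,) measured closed-orbit distortion to cancel
            # keep:        number of singular values retained (None = use all of them)
            u, s, vt = np.linalg.svd(response, full_matrices=False)
            if keep is not None:
                u, s, vt = u[:, :keep], s[:keep], vt[:keep]
            kicks = -vt.T @ ((u.T @ orbit_error) / s)   # minimizes ||response @ kicks + orbit_error||
            residual = orbit_error + response @ kicks
            return kicks, residual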

  3. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for mechanical problems, but has rapidly extended to many other disciplines, such as fluid dynamics and biomechanical problems. However, the novelty and improvement of optimization methods have been very limited. It is, indeed, necessary to develop new optimization methods to improve the final designs......, and at the same time, reduce the number of function evaluations. Nonlinear optimization methods, such as sequential quadratic programming and interior point solvers, have hardly been embraced by the topology optimization community. Thus, this work is focused on the introduction of this kind of second...... for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical...

  4. Primal Interior-Point Method for Large Sparse Minimax Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034

  5. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program were developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods. Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  6. Review of dynamic optimization methods in renewable natural resource management

    Science.gov (United States)

    Williams, B.K.

    1989-01-01

    In recent years, the applications of dynamic optimization procedures in natural resource management have proliferated. A systematic review of these applications is given in terms of a number of optimization methodologies and natural resource systems. The applicability of the methods to renewable natural resource systems are compared in terms of system complexity, system size, and precision of the optimal solutions. Recommendations are made concerning the appropriate methods for certain kinds of biological resource problems.

  7. Danish extreme wind atlas: Background and methods for a WAsP engineering option

    Energy Technology Data Exchange (ETDEWEB)

    Rathmann, O; Kristensen, L; Mann, J [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark); Hansen, S O [Svend Ole Hansen ApS, Copenhagen (Denmark)

    1999-03-01

    Extreme wind statistics is necessary design information when establishing wind farms and erecting bridges, buildings and other structures in the open air. Normal mean wind statistics in terms of directional and speed distribution may be estimated by wind atlas methods and are used to estimate e.g. the annual energy output of wind turbines. It is the purpose of the present work to extend the wind atlas method to also include local extreme wind statistics so that an extreme value such as the 50-year wind can be estimated at locations of interest. Together with turbulence estimates, such information is important regarding the necessary strength of wind turbines or structures to withstand high wind loads. In the 'WAsP Engineering' computer program a flow model, which includes a model for the dynamic roughness of water surfaces, is used to realise such an extended wind atlas method. Based on an extended wind atlas that also contains extreme wind statistics, this allows the program to estimate extreme winds in addition to mean winds and turbulence intensities at specified positions and heights. (au) EFP-97. 15 refs.

  8. Optimization and control methods in industrial engineering and construction

    CERN Document Server

    Wang, Xiangyu

    2014-01-01

    This book presents recent advances in optimization and control methods with applications to industrial engineering and construction management. It consists of 15 chapters authored by recognized experts in a variety of fields including control and operation research, industrial engineering, and project management. Topics include numerical methods in unconstrained optimization, robust optimal control problems, set splitting problems, optimum confidence interval analysis, a monitoring networks optimization survey, distributed fault detection, nonferrous industrial optimization approaches, neural networks in traffic flows, economic scheduling of CCHP systems, a project scheduling optimization survey, lean and agile construction project management, practical construction projects in Hong Kong, dynamic project management, production control in PC4P, and target contracts optimization.   The book offers a valuable reference work for scientists, engineers, researchers and practitioners in industrial engineering and c...

  9. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. To speed up the convergence rate in the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM
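
    The thesis folds output constraints into the objective so that no separate constraint Jacobian is needed; as a loose stand-in for that idea, the sketch below (Python/SciPy) uses a plain logarithmic barrier on a placeholder reservoir objective and a placeholder water-production constraint, both invented for illustration and not taken from the thesis.

        import numpy as np
        from scipy.optimize import minimize

        def npv(controls):
            # Placeholder for the adjoint-differentiated reservoir objective (e.g. net present value).
            return np.sum(np.sqrt(np.abs(controls)))

        def water_production(controls):
            # Placeholder output quantity that must stay below W_MAX.
            return np.sum(controls ** 2)

        W_MAX = 10.0

        def barrier_objective(controls, mu):
            # Maximize NPV while keeping the output constraint feasible via a log barrier.
            slack = W_MAX - water_production(controls)
            if slack <= 0:
                return np.inf                            # reject infeasible points
            return -npv(controls) - mu * np.log(slack)

        x = np.full(8, 0.5)
        for mu in [1.0, 0.1, 0.01]:                      # shrink the barrier weight gradually
            x = minimize(barrier_objective, x, args=(mu,), method="Nelder-Mead").x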

  10. Toward solving the sign problem with path optimization method

    Science.gov (United States)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2017-12-01

    We propose a new approach to circumvent the sign problem in which the integration path is optimized to control the severity of the problem. We give a trial function specifying the integration path in the complex plane and tune it to optimize a cost function that represents the seriousness of the sign problem. We call this the path optimization method. In this method, we do not need to solve the gradient flow required in the Lefschetz-thimble method; instead, the construction of the integration-path contour becomes an optimization problem to which several efficient methods can be applied. In a simple model with a serious sign problem, the path optimization method is demonstrated to work well; the residual sign problem is resolved and precise results can be obtained even in the region where the global sign problem is serious.

  11. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    Energy Technology Data Exchange (ETDEWEB)

    Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz [Institute of Mathematics, Faculty of Mechanical Engineering, Brno University of Technology, Technicka 2, 616 69 Brno (Czech Republic)

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  12. Probabilistic methods for maintenance program optimization

    International Nuclear Information System (INIS)

    Liming, J.K.; Smith, M.J.; Gekler, W.C.

    1989-01-01

    In today's regulatory and economic environments, it is more important than ever that managers, engineers, and plant staff join together in developing and implementing effective management plans for safety and economic risk. This need applies both to power generating stations and to other process facilities. One of the most critical parts of these management plans is the development and continuous enhancement of a maintenance program that optimizes plant or facility safety and profitability. The ultimate objective is to maximize the potential for station or facility success, usually measured in terms of projected financial profitability, while meeting or exceeding meaningful and reasonable safety goals, usually measured in terms of projected damage or consequence frequencies. This paper describes the use of the latest concepts in developing and evaluating maintenance programs to achieve maintenance program optimization (MPO). These concepts are based on significant field experience gained through the integration and application of fundamentals developed for industry and Electric Power Research Institute (EPRI)-sponsored projects on preventive maintenance (PM) program development and reliability-centered maintenance (RCM)

  13. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    Energy Technology Data Exchange (ETDEWEB)

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
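
    As a simplified illustration of the IFORM construction itself (Python/SciPy; independent lognormal marginals are assumed purely for brevity, whereas the paper works with conditional distributions and a PCA rotation), a contour is obtained by mapping a circle of radius beta in standard-normal space back to significant-wave-height and energy-period values.

        import numpy as np
        from scipy import stats

        def iform_contour(hs_data, te_data, return_period_yr=50, sea_state_hr=1.0, n_pts=120):
            # Reliability index corresponding to the chosen recurrence interval.
            n_states = return_period_yr * 365.25 * 24 / sea_state_hr
            beta = stats.norm.ppf(1.0 - 1.0 / n_states)
            theta = np.linspace(0, 2 * np.pi, n_pts)
            u1, u2 = beta * np.cos(theta), beta * np.sin(theta)   # circle in standard-normal space
            # Simple marginal fits; a real analysis would use conditional models and/or PCA.
            hs_params = stats.lognorm.fit(hs_data, floc=0)
            te_params = stats.lognorm.fit(te_data, floc=0)
            hs = stats.lognorm.ppf(stats.norm.cdf(u1), *hs_params)
            te = stats.lognorm.ppf(stats.norm.cdf(u2), *te_params)
            return hs, te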

  14. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  15. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available ABSTRACT Evaluating the quality of scions is extremely important, and it can be done through characteristics of shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length and root length of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The determination of the optimum plot size was done by applying the maximum curvature method, the maximum curvature method of the coefficient of variation, and a newly proposed method that incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature method of the coefficient of variation.
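
    A rough sketch of how bootstrap resampling can be grafted onto the maximum curvature method is given below (Python/NumPy; the resampling scheme and the Meier-and-Lessman-style curvature formula are illustrative assumptions rather than the authors' exact procedure): estimate the coefficient of variation among plots of each candidate size from bootstrap samples, fit CV = a * size^(-b), and take the point of maximum curvature as the optimal plot size.

        import numpy as np

        def cv_by_plot_size(values, max_size, n_boot=500, seed=0):
            # Bootstrap estimate of the CV among plots built from `size` basic units each.
            rng = np.random.default_rng(seed)
            cvs = []
            for size in range(1, max_size + 1):
                boot = []
                for _ in range(n_boot):
                    sample = rng.choice(values, size=(len(values) // size, size), replace=True)
                    plot_means = sample.mean(axis=1)
                    boot.append(100 * plot_means.std(ddof=1) / plot_means.mean())
                cvs.append(np.mean(boot))
            return np.array(cvs)

        def max_curvature_plot_size(cvs):
            # Fit CV = a * size**(-b) and return the algebraic maximum-curvature point.
            sizes = np.arange(1, len(cvs) + 1)
            slope, intercept = np.polyfit(np.log(sizes), np.log(cvs), 1)
            a, b = np.exp(intercept), -slope
            x_mc = (a**2 * b**2 * (2 * b + 1) / (b + 2)) ** (1.0 / (b + 2))
            return int(np.ceil(x_mc))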

  16. A short numerical study on the optimization methods influence on topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Sigmund, Ole; Stolpe, Mathias

    2017-01-01

    Structural topology optimization problems are commonly defined using continuous design variables combined with material interpolation schemes. One of the challenges for density based topology optimization observed in the review article (Sigmund and Maute Struct Multidiscip Optim 48(6):1031–1055...... 2013) is the slow convergence that is often encountered in practice, when an almost solid-and-void design is found. The purpose of this forum article is to present some preliminary observations on how designs evolves during the optimization process for different choices of optimization methods...

  17. Present-day Problems and Methods of Optimization in Mechatronics

    Directory of Open Access Journals (Sweden)

    Tarnowski Wojciech

    2017-06-01

    Full Text Available It is justified that design is an inverse problem and that optimization is a paradigm. Classes of design problems are proposed and typical obstacles are recognized. Peculiarities of mechatronic design are specified as evidence of the particular importance of optimization in mechatronic design. Two main obstacles to optimization are discussed: the complexity of mathematical models and the uncertainty of the value system in a concrete case. Then a set of non-standard approaches and methods is presented and discussed, illustrated by examples: a fuzzy description, constraint-based iterative optimization, the AHP ranking method and a few MADM functions in Matlab.

  18. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy optimal control have been proved to be substantial. This, however, requires an intelligent control that drives the refrigeration system towards the energy optimal state. This paper proposes an approach for a control which drives th...... the condenser pressure towards an optimal state. The objective of this is to present a feasible method that can be used for energy optimizing control. A simulation model of a simple refrigeration system will be used as a basis for testing the control method....

  19. Climate change effects on extreme flows of water supply area in Istanbul: utility of regional climate models and downscaling method.

    Science.gov (United States)

    Kara, Fatih; Yucel, Ismail

    2015-09-01

    This study investigates the impact of climate change on mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Output from 15 regional climate models of the EU-ENSEMBLES project and a downscaling method based on local implications from geophysical variables were used for the comparative analyses. An automated calibration algorithm is used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was implemented to simulate daily flows using precipitation and temperature data from the climate models, with and without the downscaling method, for the reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from the daily flows, and their changes throughout the four seasons and the year were evaluated by comparing their values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling reduces this underestimation, particularly for extreme events. Depending on the precipitation input from climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows through all seasons. However, this underestimation is markedly improved for all seasons, especially for spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference to the future period are increases for winter and spring and decreases for fall and summer. These changes were more pronounced with downscaled inputs. With respect to the current period, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, effective storage and water use should be sustained.

  20. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2015-01-01

    operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open

  1. Optimizing Usability Studies by Complementary Evaluation Methods

    NARCIS (Netherlands)

    Schmettow, Martin; Bach, Cedric; Scapin, Dominique

    2014-01-01

    This paper examines combinations of complementary evaluation methods as a strategy for efficient usability problem discovery. A data set from an earlier study is re-analyzed, involving three evaluation methods applied to two virtual environment applications. Results of a mixed-effects logistic

  2. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  3. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.

  4. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity

    NARCIS (Netherlands)

    Pellikaan, P.; van der Krogt, Marjolein; Carbone, Vincenzo; Fluit, René; Vigneron, L.M.; van Deun, J.; Verdonschot, Nicolaas Jacobus Joseph; Koopman, Hubertus F.J.M.

    2014-01-01

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based

  5. 16 CFR 1500.44 - Method for determining extremely flammable and flammable solids.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Method for determining extremely flammable and flammable solids. 1500.44 Section 1500.44 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS HAZARDOUS SUBSTANCES AND ARTICLES; ADMINISTRATION AND...

  6. A new simulation method for turbines in wake - Applied to extreme response during operation

    DEFF Research Database (Denmark)

    Thomsen, K.; Aagaard Madsen, H.

    2005-01-01

    The work focuses on prediction of load response for wind turbines operating in wind farms using a newly developed aeroelastic simulation method. The traditionally used concept is to adjust the free-flow turbulence intensity to account for increased loads in wind farms - a methodology that might......, the resulting extremes might be erroneous. For blade loads the traditionally used simplified approach works better than for integrated rotor loads, where the instantaneous load gradient across the rotor disc causes the extreme loads. In the article the new wake simulation approach is illustrated...

  7. A Method for Solving Combinatoral Optimization Problems

    National Research Council Canada - National Science Library

    Ruffa, Anthony A

    2008-01-01

    .... The method discloses that when the boundaries create zones with boundary vertices confined to the adjacent zones, the sets of candidate HPs are found by advancing one zone at a time, considering...

  8. An efficient multilevel optimization method for engineering design

    Science.gov (United States)

    Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.

    1988-01-01

    An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.

  9. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to optimization models of energy development in Russia, a software package intended for the analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  10. Gadolinium burnable absorber optimization by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Drumm, C.R.; Lee, J.C.

    1987-01-01

    The optimal axial distribution of gadolinium burnable poison in a pressurized water reactor is determined to yield an improved power distribution. The optimization scheme is based on Pontryagin's maximum principle, with the objective function accounting for a target power distribution. The conjugate gradients optimization method is used to solve the resulting Euler-Lagrange equations iteratively, efficiently handling the high degree of nonlinearity of the problem
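
    The conjugate gradients solver itself is a standard tool; a minimal nonlinear (Fletcher-Reeves) variant with a backtracking line search is sketched below in Python as a generic illustration, not the reactor-specific Euler-Lagrange formulation of this record.

        import numpy as np

        def conjugate_gradient(f, grad, x0, iters=200, tol=1e-8):
            # Nonlinear conjugate gradients (Fletcher-Reeves) with Armijo backtracking.
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(iters):
                if np.linalg.norm(g) < tol:
                    break
                step = 1.0
                while f(x + step * d) > f(x) + 1e-4 * step * g.dot(d) and step > 1e-12:
                    step *= 0.5
                x_new = x + step * d
                g_new = grad(x_new)
                beta = g_new.dot(g_new) / g.dot(g)       # Fletcher-Reeves update
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        # Example: minimize a simple quadratic bowl.
        x_opt = conjugate_gradient(lambda x: x.dot(x), lambda x: 2 * x, np.array([3.0, -2.0]))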

  11. An optimization method for parameters in reactor nuclear physics

    International Nuclear Information System (INIS)

    Jachic, J.

    1982-01-01

    An optimization method for two basic problems of Reactor Physics was developed. The first is the optimization of the plutonium critical mass and the breeding ratio for fast reactors as a function of the radial enrichment distribution of the fuel, used as the control parameter. The second is the maximization of generation and plutonium burnup by optimizing the temporal power distribution. (E.G.) [pt

  12. Instrument design optimization with computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)

    2017-08-01

    Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Qweak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD) and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest-power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.

  13. Optimization of the southern electrophoretic transfer method

    International Nuclear Information System (INIS)

    Allison, M.A.; Fujimura, R.K.

    1987-01-01

    The technique of separating DNA fragments using agarose gel electrophoresis is essential in the analysis of nucleic acids. Further, after the method of transferring specific DNA fragments from those agarose gels to cellulose nitrate membranes was developed in 1975, a method was developed to transfer DNA, RNA, protein and ribonucleoprotein particles from various gels onto diazobenzyloxymethyl (DBM) paper using electrophoresis as well. This paper describes the optimum conditions for quantitative electrophoretic transfer of DNA onto nylon membranes. This method exemplifies the ability to hybridize the membrane more than once with specific RNA probes by providing sufficient retention of the DNA. Furthermore, the intrinsic properties of the nylon membrane allow for an increase in the efficiency and resolution of transfer while using somewhat harsh alkaline conditions. The use of alkaline conditions is of critical importance since we can now denature the DNA during transfer and thus only a short pre-treatment in acid is required for depurination. 9 refs., 7 figs

  14. A hybrid optimization method for biplanar transverse gradient coil design

    International Nuclear Information System (INIS)

    Qi Feng; Tang Xin; Jin Zhe; Jiang Zhongde; Shen Yifei; Meng Bin; Zu Donglin; Wang Weimin

    2007-01-01

    The optimization of transverse gradient coils is one of the fundamental problems in designing magnetic resonance imaging gradient systems. A new approach is presented in this paper to optimize the transverse gradient coils' performance. First, in the traditional spherical harmonic target field method, high order coefficients, which are commonly ignored, are used in the first stage of the optimization process to give better homogeneity. Then, some cosine terms are introduced into the series expansion of stream function. These new terms provide simulated annealing optimization with new freedoms. Comparison between the traditional method and the optimized method shows that the inhomogeneity in the region of interest can be reduced from 5.03% to 1.39%, the coil efficiency increased from 3.83 to 6.31 mT m⁻¹ A⁻¹ and the minimum distance of these discrete coils raised from 1.54 to 3.16 mm
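
    The optimization step named above is simulated annealing over stream-function expansion coefficients. A minimal, generic simulated annealing loop in Python is sketched below; the objective, step size, and cooling schedule are illustrative placeholders rather than the paper's coil cost function.

      import math
      import random

      def simulated_annealing(cost, x0, step=0.1, t0=1.0, t_min=1e-4, alpha=0.95, iters_per_t=50):
          """Generic simulated annealing over a real-valued coefficient vector."""
          x = list(x0)
          fx = cost(x)
          best, fbest = list(x), fx
          t = t0
          while t > t_min:
              for _ in range(iters_per_t):
                  # Propose a random perturbation of one coefficient.
                  cand = list(x)
                  i = random.randrange(len(cand))
                  cand[i] += random.uniform(-step, step)
                  fc = cost(cand)
                  # Always accept improvements; accept worse moves with Boltzmann probability.
                  if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                      x, fx = cand, fc
                      if fx < fbest:
                          best, fbest = list(x), fx
              t *= alpha  # geometric cooling schedule
          return best, fbest

      # Example: a quadratic surrogate standing in for an inhomogeneity measure.
      target = [0.5, -1.2, 2.0]
      cost = lambda c: sum((ci - ti) ** 2 for ci, ti in zip(c, target))
      print(simulated_annealing(cost, [0.0, 0.0, 0.0]))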

  15. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    Science.gov (United States)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development
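
    As a rough sketch of the particle swarm mechanics summarized above (each particle blends its own best-known position with the swarm's best), a minimal box-constrained PSO loop follows. The objective and hyperparameters are generic placeholders; the thesis's mixed-integer formulation and runtime selection of the solution structure are not reproduced.

      import random

      def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          """Minimal particle swarm optimization for a box-constrained objective f."""
          dim = len(bounds)
          x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
          v = [[0.0] * dim for _ in range(n_particles)]
          pbest = [xi[:] for xi in x]                   # each particle's own best position
          pbest_val = [f(xi) for xi in x]
          g = min(range(n_particles), key=lambda i: pbest_val[i])
          gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position found by the swarm

          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      # Velocity: inertia + cognitive (own memory) + social (swarm memory).
                      v[i][d] = (w * v[i][d]
                                 + c1 * r1 * (pbest[i][d] - x[i][d])
                                 + c2 * r2 * (gbest[d] - x[i][d]))
                      x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
                  val = f(x[i])
                  if val < pbest_val[i]:
                      pbest[i], pbest_val[i] = x[i][:], val
                      if val < gbest_val:
                          gbest, gbest_val = x[i][:], val
          return gbest, gbest_val

      # Example: minimize the sphere function in three dimensions.
      print(pso(lambda z: sum(c * c for c in z), [(-5.0, 5.0)] * 3))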

  16. Exact and useful optimization methods for microeconomics

    NARCIS (Netherlands)

    Balder, E.J.

    2011-01-01

    This paper points out that the treatment of utility maximization in current textbooks on microeconomic theory is deficient in at least three respects: breadth of coverage, completeness-cum-coherence of solution methods and mathematical correctness. Improvements are suggested in the form of a

  17. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid

    2014-01-01

    There has been a significant research in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research works are done to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of Grid’5000 platform are presented.

  18. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process...

  19. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  20. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis

  1. Application of the Most Likely Extreme Response Method for Wave Energy Converters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang; Lawson, Michael

    2016-07-01

    Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.

  2. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  3. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Extant research data are mainly proposed based on various intelligence algorithms. Hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms to the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared based on MCNC benchmark circuits. The timing optimization results obtained using DPSO are compared with those obtained from DC, and DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.

  4. OPTIMAL SIGNAL PROCESSING METHODS IN GPR

    Directory of Open Access Journals (Sweden)

    Saeid Karamzadeh

    2014-01-01

    Full Text Available In the past three decades, many different applications of Ground Penetrating Radar (GPR) have taken place in real life. This radar faces important challenges in civil applications and also in military applications. In this paper, the fundamentals of GPR systems will be covered and three important signal processing methods (Wavelet Transform, Matched Filter and Hilbert Huang) will be compared to each other in order to get the most accurate information about objects that are in the subsurface or behind a wall.

  5. Optimization Methods for Supply Chain Activities

    Directory of Open Access Journals (Sweden)

    Balasescu S.

    2014-12-01

    Full Text Available This paper approaches the theme of supply chain activities for medium and large companies which run many operations and need many facilities. The first goal is to analyse the influence of optimisation methods for supply chain activities on the success rate of a business. The second goal is to compare some logistic strategies applied by companies with the same profile to see which is the most effective. The final goal is to show why a strategic optimum is necessary for a company and how it can be achieved considering demand uncertainty.

  6. Application of improved AHP method to radiation protection optimization

    International Nuclear Information System (INIS)

    Wang Chuan; Zhang Jianguo; Yu Lei

    2014-01-01

    Aimed at the deficiency of traditional AHP method, a hierarchy model for optimum project selection of radiation protection was established with the improved AHP method. The result of comparison between the improved AHP method and the traditional AHP method shows that the improved AHP method can reduce personal judgment subjectivity, and its calculation process is compact and reasonable. The improved AHP method can provide scientific basis for radiation protection optimization. (authors)

  7. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Science.gov (United States)

    Shimizu, Yoshiaki

    To make agile decisions in a rational manner, the role of optimization engineering has become increasingly important under diversified customer demand. With this point of view, in this paper, we have proposed a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects to solve globally various complicated problems appearing in real world applications. It is evolved from the conventional method known as Nelder and Mead's Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. Presenting an algorithm to handle linear inequality constraints effectively, we have validated the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
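
    The conventional starting point named above is the Nelder and Mead simplex search. The sketch below runs that baseline (not the paper's evolutionary variant) on a benchmark with a curved valley and shows one simple way, an exterior penalty, to impose a linear inequality constraint in a derivative-free search; the SciPy calls and test function are illustrative choices.

      import numpy as np
      from scipy.optimize import minimize

      # Rosenbrock function: a standard benchmark with a long, curved valley.
      def rosenbrock(x):
          return float(sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

      x0 = np.array([-1.2, 1.0])
      res = minimize(rosenbrock, x0, method="Nelder-Mead",
                     options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
      print(res.x, res.fun)

      # One simple way to impose a linear inequality constraint a.x <= b in a
      # derivative-free search is an exterior penalty added to the objective
      # (the paper describes its own constraint-handling scheme; this is only a sketch).
      a, b = np.array([1.0, 1.0]), 1.0
      penalized = lambda x: rosenbrock(x) + 1e4 * max(0.0, float(a @ x) - b) ** 2
      res_c = minimize(penalized, x0, method="Nelder-Mead")
      print(res_c.x, res_c.fun)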

  8. An analytical method for optimal design of MR valve structures

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B; Lee, Y S; Han, M S

    2009-01-01

    This paper proposes an analytical methodology for the optimal design of a magnetorheological (MR) valve structure. The MR valve structure is constrained in a specific volume and the optimization problem identifies geometric dimensions of the valve structure that maximize the yield stress pressure drop of an MR valve or the yield stress damping force of an MR damper. In this paper, the single-coil and two-coil annular MR valve structures are considered. After describing the schematic configuration and operating principle of a typical MR valve and damper, a quasi-static model is derived based on the Bingham model of an MR fluid. The magnetic circuit of the valve and damper is then analyzed by applying Kirchhoff's law and the magnetic flux conservation rule. Based on quasi-static modeling and magnetic circuit analysis, the optimization problem of the MR valve and damper is built. In order to reduce the computation load, the optimization problem is simplified and a procedure to obtain the optimal solution of the simplified optimization problem is presented. The optimal solution of the simplified optimization problem of the MR valve structure constrained in a specific volume is then obtained and compared with the solution of the original optimization problem and the optimal solution obtained from the finite element method.

  9. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2015-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  10. Advanced Topology Optimization Methods for Conceptual Architectural Design

    DEFF Research Database (Denmark)

    Aage, Niels; Amir, Oded; Clausen, Anders

    2014-01-01

    This paper presents a series of new, advanced topology optimization methods, developed specifically for conceptual architectural design of structures. The proposed computational procedures are implemented as components in the framework of a Grasshopper plugin, providing novel capacities...

  11. Distributed optimization for systems design : an augmented Lagrangian coordination method

    NARCIS (Netherlands)

    Tosserams, S.

    2008-01-01

    This thesis presents a coordination method for the distributed design optimization of engineering systems. The design of advanced engineering systems such as aircrafts, automated distribution centers, and microelectromechanical systems (MEMS) involves multiple components that together realize the

  12. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. The SCOOP-I is designed to be an efficient, reliable, useful and also flexible system for general applications. The system enables one to find global optimization point for a wide class of problems by selecting the most appropriate optimization method built in it. (author)

  13. [Optimized application of nested PCR method for detection of malaria].

    Science.gov (United States)

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods A PCR premixing solution, internal primers for further amplification, and newly designed primers aimed at the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and specific primers for P. ovale on the basis of routine nested PCR. Then the specificity and the sensitivity of the optimized method were analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity could reach the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the results indicated that the PCR products of the two methods had no significant difference, but the non-specific amplification was reduced obviously and the detection rates of P. ovale subspecies improved, and the total specificity also increased with the optimized method. The actual detection results of 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, and those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in the sensitivity (P > 0.05), but there was a statistically significant difference in the specificity (P < 0.05). Conclusion The optimized nested PCR can improve the specificity without reducing the sensitivity on the basis of the routine nested PCR; it also can save cost and increase the efficiency of malaria detection with fewer experimental steps.

  14. Optimization of the Runner for Extremely Low Head Bidirectional Tidal Bulb Turbine

    Directory of Open Access Journals (Sweden)

    Yongyao Luo

    2017-06-01

    Full Text Available This paper presents a multi-objective optimization procedure for bidirectional bulb turbine runners which is completed using ANSYS Workbench. The optimization procedure is able to check many more geometries with less manual work. In the procedure, the initial blade shape is parameterized; the inlet and outlet angles (β1, β2), as well as the starting and ending wrap angles (θ1, θ2) for the five sections of the blade profile, are selected as design variables, and the optimization target is set to obtain the maximum of the overall efficiency for the ebb and flood turbine modes. For the flow analysis, the ANSYS CFX code, with a SST (Shear Stress Transport) k-ω turbulence model, has been used to evaluate the efficiency of the turbine. An efficient response surface model relating the design parameters and the objective functions is obtained. The optimization strategy was used to optimize a model bulb turbine runner. Model tests were carried out to validate the final designs and the design procedure. For the four-bladed turbine, the efficiency improvement is 5.5% in the ebb operation direction, and 2.9% in the flood operation direction, as well as 4.3% and 4.5% for the three-bladed turbine. Numerical simulations were then performed to analyze the pressure pulsation in the pressure and suction sides of the blade for the prototype turbine with optimal four-bladed and three-bladed runners. The results show that the runner rotational frequency (fn) is the dominant frequency of the pressure pulsations in the blades for ebb and flood turbine modes, and the gravitational effect, rather than rotor-stator interaction (RSI), plays an important role in a low head horizontal axial turbine. The amplitudes of the pressure pulsations on the blade side facing the guide vanes vary little with the water head. However, the amplitudes of the pressure pulsations on the blade side facing the diffusion tube linearly increase with the water head. These results could provide

  15. Optimal PMU Placement with Uncertainty Using Pareto Method

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2012-01-01

    Full Text Available This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation has first been turned into an optimization exercise in which the objective function is selected to be the number of unobservable buses, which is determined based on Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is hence formulated. To achieve this, a DE algorithm based on the Pareto optimum method has been proposed here. The suggested strategy is applied on the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
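
    The observability count that serves as the objective above can be illustrated with a small sketch: for a candidate placement, build a measurement matrix, take its singular value decomposition, and treat the rank deficiency as the number of unobservable state directions. The toy network, the way PMU rows are formed, and the tolerance below are invented for illustration and are not the paper's IEEE 30-bus formulation.

      import numpy as np

      def unobservable_count(H, tol=1e-8):
          """Rank deficiency of the measurement matrix H, i.e. the number of state
          directions that the candidate PMU placement cannot observe."""
          s = np.linalg.svd(H, compute_uv=False)
          rank = int(np.sum(s > tol * s.max()))
          return H.shape[1] - rank

      # Toy example: 4 buses, PMUs at buses 0 and 2 of a small chain network; each PMU row
      # "sees" its own bus and its immediate neighbours (purely illustrative structure).
      H = np.array([
          [1.0, 1.0, 0.0, 0.0],   # PMU at bus 0 observes buses 0 and 1
          [0.0, 1.0, 1.0, 1.0],   # PMU at bus 2 observes buses 1, 2 and 3
      ])
      print(unobservable_count(H))  # two rows span at most 2 of the 4 state directions -> 2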

  16. A loading pattern optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1997-01-01

    Nuclear fuel reload of a PWR core leads to the search for an optimal distribution of nuclear fuel assemblies, namely a loading pattern. This large discrete optimization problem is here expressed as a cost function minimization. To deal with this problem, an approach based on gradient information is used to direct the search in the discrete space of patterns. A method using an adjoint state formulation is then developed, and final results of complete pattern search tests by this method are presented. (author)

  17. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected from the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is firstly decomposed by wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first and second order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added into the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach by a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.
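
    A bare-bones version of the wavelet-threshold denoising step that the improved FOA is used to tune, written with the PyWavelets package: decompose the signal, soft-threshold the detail coefficients, and reconstruct. The wavelet, decomposition level, and the "universal" threshold are common defaults standing in for the thresholds the fruit-fly search would find; the FOA itself is omitted.

      import numpy as np
      import pywt  # PyWavelets

      def wavelet_denoise(signal, wavelet="db4", level=4):
          """Soft-threshold wavelet denoising; the universal threshold below is a
          placeholder for the thresholds an optimization search would select."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # Noise level estimated from the finest detail coefficients (median absolute deviation).
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
          denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[: len(signal)]

      # Example: a sine tone corrupted with Gaussian white noise, as in the motor-sound test.
      t = np.linspace(0.0, 1.0, 2048)
      clean = np.sin(2 * np.pi * 50 * t)
      noisy = clean + 0.3 * np.random.randn(t.size)
      print(np.mean((wavelet_denoise(noisy) - clean) ** 2))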

  18. Comparison of optimal design methods in inverse problems

    International Nuclear Information System (INIS)

    Banks, H T; Holm, K; Kappel, F

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst–Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667–77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136–68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979–90)
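
    As a toy illustration of design criteria built on the Fisher information matrix discussed above: for a model y(t) = f(t, theta) with sensitivity matrix S (derivatives of f with respect to theta at the sampling times), the information matrix is approximately S^T S / sigma^2; D-optimality maximizes its determinant and E-optimality its smallest eigenvalue. The exponential-decay model, parameter values, and candidate designs below are invented for illustration and are unrelated to the three examples used in the paper.

      import numpy as np

      # Model y(t) = a * exp(-b t); sensitivities dy/da and dy/db at nominal parameter values.
      a, b, sigma = 2.0, 1.5, 0.1

      def fim(times):
          """Fisher information matrix S^T S / sigma^2 for a set of sampling times."""
          t = np.asarray(times, dtype=float)
          S = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
          return S.T @ S / sigma**2

      def d_criterion(times):   # D-optimality: maximize det(FIM)
          return np.linalg.det(fim(times))

      def e_criterion(times):   # E-optimality: maximize the smallest eigenvalue of FIM
          return np.linalg.eigvalsh(fim(times)).min()

      designs = {
          "uniform":     np.linspace(0.1, 3.0, 6),
          "early-heavy": np.array([0.1, 0.2, 0.3, 0.5, 1.0, 3.0]),
      }
      for name, ts in designs.items():
          print(name, d_criterion(ts), e_criterion(ts))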

  19. Comparison of optimal design methods in inverse problems

    Science.gov (United States)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).

  20. Multi-objective optimization design method of radiation shielding

    International Nuclear Information System (INIS)

    Yang Shouhai; Wang Weijin; Lu Daogang; Chen Yixue

    2012-01-01

    Because shielding design goals are diverse and many factors in the process are uncertain, it is necessary to develop an intelligent shielding optimization design method by which shielding scheme selection is achieved automatically and the uncertainty introduced by human judgment is reduced. To achieve an economically feasible, automated radiation shielding design, a multi-objective genetic algorithm shielding optimization code, which combines the genetic algorithm with the discrete-ordinates method, was developed to minimize the cost, size, weight, and so on. This work has practical significance for obtaining optimized shielding designs. (authors)

  1. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-01-01

    Nuclear fuel management can be seen as a large discrete optimization problem under constraints, and optimization methods on such problems are numerically costly. After an introduction of the main aspects of nuclear fuel management, this paper presents a new way to treat the combinatorial problem by using information included in the gradient of optimized cost function. A new search process idea is to choose, by direct observation of the gradient, the more interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method. Finally, connections with classical simulated annealing and genetic algorithms are described as an attempt to improve search processes. 16 refs., 2 figs

  2. Modifying nodal pricing method considering market participants optimality and reliability

    Directory of Open Access Journals (Sweden)

    A. R. Soofiabadi

    2015-06-01

    Full Text Available This paper develops a method for nodal pricing and a market clearing mechanism that consider the reliability of the system. The effects of component reliability on electricity price, market participants' profit, and system social welfare are considered. This paper considers reliability both for the evaluation of market participants' optimality and for fair pricing and the market clearing mechanism. To achieve fair pricing, the nodal price has been obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, comprehensive criteria have been introduced for the optimality evaluation of market participants. The social welfare of the system and the system efficiency are increased under the proposed modified nodal pricing method.

  3. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
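
    To make the backward dynamic programming idea concrete, here is a tiny deterministic, arbitrage-only toy: a battery with a discretized state of charge chooses charge/discharge actions against a known price sequence by backward induction. Prices, efficiencies, and grid sizes are invented; the co-optimization with frequency regulation, the stochastic prices, and the low-rank approximation from the thesis are not reproduced.

      import numpy as np

      prices = [20.0, 15.0, 40.0, 35.0, 10.0, 50.0]   # $/MWh over six periods (illustrative)
      soc_levels = np.arange(0.0, 1.01, 0.25)          # discretized state of charge (fraction)
      capacity, step, eff = 10.0, 0.25, 0.9            # MWh, SoC change per action, one-way efficiency

      T, n = len(prices), len(soc_levels)
      V = np.zeros((T + 1, n))                         # terminal value of any state of charge is zero
      policy = np.zeros((T, n), dtype=int)             # -1 = discharge, 0 = idle, +1 = charge

      for t in range(T - 1, -1, -1):                   # backward induction over time
          for i in range(n):
              best_val, best_a = -np.inf, 0
              for a in (-1, 0, 1):
                  j = i + a
                  if j < 0 or j >= n:
                      continue                          # action would leave the SoC grid
                  energy = step * capacity
                  if a == 1:
                      reward = -prices[t] * energy / eff    # buy energy, pay charging losses
                  elif a == -1:
                      reward = prices[t] * energy * eff     # sell energy, pay discharging losses
                  else:
                      reward = 0.0
                  val = reward + V[t + 1, j]
                  if val > best_val:
                      best_val, best_a = val, a
              V[t, i], policy[t, i] = best_val, best_a

      print("optimal value starting empty:", V[0, 0])
      print("first-period action for each SoC level:", policy[0])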

  4. Climate change impact assessment on urban rainfall extremes and urban drainage: Methods and shortcomings

    DEFF Research Database (Denmark)

    Willems, P.; Arnbjerg-Nielsen, Karsten; Olsson, J.

    2012-01-01

    Cities are becoming increasingly vulnerable to flooding because of rapid urbanization, installation of complex infrastructure, and changes in the precipitation patterns caused by anthropogenic climate change. The present paper provides a critical review of the current state-of-the-art methods for assessing the impacts of climate change on precipitation at the urban catchment scale. Downscaling of results from global circulation models or regional climate models to urban catchment scales is needed because these models are not able to describe accurately the rainfall process at suitably high temporal and spatial resolution for urban drainage studies. The downscaled rainfall results are however highly uncertain, depending on the models and downscaling methods considered. This uncertainty becomes more challenging for rainfall extremes since the properties of these extremes do not automatically reflect those...

  5. On Equivalence between Optimality Criteria and Projected Gradient Methods with Application to Topology Optimization Problem

    OpenAIRE

    Ananiev, Sergey

    2006-01-01

    The paper demonstrates the equivalence between the optimality criteria (OC) method, initially proposed by Bendsoe & Kikuchi for topology optimization problem, and the projected gradient method. The equivalence is shown using Hestenes definition of Lagrange multipliers. Based on this development, an alternative formulation of the Karush-Kuhn-Tucker (KKT) condition is suggested. Such reformulation has some advantages, which will be also discussed in the paper. For verification purposes the modi...

  6. Thickness optimization of fiber reinforced laminated composites using the discrete material optimization method

    DEFF Research Database (Denmark)

    Sørensen, Søren Nørgaard; Lund, Erik

    2012-01-01

    This work concerns a novel large-scale multi-material topology optimization method for simultaneous determination of the optimum variable integer thickness and fiber orientation throughout laminate structures with fixed outer geometries while adhering to certain manufacturing constraints. The conceptual combinatorial/integer problem is relaxed to a continuous problem and solved on the basis of the so-called Discrete Material Optimization method, explicitly including the manufacturing constraints as linear constraints.

  7. EMA beamline at SIRIUS: extreme condition X-ray methods of analysis

    International Nuclear Information System (INIS)

    Souza Neto, Narcizo

    2016-01-01

    Full text: The EMA beamline (Extreme condition X-ray Methods of Analysis) is one of the hard x-ray undulator beamlines within the first phase of the new synchrotron source in Brazil (Sirius project). This beamline is thought to make a difference where a high brilliance (high flux of up to 2 x 10¹⁴ photons/sec with beam size down to 0.5 x 0.5 μm²) is essential, which is the case for extreme pressures that require a small focus and time-resolved experiments that require high photon flux. With that in mind we propose the beamline to have two experimental hutches to cover most of the extreme condition techniques today employed at synchrotron laboratories worldwide. These two stations are thought to provide the general infrastructure for magnet and laser experiments, which may evolve as new scientific problems appear. In addition to the hutches, support laboratories will be strongly linked and supportive to the experiments at the beamline, covering high pressure instrumentation using diamond anvil cells and pump-and-probe requirements for ultrafast and high power lasers. Along these lines, we will describe the following techniques covered at this beamline: magnetic spectroscopy (XMCD) and scattering (XRMS) under high pressure and very low temperature in order to fully probe both ferromagnetic and antiferromagnetic materials and their dependence with pressure; extreme pressure and temperature XRD and XAS experiments using very small diamond culet anvils and high power lasers. (author)

  8. EMA beamline at SIRIUS: extreme condition X-ray methods of analysis

    Energy Technology Data Exchange (ETDEWEB)

    Souza Neto, Narcizo, E-mail: narcizo.souza@lnls.br [Centro Nacional de Pesquisa em Energia e Materiais (CNPEM), Campinas, SP (Brazil)

    2016-07-01

    Full text: The EMA beamline (Extreme condition X-ray Methods of Analysis) is one of the hard x-ray undulator beamlines within the first phase of the new synchrotron source in Brazil (Sirius project). This beamline is thought to make a difference where a high brilliance (high flux of up to 2 x 10¹⁴ photons/sec with beam size down to 0.5 x 0.5 μm²) is essential, which is the case for extreme pressures that require a small focus and time-resolved experiments that require high photon flux. With that in mind we propose the beamline to have two experimental hutches to cover most of the extreme condition techniques today employed at synchrotron laboratories worldwide. These two stations are thought to provide the general infrastructure for magnet and laser experiments, which may evolve as new scientific problems appear. In addition to the hutches, support laboratories will be strongly linked and supportive to the experiments at the beamline, covering high pressure instrumentation using diamond anvil cells and pump-and-probe requirements for ultrafast and high power lasers. Along these lines, we will describe the following techniques covered at this beamline: magnetic spectroscopy (XMCD) and scattering (XRMS) under high pressure and very low temperature in order to fully probe both ferromagnetic and antiferromagnetic materials and their dependence with pressure; extreme pressure and temperature XRD and XAS experiments using very small diamond culet anvils and high power lasers. (author)

  9. An historical survey of computational methods in optimal control.

    Science.gov (United States)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  10. Method for Determining Optimal Residential Energy Efficiency Retrofit Packages

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.

    2011-04-01

    Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
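
    A condensed sketch of the ranking logic described above: each retrofit package is scored by its equivalent annual cost (annualized incremental capital plus annual energy cost), with savings measured against the minimum-upgrade reference. The packages, costs, and energy figures below are fabricated placeholders; in the actual method the energy numbers come from building energy simulations.

      def annualize(capital, rate=0.05, years=20):
          """Equivalent annual cost of an up-front investment (capital recovery factor)."""
          crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
          return capital * crf

      # (incremental capital in $, annual site energy use in kWh), relative to the reference home
      reference_energy = 25_000.0
      energy_price = 0.12  # $/kWh
      packages = {
          "reference (minimum upgrade)":    (0.0, 25_000.0),
          "air sealing + attic insulation": (3_500.0, 21_000.0),
          "+ heat pump":                    (9_500.0, 16_000.0),
          "+ windows + PV":                 (22_000.0, 9_000.0),
      }

      for name, (capital, energy) in packages.items():
          eac = annualize(capital) + energy * energy_price        # total $/yr
          savings = (reference_energy - energy) / reference_energy
          print(f"{name:34s} EAC = ${eac:8.0f}/yr   energy savings = {savings:5.1%}")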

  11. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  12. Optimal power flow: a bibliographic survey I. Formulations and deterministic methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  13. On some other preferred method for optimizing the welded joint

    Directory of Open Access Journals (Sweden)

    Pejović Branko B.

    2016-01-01

    Full Text Available The paper shows an example of size optimization with respect to welding costs in a characteristic loaded welded joint. In the first stage, the variables and constant parameters are defined, and the mathematical form of the optimization function is determined. The next stage of the procedure defines and imposes the most important constraint functions that limit the design of the structure, which the technologist and the designer should take into account. Subsequently, a mathematical optimization model of the problem is derived and efficiently solved by the proposed method of geometric programming. Further, a mathematically based, thorough optimization algorithm of the proposed method is developed, with a main set of equations defining the problem that are valid under certain conditions. Thus, the primary optimization task is reduced, through a corresponding function, to the dual task, which is easier to solve than the primary task of optimizing the objective function. The main reason for this is the resulting set of linear equations. In particular, the correlation between the optimal primal vector that minimizes the objective function and the dual vector that maximizes the dual function is used. The method is illustrated on a practical computational example with different numbers of constraint functions. It is shown that, for a case of lower complexity, a solution is reached through appropriate maximization of the dual function using mathematical analysis and differential calculus.

  14. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
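
    The GCV function mentioned above can be sketched for ordinary Tikhonov regularization: with the SVD of the linear operator, the regularized solution, its residual, and the effective number of degrees of freedom are cheap to evaluate for many regularization parameters, and the GCV minimizer is taken as the estimate. The toy ill-conditioned problem below is a stand-in for the pixelated-source inversion, not the lensing code itself.

      import numpy as np

      def gcv_curve(A, b, lambdas):
          """Generalized cross validation scores for Tikhonov-regularized least squares."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          Utb = U.T @ b
          n = len(b)
          scores = []
          for lam in lambdas:
              filt = s**2 / (s**2 + lam)            # Tikhonov filter factors
              x = Vt.T @ (filt / s * Utb)           # regularized solution
              residual = np.linalg.norm(A @ x - b) ** 2
              dof = filt.sum()                      # effective number of degrees of freedom
              scores.append(n * residual / (n - dof) ** 2)
          return np.array(scores)

      # Small ill-conditioned toy problem standing in for the pixelated source inversion.
      rng = np.random.default_rng(2)
      A = rng.normal(size=(50, 20)) @ np.diag(np.logspace(0, -4, 20))
      x_true = rng.normal(size=20)
      b = A @ x_true + 0.01 * rng.normal(size=50)
      lams = np.logspace(-8, 1, 30)
      print("GCV-selected lambda:", lams[np.argmin(gcv_curve(A, b, lams))])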

  15. Review: Optimization methods for groundwater modeling and management

    Science.gov (United States)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  16. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to the variations in geometry of the part, material properties or due to the lack of knowledge about the phenomena being modeled itself. Deterministic design optimization does not take uncertainty into account and worst case scenario assumptions lead to vastly over conservative design. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves double-loop procedure for optimization and iterative probabilistic assessment. This results in high computational demand. The high computational demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing Sequential Optimization and reliability assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment. The deterministic optimization and reliability assessment is decoupled in each cycle. This leads to quick improvement of design from one cycle to other and increase in computational efficiency. This paper demonstrates the effectiveness of Sequential Optimization and Reliability Assessment (SORA) method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive Finite Element simulations
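
    A schematic of the decoupled cycle structure that SORA uses, reduced to its simplest form: each cycle runs a deterministic optimization against a shifted constraint and then a reliability assessment (here a plain Monte Carlo check) that updates the shift. The toy limit state, distributions, and SciPy solver below are illustrative; the actual SORA formulation shifts constraints using most probable points and was applied with surrogate models of the forming simulations.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      target_reliability = 0.99

      # Toy limit state: g(d, X) = X1 + X2 - d1*d2 <= 0 must hold with 99% probability,
      # where X1 and X2 are zero-mean normal noise terms (illustrative only).
      def sample_noise(n):
          return rng.normal(0.0, 0.5, size=(n, 2)).sum(axis=1)

      margin = 0.0                      # deterministic margin, updated after each cycle
      d = np.array([1.0, 1.0])
      for cycle in range(5):
          # 1) Deterministic optimization: minimize cost subject to d1*d2 >= margin.
          res = minimize(lambda z: z[0] + z[1], d, bounds=[(0.1, 10.0), (0.1, 10.0)],
                         constraints=[{"type": "ineq", "fun": lambda z: z[0] * z[1] - margin}])
          d = res.x
          # 2) Reliability assessment: Monte Carlo check of the probabilistic constraint.
          noise = sample_noise(100_000)
          reliability = float(np.mean(noise <= d[0] * d[1]))
          print(f"cycle {cycle}: d = {d.round(3)}, reliability = {reliability:.4f}")
          if reliability >= target_reliability:
              break
          # 3) Shift the deterministic constraint to the empirical 99% quantile of the noise.
          margin = float(np.quantile(noise, target_reliability))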

  17. ROTAX: a nonlinear optimization program by axes rotation method

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1977-09-01

    A nonlinear optimization program employing the axes rotation method has been developed for solving nonlinear problems subject to nonlinear inequality constraints and its stability and convergence efficiency were examined. The axes rotation method is a direct search of the optimum point by rotating the orthogonal coordinate system in a direction giving the minimum objective. The searching direction is rotated freely in multi-dimensional space, so the method is effective for the problems represented with the contours having deep curved valleys. In application of the axes rotation method to the optimization problems subject to nonlinear inequality constraints, an improved version of R.R. Allran and S.E.J. Johnsen's method is used, which deals with a new objective function composed of the original objective and a penalty term to consider the inequality constraints. The program is incorporated in optimization code system SCOOP. (auth.)

  18. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building... needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm...

  19. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is described as the formulation of the minimum working number of impulses and minimum control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These result in difficulties in finding the global optimum solution by directly utilizing other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for the optimal setting of impulsive control by considering the time intervals between two neighboring lateral impulses as design variables, which simplifies the optimization process. A modification of the basic PSO algorithm is developed to improve the convergence speed of this optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time consideration of online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamic model is applied to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm has a good performance in obtaining the optimal control efficiently and accurately and provides a reference approach to handling such impulsive-correction problems.
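
    The modification spelled out above is the linearly decreasing inertia weight; a one-line form of that schedule (with commonly used but here illustrative limits) is sketched below.

      # Linearly decreasing inertia weight, as used in the modified PSO described above.
      # w_max, w_min, and the iteration count are commonly used but illustrative values.
      def inertia_weight(k, k_max, w_max=0.9, w_min=0.4):
          return w_max - (w_max - w_min) * k / k_max

      weights = [inertia_weight(k, 100) for k in range(101)]
      print(weights[0], weights[50], weights[100])   # 0.9, 0.65, 0.4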

  20. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational effort required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA 0012) airfoil in inviscid transonic flow at zero-degree angle of attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.

  1. Inter-comparison of statistical downscaling methods for projection of extreme precipitation in Europe

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Hundecha, Y.; Lawrence, D.

    ... impact studies. Four methods are based on change factors and four are bias correction methods. The change factor methods perturb the observations according to changes in precipitation properties estimated from the Regional Climate Models (RCMs). The bias correction methods correct the output from the RCMs. The eight methods are used to downscale precipitation output from fifteen RCMs from the ENSEMBLES project for eleven catchments in Europe. The performance of the bias correction methods depends on the catchment, but in all cases they represent an improvement compared to RCM output. The overall results point to an increase in extreme precipitation in all the catchments in winter and in most catchments in summer. For each catchment, the results tend to agree on the direction of the change but differ in the magnitude. These differences can mainly be explained by differences in the RCMs.
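
    As a pocket illustration of the distinction drawn above, the sketch below applies a simple multiplicative change factor to an observed series and, separately, a mean-bias correction to an RCM series; both are deliberately reduced to their simplest form and the data are synthetic, so this is not the ensemble of eight methods evaluated in the study.

```python
# Sketch: simplest forms of a change-factor perturbation and a bias correction
# (synthetic data; not the eight-method ensemble of the study).
import numpy as np

rng = np.random.default_rng(8)
obs         = rng.gamma(2.0, 4.0, 1000)   # observed precipitation
rcm_control = rng.gamma(2.0, 5.0, 1000)   # RCM output, present-day climate
rcm_future  = rng.gamma(2.0, 6.0, 1000)   # RCM output, future scenario

# Change factor method: perturb the observations by the simulated change.
change_factor = rcm_future.mean() / rcm_control.mean()
projected_from_obs = obs * change_factor

# Bias correction method: correct the future RCM output toward the observations.
bias_factor = obs.mean() / rcm_control.mean()
projected_from_rcm = rcm_future * bias_factor

print("change-factor mean:", projected_from_obs.mean())
print("bias-corrected mean:", projected_from_rcm.mean())
```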

  2. SOLVING ENGINEERING OPTIMIZATION PROBLEMS WITH THE SWARM INTELLIGENCE METHODS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available An important stage in the design of aerospace vehicles and aerostructures is the optimization of their main characteristics. The results of four constrained optimization problems related to the design of various technical systems, namely determining the best parameters of welded beams, a pressure vessel, a gear train, and a spring, are presented. The purpose of each task is to minimize the cost and weight of the construction. The objective functions in these practical optimization problems are nonlinear functions of many variables with complex, highly indented level surfaces, which is why a classical approach to extremum seeking is not efficient. Hence the need for optimization methods that can find a near-optimal solution in acceptable time with minimal use of computing power. Such methods include the methods of swarm intelligence: the spiral dynamics algorithm, stochastic diffusion search, and the hybrid seeker optimization algorithm. The swarm intelligence methods are designed so that a swarm of agents carries out the search for the extremum. While searching for the extremum point, the particles exchange information and take into account their own experience as well as the experience of the population leader and of their neighbors in some area. To solve the listed problems, a program complex has been designed, whose efficiency is illustrated by the solutions of four applied problems. Each of the considered applied optimization problems is solved with all three chosen methods. The obtained numerical results can be compared with the ones found by the particle swarm optimization method. The author gives recommendations on how to choose the method parameters and the penalty function value, which accounts for the inequality constraints.

  3. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    Science.gov (United States)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

    In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods have historically been preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with the design values for extreme rainfall and floods. The objective of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013, Garavaglia et al., 2010), the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than the standard flood frequency analysis. Another interesting result is that the differences between the extreme flood quantile estimates of the compared methods increase with the return period, remaining relatively moderate up to 100-year return levels. Results and discussions are illustrated throughout with the example ...

  4. Comparison of annual maximum series and partial duration series methods for modeling extreme hydrologic events

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rasmussen, Peter F.; Rosbjerg, Dan

    1997-01-01

    Two different models for analyzing extreme hydrologic events, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto distribution for modeling threshold exceedances corresponding to a generalized extreme value ... In the case of ML estimation, the PDS model provides the most efficient T-year event estimator. In the cases of MOM and PWM estimation, the PDS model is generally preferable for negative shape parameters, whereas the AMS model yields the most efficient estimator for positive shape parameters. A comparison of the considered methods reveals that in general, one should use the PDS model with MOM estimation for negative shape parameters, the PDS model with exponentially distributed exceedances if the shape parameter is close to zero, the AMS model with MOM estimation for moderately positive shape parameters, and the PDS ...
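
    As a concrete illustration of the two modelling routes compared in the record, the sketch below fits a GEV distribution to annual maxima and a generalized Pareto distribution to threshold exceedances using standard SciPy routines; the synthetic data, threshold choice, and maximum-likelihood fitting are illustrative assumptions and not the MOM/PWM estimators studied in the paper.

```python
# Sketch: AMS (GEV) vs. PDS (generalized Pareto) fits on synthetic daily data.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(1)
daily = rng.gamma(shape=2.0, scale=5.0, size=(50, 365))   # 50 synthetic "years"

# AMS route: one maximum per year, fit a GEV distribution.
annual_max = daily.max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(annual_max)

# PDS route: all exceedances above a high threshold, fit a generalized Pareto.
threshold = np.quantile(daily, 0.99)
excesses = daily[daily > threshold] - threshold
c_gpd, loc_gpd, scale_gpd = genpareto.fit(excesses, floc=0.0)

# 100-year event from the AMS/GEV fit.
print("GEV 100-year level:", genextreme.ppf(1 - 1/100, c_gev, loc_gev, scale_gev))

# PDS/GPD quantile for the same return period, given the exceedance rate.
rate = excesses.size / 50.0                     # mean exceedances per year
p = 1 - 1 / (100 * rate)                        # non-exceedance prob. per event
print("GPD 100-year level:", threshold + genpareto.ppf(p, c_gpd, loc=0.0, scale=scale_gpd))
```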

  5. A Finite Element Removal Method for 3D Topology Optimization

    Directory of Open Access Journals (Sweden)

    M. Akif Kütük

    2013-01-01

    Full Text Available Topology optimization provides great convenience to designers during the design stage in many industrial applications. With this method, designers can obtain a rough model of any part at the beginning of the design stage by defining loading and boundary conditions. At the same time, the optimization can be used for the modification of a product that is already in use. A lengthy solution time is a disadvantage of this method, which has kept it from becoming widespread. In order to eliminate this disadvantage, an element removal algorithm has been developed for topology optimization. In this study, the element removal algorithm is applied to 3-dimensional parts, and the results are compared with the ones available in the related literature. In addition, the effects of the method on solution times are investigated.

  6. An analytical optimization method for electric propulsion orbit transfer vehicles

    International Nuclear Information System (INIS)

    Oleson, S.R.

    1993-01-01

    Due to electric propulsion's inherent propellant mass savings over chemical propulsion, electric propulsion orbit transfer vehicles (EPOTVs) are a highly efficient mode of orbit transfer. When selecting an electric propulsion device (ion, MPD, or arcjet) and propellant for a particular mission, it is preferable to use quick, analytical system optimization methods instead of time intensive numerical integration methods. It is also of interest to determine each thruster's optimal operating characteristics for a specific mission. Analytical expressions are derived which determine the optimal specific impulse (Isp) for each type of electric thruster to maximize payload fraction for a desired thrusting time. These expressions take into account the variation of thruster efficiency with specific impulse. Verification of the method is made with representative electric propulsion values on a LEO-to-GEO mission. Application of the method to specific missions is discussed
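
    The analytical trade the record describes, payload fraction versus specific impulse for a fixed thrusting time, can be sketched with a simple textbook mass model in which powerplant mass grows with jet power; the numbers below (delta-v, specific power, efficiency curve) are illustrative assumptions, not values from the paper.

```python
# Sketch: payload fraction vs. specific impulse for an electric-propulsion
# transfer, using a simple textbook mass model (illustrative values only).
import numpy as np
from scipy.optimize import minimize_scalar

g0 = 9.81            # m/s^2
dv = 6000.0          # mission delta-v, m/s (assumed)
t = 180 * 86400.0    # thrusting time, s (assumed 180 days)
alpha = 25.0         # powerplant specific power, W/kg (assumed)

def efficiency(isp):
    # Assumed thruster efficiency curve rising with Isp.
    c = g0 * isp
    return 0.7 * c**2 / (c**2 + 15000.0**2)

def neg_payload_fraction(isp):
    c = g0 * isp
    prop_frac = 1.0 - np.exp(-dv / c)            # rocket equation
    power_per_kg = prop_frac * c**2 / (2.0 * efficiency(isp) * t)
    powerplant_frac = power_per_kg / alpha
    return -(1.0 - prop_frac - powerplant_frac)  # maximize payload fraction

res = minimize_scalar(neg_payload_fraction, bounds=(500.0, 10000.0), method="bounded")
print("optimal Isp [s]:", round(res.x), "payload fraction:", round(-res.fun, 3))
```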

  7. Improving Battery Reactor Core Design Using Optimization Method

    International Nuclear Information System (INIS)

    Son, Hyung M.; Suh, Kune Y.

    2011-01-01

    The Battery Omnibus Reactor Integral System (BORIS) is a small modular fast reactor being designed at Seoul National University to satisfy various energy demands, to maintain inherent safety by using liquid-metal lead coolant for natural circulation heat transport, and to improve power conversion efficiency with the Modular Optimal Balance Integral System (MOBIS), which uses supercritical carbon dioxide as the working fluid. This study is focused on developing the Neutronics Optimized Reactor Analysis (NORA) method that can quickly generate a conceptual design of a battery reactor core by means of first principle calculations, which is part of the optimization process for the reactor assembly design of BORIS.

  8. Polyhedral and semidefinite programming methods in combinatorial optimization

    CERN Document Server

    Tunçel, Levent

    2010-01-01

    Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r

  9. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  10. Enhanced Multi-Objective Energy Optimization by a Signaling Method

    OpenAIRE

    Soares, João; Borges, Nuno; Vale, Zita; Oliveira, P.B.

    2016-01-01

    In this paper three metaheuristics are used to solve a smart grid multi-objective energy management problem with conflicting objectives, maximizing profits and minimizing carbon dioxide (CO2) emissions, and the results are compared. The metaheuristics implemented are: weighted particle swarm optimization (W-PSO), multi-objective particle swarm optimization (MOPSO) and non-dominated sorting genetic algorithm II (NSGA-II). The performance of these methods with the use of multi-dimensi...

  11. Efficient solution method for optimal control of nuclear systems

    International Nuclear Information System (INIS)

    Naser, J.A.; Chambre, P.L.

    1981-01-01

    To improve the utilization of existing fuel sources, the use of optimization techniques is becoming more important. A technique for solving systems of coupled ordinary differential equations with initial, boundary, and/or intermediate conditions is given. This method has a number of inherent advantages over existing techniques as well as being efficient in terms of computer time and space requirements. An example of computing the optimal control for a spatially dependent reactor model with and without temperature feedback is given. 10 refs

  12. Optimal layout of radiological environment monitoring based on TOPSIS method

    International Nuclear Information System (INIS)

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision-making that can be applied to the comprehensive assessment of environmental quality. This paper adopts it to obtain the optimal layout of radiological environment monitoring. The method is shown to be correct, simple, convenient and practical, and it helps supervision departments to lay out radiological environment monitoring sites scientifically and reasonably. (authors)
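
    For readers unfamiliar with the technique, the following is a minimal sketch of the standard TOPSIS ranking procedure on a made-up decision matrix; the alternatives, criteria, weights, and benefit/cost labels are illustrative assumptions, not data from the paper.

```python
# Minimal TOPSIS sketch on a hypothetical decision matrix
# (rows = candidate monitoring layouts, columns = criteria).
import numpy as np

X = np.array([[7.0, 3.0, 120.0],
              [6.0, 5.0,  90.0],
              [8.0, 4.0, 150.0]])
weights = np.array([0.5, 0.3, 0.2])      # assumed criterion weights
benefit = np.array([True, True, False])  # third criterion treated as a cost

# 1. Vector-normalize and weight the decision matrix.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions per criterion.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Distances to ideal/anti-ideal and relative closeness.
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti,  axis=1)
closeness = d_minus / (d_plus + d_minus)

print("ranking (best first):", np.argsort(-closeness))
```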

  13. Primal-Dual Interior Point Multigrid Method for Topology Optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : topology optimization * multigrid methods * interior point method Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf

  14. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    However, no reliable method for quantitative analysis of clay minerals has been established so far. In this study, an attempt was made to propose an optimization method for the quantitative ... The mineralogical constitution of soil is rather complex. ... K2O, MgO, and TFe as variables for the calculation.

  15. A new hybrid optimization method inspired from swarm intelligence: Fuzzy adaptive swallow swarm optimization algorithm (FASSO)

    Directory of Open Access Journals (Sweden)

    Mehdi Neshat

    2015-11-01

    Full Text Available In this article, the objective was to present effective strategies for improving the Swallow Swarm Optimization (SSO) method. The SSO is one of the best optimization methods based on swarm intelligence, inspired by the intelligent behaviour of swallows, and it offers a relatively strong method for solving optimization problems. However, despite its many advantages, the SSO suffers from two shortcomings. Firstly, the particles' movement speed is not controlled satisfactorily during the search because of the lack of an inertia weight. Secondly, the acceleration coefficients are not able to strike a balance between the local and the global search because they are not sufficiently flexible in complex environments. Therefore, the SSO algorithm does not provide adequate results when searching on functions such as the Step or Quadric function. Hence, the fuzzy adaptive Swallow Swarm Optimization (FASSO) method was introduced to deal with these problems. High accuracy is obtained by using an adaptive inertia weight and by combining two fuzzy logic systems to calculate the acceleration coefficients accurately. A high speed of convergence, avoidance of local extrema, and a high level of error tolerance are the advantages of the proposed method. The FASSO was compared with eleven of the best PSO methods and with SSO on 18 benchmark functions. Significant results were obtained.

  16. Scintigraphic method for evaluating reductions in local blood volumes in human extremities

    DEFF Research Database (Denmark)

    Blønd, L; Madsen, Jan Lysgård

    2000-01-01

    We introduce a new method for evaluating reductions in local blood volumes in extremities, based on the combined use of autologous injection of 99mTc-radiolabelled erythrocytes and clamping of the limb blood flow by the use of a tourniquet. Twenty-two healthy male volunteers participated in the experiment. Evaluation of one versus two scintigraphic projections, trials for assessment of the reproducibility, a comparison of the scintigraphic method with a water-plethysmographic method and registration of the fractional reduction in blood volume caused by exsanguination as a result of simple elevation ... % in the lower limb experiment and 6% in the upper limb experiment. We found a significant relation (r = 0.42, p = 0.018) between the results obtained by the scintigraphic method and the plethysmographic method. In fractions, a mean reduction in blood volume of 0.49 ± 0.14 (2 SD) was found after 1 min of elevation ...

  17. Deterministic methods for multi-control fuel loading optimization

    Science.gov (United States)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  18. Invariant Imbedding T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    Science.gov (United States)

    Pelissier, C.; Clune, T.; Kuo, K. S.; Munchak, S. J.; Adams, I. S.

    2017-12-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors lacking any significant symmetry, for which volume integral methods such as those based on the Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM & IITM+SOV software to the community under an open source license.

  19. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea [School of Mechanical Engineering, Sungkyunkwan University, Seoul (Korea, Republic of)

    2015-03-15

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively.

  20. Shape optimization of high power centrifugal compressor using multi-objective optimal method

    International Nuclear Information System (INIS)

    Kang, Hyun Soo; Lee, Jeong Min; Kim, Youn Jea

    2015-01-01

    In this study, a method for optimal design of impeller and diffuser blades in the centrifugal compressor using response surface method (RSM) and multi-objective genetic algorithm (MOGA) was evaluated. A numerical simulation was conducted using ANSYS CFX with various values of impeller and diffuser parameters, which consist of leading edge (LE) angle, trailing edge (TE) angle, and blade thickness. Each of the parameters was divided into three levels. A total of 45 design points were planned using central composite design (CCD), which is one of the design of experiment (DOE) techniques. Response surfaces that were generated on the basis of the results of DOE were used to determine the optimal shape of impeller and diffuser blade. The entire process of optimization was conducted using ANSYS Design Xplorer (DX). Through the optimization, isentropic efficiency and pressure recovery coefficient, which are the main performance parameters of the centrifugal compressor, were increased by 0.3 and 5, respectively
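
    As a sketch of the workflow the record above outlines (a designed experiment followed by a response-surface fit and optimization), the code below builds a small face-centred central composite design for two factors, fits a quadratic response surface by least squares, and locates its optimum; the synthetic "simulation" function and the two-factor restriction are illustrative assumptions, not the ANSYS-based study itself.

```python
# Sketch: central composite design + quadratic response surface for two factors.
import numpy as np
from itertools import product

def simulate(le_angle, thickness):
    # Placeholder for the expensive CFD run (hypothetical response).
    return -(le_angle - 0.3)**2 - 2.0 * (thickness - 0.6)**2 + 1.0

# Face-centred CCD in coded units: factorial, axial, and centre points.
factorial = list(product([-1, 1], repeat=2))
axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]
design = np.array(factorial + axial + [(0, 0)], dtype=float)

y = np.array([simulate(*pt) for pt in design])

# Quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2.
x1, x2 = design[:, 0], design[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted surface on a grid and report its maximum.
g = np.linspace(-1, 1, 101)
G1, G2 = np.meshgrid(g, g)
Z = (coef[0] + coef[1]*G1 + coef[2]*G2 + coef[3]*G1*G2
     + coef[4]*G1**2 + coef[5]*G2**2)
i, j = np.unravel_index(np.argmax(Z), Z.shape)
print("predicted optimum (coded units):", G1[i, j], G2[i, j])
```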

  1. Perioperative Optimization of Geriatric Lower Extremity Bypass in the Era of Increased Performance Accountability.

    Science.gov (United States)

    Adkar, Shaunak S; Turley, Ryan S; Benrashid, Ehsan; Lagoo, Sandhya; Shortell, Cynthia K; Mureebe, Leila

    2017-01-01

    The initiation of bundled payment for care improvement by Centers for Medicare and Medicaid Services (CMS) has led to increased financial and performance accountability. As most vascular surgery patients are elderly and reimbursed via CMS, improving their outcomes will be critical for durable financial stability. As a first step in forming a multidisciplinary pathway for the elderly vascular patients, we sought to identify modifiable perioperative variables in geriatric patients undergoing lower extremity bypass (LEB). The 2011-2013 LEB-targeted American College of Surgeons National Surgical Quality Improvement Program database was used for this analysis (n = 5316). Patients were stratified by age <65 (n = 2171), 65-74 (n = 1858), 75-84 (n = 1190), and ≥85 (n = 394) years. Comparisons of patient- and procedure-related characteristics and 30-day postoperative outcomes stratified by age groups were performed with Pearson χ² tests for categorical variables and Wilcoxon rank-sum tests for continuous variables. During the study period, 5316 total patients were identified. There were 2171 patients aged <65 years, 1858 patients in the 65-74 years age group, 1190 patients in the 75-84 years age group, and 394 patients in the ≥85 years age group. Increasing age was associated with an increased frequency of cardiopulmonary disease (P < 0.001) and a decreased frequency of diabetes, tobacco use, and prior surgical intervention (P < 0.001). Only 79% and 68% of all patients were on antiplatelet and statin therapies, respectively. Critical limb ischemia occurred more frequently in older patients (P < 0.001). Length of hospital stay, transfusion requirements, and discharge to a skilled nursing facility increased with age (P < 0.001). Thirty-day amputation rates did not differ significantly with age (P = 0.12). Geriatric patients undergoing LEB have unique and potentially modifiable perioperative factors that may improve postoperative outcomes. These

  2. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing the PPF, and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been applied to optimize the reactor core loading pattern. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and the searcher agents are a collection of masses. In this work, in the first step, the GSA method is compared with other meta-heuristic algorithms on the Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm is applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimization with the following goals is considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF), and flattening the power density. It is notable that for the neutronic calculation, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
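
    The gravitational analogy described above translates into a fairly compact update rule; the sketch below implements a basic GSA on a generic test function, with illustrative parameter values, and is unrelated to the PARCS-based core-loading application.

```python
# Sketch of a basic Gravitational Search Algorithm (GSA) on a test function.
import numpy as np

def sphere(X):
    return np.sum(X**2, axis=1)

rng = np.random.default_rng(2)
n_agents, dim, iters = 20, 5, 300
G0, alpha, eps = 100.0, 20.0, 1e-12

X = rng.uniform(-5, 5, (n_agents, dim))
V = np.zeros_like(X)

for t in range(iters):
    fit = sphere(X)
    best, worst = fit.min(), fit.max()
    # Masses from normalized fitness (better agents are heavier).
    m = (fit - worst) / (best - worst + eps)
    M = m / (m.sum() + eps)
    G = G0 * np.exp(-alpha * t / iters)          # decaying gravitational constant

    # Only the Kbest heaviest agents attract, and Kbest shrinks over time.
    kbest = max(1, int(n_agents * (1 - t / iters)))
    heavy = np.argsort(-M)[:kbest]

    acc = np.zeros_like(X)
    for i in range(n_agents):
        for j in heavy:
            if j == i:
                continue
            diff = X[j] - X[i]
            dist = np.linalg.norm(diff) + eps
            acc[i] += rng.random() * G * M[j] * diff / dist

    V = rng.random(X.shape) * V + acc
    X = X + V

print("best value found:", sphere(X).min())
```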

  3. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
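
    The branch-and-bound idea sketched in the record, pruning regions whose rigorous lower bound already exceeds a known upper bound, can be illustrated with a toy one-dimensional global minimizer; the sketch below uses simple Lipschitz-based lower bounds rather than the rigorous interval or differential-algebraic enclosures of the actual method, and the test function and Lipschitz constant are illustrative assumptions.

```python
# Toy branch-and-bound global minimization in 1-D using Lipschitz lower bounds
# (stands in for the rigorous interval enclosures described in the record).
import heapq

def f(x):
    return (x - 0.7)**2 * (x + 1.2)**2 + 0.3 * x   # illustrative objective

L = 40.0          # assumed Lipschitz constant of f on [-2, 2]
a, b = -2.0, 2.0
tol = 1e-4

def lower_bound(lo, hi):
    # Valid lower bound of f on [lo, hi] for an L-Lipschitz function.
    return 0.5 * (f(lo) + f(hi)) - 0.5 * L * (hi - lo)

best_x, best_val = a, f(a)
heap = [(lower_bound(a, b), a, b)]
while heap:
    lb, lo, hi = heapq.heappop(heap)
    if lb > best_val - tol:        # cannot improve the incumbent: prune
        continue
    mid = 0.5 * (lo + hi)
    if f(mid) < best_val:          # update the incumbent upper bound
        best_x, best_val = mid, f(mid)
    for sub in ((lo, mid), (mid, hi)):
        sub_lb = lower_bound(*sub)
        if sub_lb < best_val - tol:
            heapq.heappush(heap, (sub_lb, *sub))

print("global minimum near x =", round(best_x, 4), "value", round(best_val, 5))
```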

  4. Regional frequency analysis of extreme rainfalls using partial L moments method

    Science.gov (United States)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach based on regional frequency analysis using L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed to determine the best-fit distribution. Comparison between the three approaches showed that the GLO and GEV distributions were identified as suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments would outperform the L and LH moments methods for the estimation of large return-period events.
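
    For orientation, the sketch below computes ordinary sample L-moments and L-moment ratios from a data vector via probability-weighted moments; it does not implement the partial (censored) PL-moments variant or the regional homogeneity and Z tests used in the study, and the synthetic rainfall series is an illustrative assumption.

```python
# Sketch: ordinary sample L-moments and L-moment ratios from one sample.
import numpy as np

def sample_l_moments(data):
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    # Unbiased probability-weighted moments b0..b3.
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2      # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(3)
annual_max_rain = rng.gumbel(loc=80.0, scale=25.0, size=60)   # synthetic series
print(sample_l_moments(annual_max_rain))
```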

  5. Discontinuous Galerkin method for computing gravitational waveforms from extreme mass ratio binaries

    International Nuclear Information System (INIS)

    Field, Scott E; Hesthaven, Jan S; Lau, Stephen R

    2009-01-01

    Gravitational wave emission from extreme mass ratio binaries (EMRBs) should be detectable by the joint NASA-ESA LISA project, spurring interest in analytical and numerical methods for investigating EMRBs. We describe a discontinuous Galerkin (dG) method for solving the distributionally forced 1+1 wave equations which arise when modeling EMRBs via the perturbation theory of Schwarzschild black holes. Despite the presence of jump discontinuities in the relevant polar and axial gravitational 'master functions', our dG method achieves global spectral accuracy, provided that we know the instantaneous position, velocity and acceleration of the small particle. Here these variables are known, since we assume that the particle follows a timelike geodesic of the Schwarzschild geometry. We document the results of several numerical experiments testing our method, and in our concluding section discuss the possible inclusion of gravitational self-force effects.

  6. METHOD OF CALCULATING THE OPTIMAL HEAT EMISSION GEOTHERMAL WELLS

    Directory of Open Access Journals (Sweden)

    A. I. Akaev

    2015-01-01

    Full Text Available This paper presents a simplified method of calculating the optimal regimes of free-flowing (fountain) and pumped exploitation of geothermal wells, reducing scaling and corrosion during operation. Comparative characteristics are given to quantify the heat obtained from the formation for these methods of operation under the same wellhead pressure. The problem is solved by a graphic-analytical method based on a balance of pressure in the well with the heat pump.

  7. Non-linear programming method in optimization of fast reactors

    International Nuclear Information System (INIS)

    Pavelesku, M.; Dumitresku, Kh.; Adam, S.

    1975-01-01

    The application of non-linear programming methods to the optimization of the nuclear materials distribution in a fast reactor is discussed. The programming task is formulated on the basis of the reactor calculation, which depends on the fuel distribution strategy. As an illustration of the application of this method, the solution of a simple example is given. The non-linear program is solved by means of the numerical method SUMT. (I.T.)

  8. Optimization of Inventories for Multiple Companies by Fuzzy Control Method

    OpenAIRE

    Kawase, Koichi; Konishi, Masami; Imai, Jun

    2008-01-01

    In this research, fuzzy control theory is applied to the inventory control of the supply chain between multiple companies. The proposed control method deals with the amount of inventories in the supply chain between multiple companies. Referring to past demand and tardiness, the inventory amounts of raw materials are determined by fuzzy inference. Appropriate inventory control becomes possible by optimizing the fuzzy control gain using the SA method. The variation of ...

  9. QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION.

    Science.gov (United States)

    Fan, Jianqing; Ke, Zheng Tracy; Liu, Han; Xia, Lucy

    We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method-named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)-for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interests. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tail distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
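
    As background for the Rayleigh-quotient objective discussed above, the sketch below maximizes a quotient x'Ax / x'Bx by solving the corresponding generalized eigenvalue problem; it illustrates only that building block, not QUADRO's sparse quadratic estimator, elliptical-model robustification, or augmented Lagrangian solver, and the matrices are random illustrative data.

```python
# Sketch: maximizing a Rayleigh quotient x'Ax / x'Bx via a generalized
# eigenproblem (background for QUADRO, not the QUADRO estimator itself).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
p = 10
M = rng.standard_normal((p, p))
A = M @ M.T                      # symmetric "signal" matrix (illustrative)
N = rng.standard_normal((p, p))
B = N @ N.T + p * np.eye(p)      # symmetric positive definite "variance" matrix

# The largest generalized eigenpair of (A, B) maximizes the quotient.
vals, vecs = eigh(A, B)
x_opt = vecs[:, -1]

quotient = (x_opt @ A @ x_opt) / (x_opt @ B @ x_opt)
print("maximal Rayleigh quotient:", quotient, "=", vals[-1])
```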

  10. Optimal volume of injectate for fluoroscopy-guided cervical interlaminar epidural injection in patients with neck and upper extremity pain.

    Science.gov (United States)

    Park, Jun Young; Kim, Doo Hwan; Lee, Kunhee; Choi, Seong-Soo; Leem, Jeong-Gil

    2016-10-01

    There is no study of the optimal volume of contrast medium to use in cervical interlaminar epidural injections (CIEIs) for appropriate spread to target lesions. To determine the optimal volume of contrast medium to use in CIEIs, we analyzed the records of 80 patients who had undergone CIEIs. Patients were divided into 3 groups according to the amount of contrast: 3, 4.5, and 6 mL. The spread of medium to the target level was analyzed. Numerical rating scale data were also analyzed. The dye had spread to a point above the target level in 15 (78.9%), 22 (84.6%), and 32 (91.4%) patients in groups 1 to 3, respectively. The dye reached both sides in 14 (73.7%), 18 (69.2%), and 23 (65.7%) patients, and reached the ventral epidural space in 15 (78.9%), 22 (84.6%), and 30 (85.7%) patients, respectively. There were no significant differences in contrast spread among the groups. There were no significant differences in the numerical rating scale scores among the groups during the 3 months. When performing CIEIs, 3 mL of medication is a sufficient volume for the treatment of neck and upper-extremity pain induced by lower cervical degenerative disease.

  11. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract the layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for extracting lines from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to be used while achieving the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the selection of the initial raster image skeleton filter was assessed. Article in Lithuanian

  12. On projection methods, convergence and robust formulations in topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    Mesh convergence and manufacturability of topology optimized designs have previously mainly been assured using density or sensitivity based filtering techniques. The drawback of these techniques has been gray transition regions between solid and void parts, but this problem has recently been alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence.
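
    The erosion, intermediate and dilation projections mentioned above are commonly realized with a smoothed Heaviside function applied to the filtered density field at different thresholds; the sketch below shows that projection step alone, with illustrative threshold and sharpness values, not the full robust optimization loop.

```python
# Sketch: threshold projections used in robust topology optimization.
import numpy as np

def heaviside_projection(rho_filtered, beta, eta):
    # Smoothed Heaviside projection of a filtered density field.
    num = np.tanh(beta * eta) + np.tanh(beta * (rho_filtered - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

rho_filtered = np.linspace(0.0, 1.0, 11)      # toy filtered densities
beta = 16.0                                    # projection sharpness (assumed)

eroded       = heaviside_projection(rho_filtered, beta, eta=0.7)
intermediate = heaviside_projection(rho_filtered, beta, eta=0.5)
dilated      = heaviside_projection(rho_filtered, beta, eta=0.3)

# The eroded design is everywhere "thinner" than the dilated one.
print(np.all(eroded <= dilated))
```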

  13. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-08

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.

  14. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-01

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N.Collier, A.-L.Haji-Ali, F. Nobile, and R. Tempone.
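
    To make the level-hierarchy idea above concrete, the sketch below runs a plain multilevel Monte Carlo estimator for E[X_T] of a geometric Brownian motion, using Euler time stepping with a geometric mesh hierarchy and coupled coarse/fine paths; the per-level sample counts and model parameters are illustrative assumptions rather than the optimized hierarchies discussed in the talk.

```python
# Sketch: plain MLMC estimate of E[X_T] for geometric Brownian motion,
# Euler scheme, geometric mesh hierarchy (illustrative sample counts).
import numpy as np

rng = np.random.default_rng(5)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
levels = 5
samples = [20000, 10000, 5000, 2500, 1250]   # assumed per-level sample sizes

def euler_paths(n_paths, n_steps, dW):
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]
    return x

estimate = 0.0
for level, n in enumerate(samples):
    n_fine = 2 ** (level + 1)                       # fine steps on this level
    dt_fine = T / n_fine
    dW_fine = rng.normal(0.0, np.sqrt(dt_fine), (n, n_fine))
    fine = euler_paths(n, n_fine, dW_fine)
    if level == 0:
        estimate += fine.mean()                     # coarsest level: plain MC
    else:
        # Coarse path driven by the summed fine increments (coupling).
        dW_coarse = dW_fine[:, 0::2] + dW_fine[:, 1::2]
        coarse = euler_paths(n, n_fine // 2, dW_coarse)
        estimate += (fine - coarse).mean()          # level correction

print("MLMC estimate:", estimate, "exact:", x0 * np.exp(mu * T))
```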

  15. Exergetic optimization of turbofan engine with genetic algorithm method

    Energy Technology Data Exchange (ETDEWEB)

    Turan, Onder [Anadolu University, School of Civil Aviation (Turkey)], e-mail: onderturan@anadolu.edu.tr

    2011-07-01

    With the growth of passenger numbers, emissions from the aeronautics sector are increasing and the industry is now working on improving engine efficiency to reduce fuel consumption. The aim of this study is to present the use of genetic algorithms, an optimization method based on biological principles, to optimize the exergetic performance of turbofan engines. The optimization was carried out using exergy efficiency, overall efficiency and specific thrust of the engine as evaluation criteria, with pressure ratio, bypass ratio, turbine inlet temperature and flight altitude as the design variables. Results showed that exergy efficiency can be maximized with higher altitudes, fan pressure ratio and turbine inlet temperature; the turbine inlet temperature is the most important parameter for increased exergy efficiency. This study demonstrated that genetic algorithms are effective in optimizing complex systems in a short time.
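
    Since the record's method of choice is a genetic algorithm, a compact real-coded GA on a stand-in objective is sketched below; the objective, encoding, and operator settings are illustrative assumptions and do not reproduce the turbofan exergy model.

```python
# Sketch: compact real-coded genetic algorithm on a stand-in objective
# (not the turbofan exergy model).
import numpy as np

rng = np.random.default_rng(6)
pop_size, dim, gens = 40, 4, 150
lower, upper = -5.0, 5.0

def fitness(P):
    # Stand-in objective to maximize (negative sphere function).
    return -np.sum(P**2, axis=1)

P = rng.uniform(lower, upper, (pop_size, dim))
for _ in range(gens):
    f = fitness(P)
    # Binary tournament selection of parents.
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = P[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Arithmetic (blend) crossover between randomly paired parents.
    mates = parents[rng.permutation(pop_size)]
    w = rng.random((pop_size, 1))
    children = w * parents + (1 - w) * mates
    # Gaussian mutation, clipped to the box bounds.
    children += rng.normal(0.0, 0.1, children.shape)
    children = np.clip(children, lower, upper)
    # Elitism: keep the best individual from the previous generation.
    children[0] = P[np.argmax(f)]
    P = children

print("best objective value:", fitness(P).max())
```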

  16. Coordinated Optimal Operation Method of the Regional Energy Internet

    Directory of Open Access Journals (Sweden)

    Rishang Long

    2017-05-01

    Full Text Available The development of the energy internet has become one of the key ways to solve the energy crisis. This paper studies the system architecture, energy flow characteristics and coordinated optimization method of the regional energy internet. Considering the heat-to-electric ratio of a combined cooling, heating and power unit, energy storage life and real-time electricity price, a double-layer optimal scheduling model is proposed, which includes economic and environmental benefit in the upper layer and energy efficiency in the lower layer. A particle swarm optimizer–individual variation ant colony optimization algorithm is used to improve computational efficiency and accuracy. Through calculation and simulation of the test system, an economically optimal dispatching scheme that realizes energy savings and environmental protection is obtained.

  17. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of the platoon should be shortened when crossing the street. The best way to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the platoon model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method finds the optimal parameters based on simulations and makes the spacing of the platoon shorter.

  18. Study of a pressure measurement method using laser ionization for extremely-high vacuum

    International Nuclear Information System (INIS)

    Kokubun, Kiyohide

    1991-01-01

    A method of measuring pressures in the range of extremely-high vacuum (XHV) using laser ionization has been studied. For this purpose, nonresonant multiphoton ionization of various kinds of gases has been studied, and highly-sensitive ion-detection systems and an extremely-high vacuum apparatus were fabricated. These results are presented in detail. Two ion-detection systems were fabricated and tested: one is based on the pulse-counting method, and the other utilizes an image-processing technique. The former is superior for detecting a few ions or less. The latter was verified to be able to count accurately the number of ions in the range of a few to several hundred. To obtain information on residual gases and to test our pressure measurement system, an extremely-high vacuum system was fabricated in our own fashion; it attained a pressure lower than 1 × 10⁻¹⁰ Pa, measured with an extractor gauge. The outgassing rate of this vacuum vessel was measured to be 7.8 × 10⁻¹¹ Pa·m³/s·m². The surface structures and the surface compositions of the raw material, the machined material, and the machined-and-outgassed material were studied by SEM and AES. Besides, the pumping characteristics and the residual gases of the XHV system were investigated in detail at each pumping stage. In the course of these studies, the method of pressure measurement using laser ionization has been verified to be very effective for measuring pressures in the XHV range. (J.P.N.)

  19. Surveillance of extreme hyperbilirubinaemia in Denmark. A method to identify the newborn infants

    DEFF Research Database (Denmark)

    Bjerre, J.V.; Petersen, Jes Reinholdt; Ebbesen, F.

    2008-01-01

    ... birth and the others after having been discharged. The maximum TSB was 485 (450-734) micromol/L (median [range]) and appeared latest amongst those infants admitted from home, but was not different from the maximum TSB of the nondischarged infants. Forty-three infants had symptoms of early-phase acute ... The observed incidence of extreme hyperbilirubinaemia is higher than previously reported in Denmark. This is mainly due to a very sensitive method of identifying the study group. Publication date: 2008/8

  20. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    Science.gov (United States)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in order to obtain an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, thus achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  1. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity.

    Science.gov (United States)

    Pellikaan, P; van der Krogt, M M; Carbone, V; Fluit, R; Vigneron, L M; Van Deun, J; Verdonschot, N; Koopman, H F J M

    2014-03-21

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based on the assumption of a relation between the bone geometry and the location of muscle attachment sites. The aim of this study was to evaluate the accuracy of this morphing based method. Two cadaver dissections were performed to measure the contours of 72 muscle attachment sites on the pelvis, femur, tibia and calcaneus. The geometry of the bones including the muscle attachment sites was morphed from one cadaver to the other and vice versa. For 69% of the muscle attachment sites, the mean distance between the measured and morphed muscle attachment sites was smaller than 15 mm. Furthermore, the muscle attachment sites that had relatively large distances had shown low sensitivity to these deviations. Therefore, this morphing based method is a promising tool for estimating subject-specific muscle attachment sites in the lower extremity in a fast and automated manner. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  3. Panorama parking assistant system with improved particle swarm optimization method

    Science.gov (United States)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.

  4. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
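
    The iterative water-filling algorithms mentioned above ultimately reduce, for a known channel, to the classical water-filling power allocation over the channel's eigenmodes; the sketch below shows that single-channel building block with an assumed random channel and power budget, not the large-system approximation machinery of the paper.

```python
# Sketch: water-filling power allocation over the eigenmodes of one MIMO channel
# (building block behind iterative water-filling, not the large-system method).
import numpy as np

rng = np.random.default_rng(7)
nt, nr, P_total, noise = 4, 4, 10.0, 1.0

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
gains = np.linalg.eigvalsh(H.conj().T @ H) / noise      # eigenmode SNR gains

# Bisection on the water level mu so that the allocated power sums to P_total.
lo, hi = 0.0, P_total + 1.0 / gains[gains > 0].min()
for _ in range(100):
    mu = 0.5 * (lo + hi)
    p = np.maximum(mu - 1.0 / gains, 0.0)
    lo, hi = (mu, hi) if p.sum() < P_total else (lo, mu)

p = np.maximum(mu - 1.0 / gains, 0.0)
capacity = np.sum(np.log2(1.0 + p * gains))
print("per-mode powers:", np.round(p, 3), "capacity [bit/s/Hz]:", round(capacity, 3))
```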

  5. Optimal control methods for rapidly time-varying Hamiltonians

    International Nuclear Information System (INIS)

    Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.

    2011-01-01

    In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.

  6. Control and Optimization Methods for Electric Smart Grids

    CERN Document Server

    Ilić, Marija

    2012-01-01

    Control and Optimization Methods for Electric Smart Grids brings together leading experts in power, control and communication systems, and consolidates some of the most promising recent research in smart grid modeling, control and optimization in hopes of laying the foundation for future advances in this critical field of study. The contents comprise eighteen essays addressing a wide variety of control-theoretic problems for tomorrow's power grid. Topics covered include: control architectures for power system networks with large-scale penetration of renewable energy and plug-in vehicles; optimal demand response; new modeling methods for electricity markets; control strategies for data centers; cyber-security; and wide-area monitoring and control using synchronized phasor measurements. The authors present theoretical results supported by illustrative examples and practical case studies, making the material comprehensible to a wide audience. The results reflect the exponential transformation that today's grid is going...

  7. Optimal overlapping of waveform relaxation method for linear differential equations

    International Nuclear Information System (INIS)

    Yamada, Susumu; Ozawa, Kazufumi

    2000-01-01

    The waveform relaxation (WR) method is extremely suitable for solving large systems of ordinary differential equations (ODEs) on parallel computers, but the convergence of the method is generally slow. In order to accelerate the convergence, methods that decouple the system into many subsystems, overlapping some of the components between adjacent subsystems, have been proposed. These methods generally converge much faster than the ones without overlapping, but the computational cost per iteration becomes larger due to the increased dimension of each subsystem. In this research, the convergence of the WR method for solving constant-coefficient linear ODEs is investigated, and a strategy to determine the number of overlapped components which minimizes the cost of the parallel computations is proposed. Numerical experiments on an SR2201 parallel computer show that the number of overlapped components estimated by the proposed strategy is reasonable. (author)
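    To make the basic idea concrete, a minimal Jacobi waveform relaxation iteration for a constant-coefficient linear system x' = Ax, split into two non-overlapping subsystems and integrated with forward Euler, might be sketched as follows; this is the baseline without overlap, and the matrix, time grid and sweep count are illustrative assumptions rather than the paper's setup.

        import numpy as np

        def jacobi_waveform_relaxation(A, x0, t_end=1.0, n_steps=200, n_sweeps=8, split=1):
            """Jacobi waveform relaxation for x' = A x with two subsystems: each sweep
            integrates a subsystem against the other block's waveform from the previous sweep."""
            dt = t_end / n_steps
            n = len(x0)
            # previous-sweep waveform, initialized as the constant initial condition
            X = np.tile(np.asarray(x0, float), (n_steps + 1, 1))
            blocks = [np.arange(0, split), np.arange(split, n)]
            for _ in range(n_sweeps):
                X_new = X.copy()
                for idx in blocks:
                    other = np.setdiff1d(np.arange(n), idx)
                    x = np.asarray(x0, float)[idx].copy()
                    for k in range(n_steps):
                        coupling = A[np.ix_(idx, other)] @ X[k, other]   # frozen waveform
                        x = x + dt * (A[np.ix_(idx, idx)] @ x + coupling)
                        X_new[k + 1, idx] = x
                X = X_new
            return X

        A = np.array([[-2.0, 1.0], [1.0, -3.0]])
        X = jacobi_waveform_relaxation(A, x0=[1.0, 0.0])
        print(X[-1])   # approximate solution at t_end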

  8. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed that automation promises greater efficiency, lower workloads, and fewer operator errors, thereby enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to the side effects of automation, which are referred to as Out-of-the-Loop (OOTL), and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation introduction that assures the best human operator performance, a quantitative method of optimizing the automation is proposed in this paper. In order to propose the optimization method for determining appropriate automation levels that enable the best human performance, the automation rate and ostracism rate, which are estimation methods that quantitatively analyze the positive and negative effects of automation, respectively, are integrated. The integration was conducted in order to derive the shortest working time by considering the concept of situation awareness recovery (SAR), which states that the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  9. New Methods of Treatment for Trophic Lesions of the Lower Extremities in Patients with Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    S.V. Bolgarska

    2016-08-01

    Full Text Available Introduction. Complications in the form of trophic ulcers of the lower extremities are one of the serious consequences of diabetes mellitus (DM), as they often lead to severe health and social problems, up to high amputations. The aim of the study was the development and clinical testing of a diagnostic and therapeutic algorithm for the comprehensive treatment of trophic ulcers of the lower extremities in patients with DM. Materials and methods. The results of treatment of 63 patients (42 women and 21 men) with neuropathic trophic lesions of the lower limbs or postoperative defects at the granulation stage are presented. Of them, 32 patients (study group) received local intradermal injections of hyaluronic acid and sodium succinate preparations (Lacerta) into the extracellular matrix. Patients of the comparison group were treated with hydrocolloid materials (hydrocoll, granuflex). The level of glycated hemoglobin, the degree of circulatory disorders (using the ankle-brachial index, before and after a load test) and neuropathic disorders (on a scale for the evaluation of neurologic dysfunctions — NDS) were assessed in patients. Results. The results of treatment were assessed by the rate of defect healing over 2 or more months. In the study group, 24 patients showed complete healing of the defect (75 %), while in the control group healing was observed in 16 patients (51.6 %). During the year, relapses occurred in 22.2 % of cases in the study group, and in 46.9 % in the control one (p < 0.05). Conclusion. The developed method of treatment using Lacerta made it possible to increase the effectiveness of therapy, to speed up recovery, and to decrease the number of complications in patients with DM and trophic ulcers of the lower extremities.

  10. Clustering methods for the optimization of atomic cluster structure

    Science.gov (United States)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large-scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general-purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in high-dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge number of local searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
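    The core idea of saving local searches can be illustrated with a toy multistart scheme that skips the expensive local search whenever a start point falls close to an already-found minimizer; this is only a rough analogue (in the raw variable space, not the authors' reduced feature space), and the test function, radius and start count are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def clustered_multistart(f, bounds, n_starts=200, radius=0.5, seed=0):
            """Multistart local optimization that skips the local search when a start point
            lies within `radius` of an already-found minimizer (toy sketch of the idea)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            minima, skipped = [], 0
            for _ in range(n_starts):
                x0 = rng.uniform(lo, hi)
                if any(np.linalg.norm(x0 - m) < radius for m, _ in minima):
                    skipped += 1          # assumed to lie in an already-explored basin
                    continue
                res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
                if not any(np.linalg.norm(res.x - m) < 1e-3 for m, _ in minima):
                    minima.append((res.x, res.fun))
            return minima, skipped

        # Hypothetical multimodal test function (not a cluster potential).
        f = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)
        minima, skipped = clustered_multistart(f, bounds=[(-3, 3), (-3, 3)])
        print(len(minima), "distinct minima found,", skipped, "local searches skipped")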

  11. Response surface method to optimize the low cost medium for ...

    African Journals Online (AJOL)

    A protease-producing Bacillus sp. GA CAS10 was isolated from the ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. The Plackett-Burman method was applied to identify ...

  12. Optimization Methods in Operations Research and Systems Analysis

    Indian Academy of Sciences (India)

    Optimization Methods in Operations Research and Systems Analysis. V G Tikekar. Book Review. Resonance – Journal of Science Education, Volume 2, Issue 6, June 1997, pp. 91-92.

  13. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  14. An Optimal Calibration Method for a MEMS Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Bin Fang

    2014-02-01

    Full Text Available An optimal calibration method for a micro-electro-mechanical inertial measurement unit (MIMU) is presented in this paper. The accuracy of the MIMU is highly dependent on calibration to remove the deterministic systematic errors; the measurements also contain random errors. The overlapping Allan variance is applied to characterize the types of random error terms in the measurements. A calibration model that includes package misalignment error, sensor-to-sensor misalignment error, bias, and scale factor is built. A new concept of a calibration method, comprising a calibration scheme and a calibration algorithm, is proposed. The calibration scheme is designed by D-optimal design, and the calibration algorithm is derived from a Kalman filter. In addition, thermal calibration is investigated, as the bias and scale factor vary with temperature. The simulations and real tests verify the effectiveness of the proposed calibration method and show that it is better than the traditional method.
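    As a side illustration, the standard overlapping Allan variance of a rate signal (computed from the integrated signal) can be sketched as follows; the synthetic white-noise input, sample rate and cluster sizes are assumptions, not MIMU data from the paper.

        import numpy as np

        def overlapping_allan_variance(y, fs, m_list):
            """Overlapping Allan variance of a rate signal y sampled at fs (Hz),
            evaluated for the averaging-cluster sizes in m_list."""
            tau0 = 1.0 / fs
            theta = np.cumsum(y) * tau0                    # integrated signal
            N = len(theta)
            taus, avars = [], []
            for m in m_list:
                if 2 * m >= N:
                    break
                d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
                avar = np.sum(d ** 2) / (2.0 * m ** 2 * tau0 ** 2 * (N - 2 * m))
                taus.append(m * tau0)
                avars.append(avar)
            return np.array(taus), np.array(avars)

        # Synthetic example: pure white noise gives a -1/2 slope of Allan deviation vs tau (log-log).
        rng = np.random.default_rng(0)
        y = rng.normal(0.0, 0.01, size=100_000)
        taus, avars = overlapping_allan_variance(y, fs=100.0, m_list=[1, 2, 4, 8, 16, 32, 64, 128])
        print(np.sqrt(avars))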

  15. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  16. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by Solar Dynamics Observatory/Atmospheric Imaging Assembly on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s^-1.

  17. Exergetic optimization of a thermoacoustic engine using the particle swarm optimization method

    International Nuclear Information System (INIS)

    Chaitou, Hussein; Nika, Philippe

    2012-01-01

    Highlights: ► Optimization of a thermoacoustic engine using the particle swarm optimization method. ► Exergetic efficiency, acoustic power and their product are the optimized functions. ► PSO method is used successfully for the first time in the TA research. ► The powerful PSO tool is advised to be more involved in the TA research and design. ► EE times AP optimized function is highly recommended to design any new TA devices. - Abstract: Thermoacoustic engines convert heat energy into acoustic energy. Then, the acoustic energy can be used to pump heat or to generate electricity. It is well-known that the acoustic energy and therefore the exergetic efficiency depend on parameters such as the stack’s hydraulic radius, the stack’s position in the resonator and the traveling–standing-wave ratio. In this paper, these three parameters are investigated in order to study and analyze the best value of the produced acoustic energy, the exergetic efficiency and the product of the acoustic energy by the exergetic efficiency of a thermoacoustic engine with a parallel-plate stack. The dimensionless expressions of the thermoacoustic equations are derived and calculated. Then, the Particle Swarm Optimization method (PSO) is introduced and used for the first time in the thermoacoustic research. The use of the PSO method and the optimization of the acoustic energy multiplied by the exergetic efficiency are novel contributions to this domain of research. This paper discusses some significant conclusions which are useful for the design of new thermoacoustic engines.

  18. Comparison of different statistical downscaling methods to estimate changes in hourly extreme precipitation using RCM projections from ENSEMBLES

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Gregersen, Ida Bülow; Rosbjerg, Dan

    2015-01-01

    change method for extreme events, a weather generator combined with a disaggregation method and a climate analogue method. All three methods rely on different assumptions and use different outputs from the regional climate models (RCMs). The results of the three methods point towards an increase...... in extreme precipitation but the magnitude of the change varies depending on the RCM used and the spatial location. In general, a similar mean change is obtained for the three methods. This adds confidence in the results as each method uses different information from the RCMs. The results of this study...

  19. Superalloy design - A Monte Carlo constrained optimization method

    CSIR Research Space (South Africa)

    Stander, CM

    1996-01-01

    Full Text Available optimization method C. M. Stander Division of Materials Science and Technology, CSIR, PO Box 395, Pretoria, Republic of South Africa Received 14 March 1996; accepted 24 June 1996 A method, based on Monte Carlo constrained... successful hit, i.e. when L_low < LMP < L_high, and for all the properties, P_j,low < P_j < P_j,high. If successful this hit falls within the ROA. Repeat steps 4 and 5 to find at least ten (or more) successful hits with values...

  20. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Mihaela STET

    2016-12-01

    Full Text Available The paper deals with the problem of logistics costs, highlighting methods for the estimation and determination of the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  1. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

    The paper deals with the problem of logistics costs, highlighting methods for the estimation and determination of the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  2. Several Guaranteed Descent Conjugate Gradient Methods for Unconstrained Optimization

    Directory of Open Access Journals (Sweden)

    San-Yang Liu

    2014-01-01

    Full Text Available This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition $g_k^T d_k \le -\left(1-\frac{1}{4\theta_k}\right)\|g_k\|^2$ (with $\theta_k > 1/4$) and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
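    For orientation only, a generic nonlinear conjugate gradient iteration (Polak-Ribiere+ with Armijo backtracking, with a reset to steepest descent whenever the direction fails to be a descent direction) is sketched below on the Rosenbrock function; this is not the specific guaranteed-descent family or the weak Wolfe line search analyzed in the paper.

        import numpy as np

        def rosenbrock(x):
            return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

        def rosenbrock_grad(x):
            g = np.zeros_like(x)
            g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
            g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
            return g

        def cg_pr_plus(f, grad, x0, max_iter=2000, tol=1e-8):
            """Generic Polak-Ribiere+ nonlinear CG with Armijo backtracking;
            the direction is reset to -g whenever it is not a descent direction."""
            x = np.asarray(x0, float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d >= 0:                # enforce g_k^T d_k < 0
                    d = -g
                t = 1.0
                while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # Armijo backtracking
                    t *= 0.5
                x_new = x + t * d
                g_new = grad(x_new)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # PR+ coefficient
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        print(cg_pr_plus(rosenbrock, rosenbrock_grad, np.zeros(5)))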

  3. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  4. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    OpenAIRE

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on stiffness characteristics is proposed in the present study. It addresses the large time expense of static aeroelasticity analysis based on the traditional time-domain aeroelasticity method. The elastic axis location and the torsional stiffness are discussed first. Both the torsional stiffness and the distance between the stiffness center and the aerodynamic center have a direct impact on the divergence velocity. The divergence velocity can be adjusted by changing the cor...

  5. Cocoa agroforestry is less resilient to sub-optimal and extreme climate than cocoa in full sun.

    Science.gov (United States)

    Abdulai, Issaka; Vaast, Philippe; Hoffmann, Munir P; Asare, Richard; Jassogne, Laurence; Van Asten, Piet; Rötter, Reimund P; Graefe, Sophie

    2018-01-01

    Cocoa agroforestry is perceived as a potential adaptation strategy to sub-optimal or adverse environmental conditions such as drought. We tested this strategy over wet, dry and extremely dry periods, comparing cocoa in full sun with agroforestry systems shaded by (i) a leguminous tree species, Albizia ferruginea, and (ii) Antiaris toxicaria, the most common shade tree species in the region. We monitored micro-climate, sap flux density, throughfall, and soil water content from November 2014 to March 2016 at the forest-savannah transition zone of Ghana, with the climate and drought events during the study period serving as a proxy for projected future climatic conditions in marginal cocoa cultivation areas of West Africa. The combined transpiration of cocoa and shade trees was significantly higher than that of cocoa in full sun during the wet and dry periods. During the wet period, the transpiration rate of cocoa plants shaded by A. ferruginea was significantly lower than that of cocoa under A. toxicaria and in full sun. During the extreme drought of 2015/16, all cocoa plants under A. ferruginea died. Cocoa plants under A. toxicaria suffered 77% mortality and massive stress with significantly reduced sap flux density of 115 g cm^-2 day^-1, whereas cocoa in full sun maintained a higher sap flux density of 170 g cm^-2 day^-1. Moreover, cocoa sap flux recovery after the extreme drought was significantly higher in full sun (163 g cm^-2 day^-1) than under A. toxicaria (37 g cm^-2 day^-1). Soil water content in full sun was higher than in the shaded systems, suggesting that cocoa mortality in the shaded systems was linked to strong competition for soil water. The present results have major implications for cocoa cultivation under climate change. Promoting shade cocoa agroforestry as a drought-resilient system, especially under climate change, needs to be carefully reconsidered, as shade tree species such as the recommended leguminous A. ferruginea constitute a major risk to cocoa functioning under

  6. Two optimal control methods for PWR core control

    International Nuclear Information System (INIS)

    Karppinen, J.; Blomsnes, B.; Versluis, R.M.

    1976-01-01

    The Multistage Mathematical Programming (MMP) and State Variable Feedback (SVF) methods for PWR core control are presented in this paper. The MMP method is primarily intended for optimization of the core behaviour with respect to xenon induced power distribution effects in load cycle operation. The SVF method is most suited for xenon oscillation damping in situations where the core load is unpredictable or expected to stay constant. Results from simulation studies in which the two methods have been applied for control of simple PWR core models are presented. (orig./RW) [de

  7. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against a quadratic programming approach based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieved a plan quality similar to IP, with a slightly smaller total objective function value. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
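    As a minimal sketch of the kind of problem involved (not the paper's solver, and without its empirical parameter optimization), an ADMM iteration for a least-square objective with non-negative fluence constraints can be written as follows; the "dose-influence" matrix here is a random placeholder and the penalty parameter rho is fixed.

        import numpy as np

        def admm_nnls(A, b, rho=1.0, n_iter=300):
            """Minimize 0.5*||A x - b||^2 subject to x >= 0 via ADMM:
            x-update is an unconstrained least-squares solve, z-update is projection onto x >= 0."""
            m, n = A.shape
            AtA, Atb = A.T @ A, A.T @ b
            L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse each iteration
            x = z = u = np.zeros(n)
            for _ in range(n_iter):
                rhs = Atb + rho * (z - u)
                x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
                z = np.maximum(0.0, x + u)                          # z-update: projection
                u = u + x - z                                       # dual update
            return z

        # Hypothetical toy stand-in for a dose-influence matrix and prescription vector.
        rng = np.random.default_rng(1)
        A = rng.random((50, 20))
        b = rng.random(50)
        x = admm_nnls(A, b)
        print(x.min(), np.linalg.norm(A @ x - b))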

  8. An hp symplectic pseudospectral method for nonlinear optimal control

    Science.gov (United States)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on the one hand, exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions within a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  9. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    Science.gov (United States)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve the control system performance. PID tuning methods fall into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.

  10. Optimized iterative decoding method for TPC coded CPM

    Science.gov (United States)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper mainly investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. The experiments show that our method is effective in improving the convergence performance.

  11. Method of optimization of the natural gas refining process

    Energy Technology Data Exchange (ETDEWEB)

    Sadykh-Zade, E.S.; Bagirov, A.A.; Mardakhayev, I.M.; Razamat, M.S.; Tagiyev, V.G.

    1980-01-01

    The SATUM (automatic control system of technical operations) system introduced at the Shatlyk field should assure good quality of gas refining. In order to optimize the natural gas refining processes, an experimental-analytical method is used in compiling the mathematical descriptions. The program, written in Fortran, gives, in addition to the parameters of optimal conditions, information on the yield of condensate and water, the concentration and consumption of DEG, and the composition and characteristics of the gas and condensate. The algorithm for calculating optimum engineering conditions of gas refining is proposed to be used in ''advice'' mode, and also for monitoring the progress of the gas refining process.

  12. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of the transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurements on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in the hidden layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  13. A Survey of Methods for Gas-Lift Optimization

    Directory of Open Access Journals (Sweden)

    Kashif Rashid

    2012-01-01

    Full Text Available This paper presents a survey of methods and techniques developed for the solution of the continuous gas-lift optimization problem over the last two decades. These range from isolated single-well analysis all the way to real-time multivariate optimization schemes encompassing all wells in a field. While some methods are clearly limited due to their neglect of treating the effects of inter-dependent wells with common flow lines, other methods are limited due to the efficacy and quality of the solution obtained when dealing with large-scale networks comprising hundreds of difficult to produce wells. The aim of this paper is to provide an insight into the approaches developed and to highlight the challenges that remain.

  14. Kinoform design with an optimal-rotation-angle method.

    Science.gov (United States)

    Bengtsson, J

    1994-10-10

    Kinoforms (i.e., computer-generated phase holograms) are designed with a new algorithm, the optimal-rotation-angle method, in the paraxial domain. This is a direct Fourier method (i.e., no inverse transform is performed) in which the height of the kinoform relief at each discrete point is chosen so that the diffraction efficiency is increased. The optimal-rotation-angle algorithm has a straightforward geometrical interpretation. It yields excellent results close to, or better than, those obtained with other state-of-the-art methods. The optimal-rotation-angle algorithm can easily be modified to take different constraints into account; as an example, phase-swing-restricted kinoforms, which distribute the light into a number of equally bright spots (so-called fan-outs), were designed. The phase-swing restriction lowers the efficiency, but the uniformity can still be made almost perfect.

  15. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Full Text Available Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since an abrupt source introduction yields intensive numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposite demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The proposed optimized source models are implemented and tested within an in-house FDTD simulation environment.
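    As an illustration of the general idea (not the paper's optimized source model), one common shaping function is a raised-cosine turn-on applied to a sinusoidal source; the ramp length, time step and frequency below are made-up values.

        import numpy as np

        def ramped_sine_source(n_steps, dt, freq, n_ramp):
            """Sinusoidal source amplitude with a raised-cosine turn-on over n_ramp steps,
            avoiding the abrupt switch-on that excites spurious broadband transients."""
            t = np.arange(n_steps) * dt
            envelope = np.ones(n_steps)
            envelope[:n_ramp] = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
            return envelope * np.sin(2.0 * np.pi * freq * t)

        # Illustrative numbers: 10 GHz source, dt = 1 ps, 200-step ramp.
        src = ramped_sine_source(n_steps=2000, dt=1e-12, freq=10e9, n_ramp=200)
        print(src[:5], src.max())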

  16. The construction of optimal stated choice experiments theory and methods

    CERN Document Server

    Street, Deborah J

    2007-01-01

    The most comprehensive and applied discussion of stated choice experiment constructions available The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...

  17. Optimal treatment cost allocation methods in pollution control

    International Nuclear Information System (INIS)

    Chen Wenying; Fang Dong; Xue Dazhi

    1999-01-01

    Total emission control is an effective pollution control strategy. However, Chinese application of total emission control lacks reasonable and fair methods for optimal treatment cost allocation, a critical issue in total emission control. The author considers four approaches to allocate treatment costs. The first approach is to set up a multiple-objective planning model and to solve the model using the shortest distance ideal point method. The second approach is to define degree of satisfaction for cost allocation results for each polluter and to establish a method based on this concept. The third is to apply bargaining and arbitration theory to develop a model. The fourth is to establish a cooperative N-person game model which can be solved using the Shapley value method, the core method, the Cost Gap Allocation method or the Minimum Costs-Remaining Savings method. These approaches are compared using a practicable case study

  18. Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting

    International Nuclear Information System (INIS)

    Tang, Pingzhou; Chen, Di; Hou, Yushuo

    2016-01-01

    As the world's energy problem becomes more severe day by day, photovoltaic power generation has undoubtedly opened a new door: it can provide an effective solution to this severe energy problem and meet human energy needs if it is applied in real life. Similar to wind power generation, photovoltaic power generation is uncertain. Therefore, forecasting photovoltaic power generation is crucial. In this paper, the entropy method and the extreme learning machine (ELM) method were combined to forecast short-term photovoltaic power generation. First, the entropy method is used to process the initial data, the network is trained on the processed data, and electricity generation is then forecast. Finally, the results obtained with the entropy method combined with ELM were compared with those generated by the generalized regression neural network (GRNN) and radial basis function neural network (RBF) methods. We found that the entropy method combined with the ELM method achieves higher accuracy and faster calculation.
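    To illustrate the ELM component only (the entropy-based preprocessing of the paper is not reproduced here), a minimal extreme learning machine regressor with a random hidden layer and least-squares output weights is sketched below on synthetic data; the feature set and hidden-layer size are assumptions.

        import numpy as np

        class ELMRegressor:
            """Minimal extreme learning machine: fixed random hidden layer, output weights by least squares."""
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                n_features = X.shape[1]
                self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random input weights
                self.b = self.rng.normal(size=self.n_hidden)                # random biases
                H = np.tanh(X @ self.W + self.b)                            # hidden-layer outputs
                self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)           # output weights, no iteration
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # Synthetic stand-in for (weather features -> PV output); not real data.
        rng = np.random.default_rng(1)
        X = rng.random((500, 3))
        y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)
        model = ELMRegressor(n_hidden=100).fit(X[:400], y[:400])
        print(np.mean((model.predict(X[400:]) - y[400:]) ** 2))   # test mean squared error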

  19. Antarctic Temperature Extremes from MODIS Land Surface Temperatures: New Processing Methods Reveal Data Quality Puzzles

    Science.gov (United States)

    Grant, G.; Gallaher, D. W.

    2017-12-01

    New methods for processing massive remotely sensed datasets are used to evaluate Antarctic land surface temperature (LST) extremes. Data from the MODIS/Terra sensor (Collection 6) provides a twice-daily look at Antarctic LSTs over a 17 year period, at a higher spatiotemporal resolution than past studies. Using a data condensation process that creates databases of anomalous values, our processes create statistical images of Antarctic LSTs. In general, the results find few significant trends in extremes; however, they do reveal a puzzling picture of inconsistent cloud detection and possible systemic errors, perhaps due to viewing geometry. Cloud discrimination shows a distinct jump in clear-sky detections starting in 2011, and LSTs around the South Pole exhibit a circular cooling pattern, which may also be related to cloud contamination. Possible root causes are discussed. Ongoing investigations seek to determine whether the results are a natural phenomenon or, as seems likely, the results of sensor degradation or processing artefacts. If the unusual LST patterns or cloud detection discontinuities are natural, they point to new, interesting processes on the Antarctic continent. If the data artefacts are artificial, MODIS LST users should be alerted to the potential issues.

  20. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements to buildings. For instance, the building owner may be more concerned about the cost of constructing the building, rather than the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated, intended for representing a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with simulation software for buildings. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours where overheating occurs, can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements to the decision variables, as well as to the performance of the building. Performance measures can for instance be required to assume their minimum or maximum value, they can be subjected to upper or

  1. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and fast state forecasts with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, artificial neural networks (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast future system states from the historical system states. The testing results show that the proposed approach is effective and accurate.

  2. First-principle optimal local pseudopotentials construction via optimized effective potential method

    International Nuclear Information System (INIS)

    Mi, Wenhui; Zhang, Shoutao; Wang, Yanchao; Ma, Yanming; Miao, Maosheng

    2016-01-01

    The local pseudopotential (LPP) is an important component of orbital-free density functional theory, a promising large-scale simulation method that can maintain information on a material's electron state. The LPP is usually extracted from solid-state density functional theory calculations, which makes it difficult to assess its transferability to cases involving very different chemical environments. Here, we reveal a fundamental relation between the first-principles norm-conserving pseudopotential (NCPP) and the LPP. On the basis of this relationship, we demonstrate that the LPP can be constructed optimally from the NCPP for a large number of elements using the optimized effective potential method. In particular, our method provides a unified scheme for constructing and assessing the LPP within the framework of first-principles pseudopotentials. In practice, we find that the existence of a valid LPP with high transferability may depend strongly on the element.

  3. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, which we call PGWO. The PGWO algorithm significantly improves on the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; hence, PGWO can be applied to solving clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions and then used for data clustering on nine data sets. The benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm compared to other state-of-the-art evolutionary algorithms.
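    A rough sketch of the hybrid idea is given below: a basic GWO position update followed by a Powell polish of the best solution (via SciPy). How and when PGWO actually invokes Powell is specific to the paper; the coupling shown here, the test function and the parameter values are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def gwo(objective, lower, upper, n_wolves=20, n_iters=100, seed=0):
            """Basic grey wolf optimizer: wolves move toward the three best (alpha, beta, delta) positions."""
            rng = np.random.default_rng(seed)
            dim = len(lower)
            X = rng.uniform(lower, upper, size=(n_wolves, dim))
            for it in range(n_iters):
                fitness = np.array([objective(x) for x in X])
                alpha, beta, delta = X[np.argsort(fitness)[:3]]
                a = 2.0 - 2.0 * it / n_iters                 # linearly decreasing coefficient
                for i in range(n_wolves):
                    new = np.zeros(dim)
                    for leader in (alpha, beta, delta):
                        A = 2.0 * a * rng.random(dim) - a
                        C = 2.0 * rng.random(dim)
                        D = np.abs(C * leader - X[i])
                        new += (leader - A * D) / 3.0
                    X[i] = np.clip(new, lower, upper)
            best = min(X, key=objective)
            # Powell polish of the GWO result (illustrating the hybrid idea; details are assumptions).
            res = minimize(objective, best, method="Powell")
            return res.x, res.fun

        sphere = lambda x: float(np.sum(x ** 2))
        x, f = gwo(sphere, lower=np.full(5, -10.0), upper=np.full(5, 10.0))
        print(x, f)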

  4. Optimal interpolation method for intercomparison of atmospheric measurements.

    Science.gov (United States)

    Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno

    2006-04-01

    Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.

  5. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors)

  6. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).

  7. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can also be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy for discretizing the variables and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted on analytical functions whose minima are known and which are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
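    A minimal simulated annealing loop for continuous variables in a hyper-rectangular domain, with geometric cooling and Metropolis acceptance, is sketched below; this is a generic illustration, not the adapted algorithm, discretization strategy or stopping criteria of the thesis, and the step size, schedule and cost function are assumptions.

        import numpy as np

        def simulated_annealing(objective, lower, upper, n_iters=20000,
                                T0=1.0, cooling=0.999, step=0.1, seed=0):
            """Minimal simulated annealing over a hyper-rectangular domain with geometric cooling."""
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            x = rng.uniform(lower, upper)
            fx = objective(x)
            best_x, best_f = x.copy(), fx
            T = T0
            for _ in range(n_iters):
                cand = np.clip(x + step * (upper - lower) * rng.normal(size=x.size), lower, upper)
                fc = objective(cand)
                if fc < fx or rng.random() < np.exp(-(fc - fx) / T):   # Metropolis acceptance
                    x, fx = cand, fc
                    if fx < best_f:
                        best_x, best_f = x.copy(), fx
                T *= cooling
            return best_x, best_f

        # Hypothetical stand-in for a circuit cost function of two component values.
        cost = lambda p: (p[0] - 1.0) ** 2 + 10.0 * np.sin(5.0 * p[1]) ** 2 + p[1] ** 2
        x, f = simulated_annealing(cost, lower=[-5.0, -5.0], upper=[5.0, 5.0])
        print(x, f)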

  8. A Method for Correcting IMRT Optimizer Heterogeneity Dose Calculations

    International Nuclear Information System (INIS)

    Zacarias, Albert S.; Brown, Mellonie F.; Mills, Michael D.

    2010-01-01

    Radiation therapy treatment planning for volumes close to the patient's surface, in lung tissue and in the head and neck region, can be challenging for the planning system optimizer because of the complexity of the treatment and protected volumes, as well as striking heterogeneity corrections. Because it is often the goal of the planner to produce an isodose plan with uniform dose throughout the planning target volume (PTV), there is a need for improved planning optimization procedures for PTVs located in these anatomical regions. To illustrate such an improved procedure, we present a treatment planning case of a patient with a lung lesion located in the posterior right lung. The intensity-modulated radiation therapy (IMRT) plan generated using standard optimization procedures produced substantial dose nonuniformity across the tumor caused by the effect of lung tissue surrounding the tumor. We demonstrate a novel iterative method of dose correction performed on the initial IMRT plan to produce a more uniform dose distribution within the PTV. This optimization method corrected for the dose missing on the periphery of the PTV and reduced the maximum dose on the PTV to 106% from 120% on the representative IMRT plan.

  9. Design of large Francis turbine using optimal methods

    Science.gov (United States)

    Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.

    2012-11-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has especially been involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20x750 MW - 98.7m to 127m) and Xiangjiaba (China - 8x812 MW - 82.5m to 113.6m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as the maximum feedback from Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used, thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  10. Design of large Francis turbine using optimal methods

    International Nuclear Information System (INIS)

    Flores, E; Bornard, L; Tomas, L; Couston, M; Liu, J

    2012-01-01

    Among a high number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has especially been involved in the development and equipment of the largest power plants in the world: Three Gorges (China - 32×767 MW - 61 to 113 m), Itaipu (Brazil - 20x750 MW - 98.7m to 127m) and Xiangjiaba (China - 8x812 MW - 82.5m to 113.6m - in erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computation methods and the latest technologies in model testing, as well as the maximum feedback from Jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, an automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used, thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage allows very high levels of performance to be reached, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.

  11. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    V. Panteleev Andrei

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods for constrained global optimization - "Big Bang - Big Crunch", "Fireworks Algorithm", and "Grenade Explosion Method" - in estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior: the parameter values are obtained by minimizing a criterion that describes the total squared error between the state vector coordinates and the precisely observed values at different points in time. Parallelepiped-type restrictions are imposed on the parameter values. The metaheuristic methods of constrained global optimization used for solving these problems do not guarantee the result, but allow a solution of rather good quality to be obtained in an acceptable amount of time. The algorithm for applying the metaheuristic methods is given. Alongside the obvious methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter estimation problem are given; they differ in their mathematical models. In the first example, a linear mathematical model describes the change of the chemical reaction parameters, and in the second one, a nonlinear mathematical model describes predator-prey dynamics, which characterize the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with some recommendations on how to choose the method parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The estimated parameters differ only slightly from the best known solutions, which were obtained by other means. To refine the results one should apply hybrid schemes that combine classical optimization methods of zero, first and second orders and

  12. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.

  13. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimization of counter-flow cooling tower performance with sufficient accuracy.
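
    The signal-to-noise statistic at the heart of a Taguchi analysis is straightforward to compute directly. Below is a minimal sketch of the larger-the-better S/N ratio applied to made-up effectiveness measurements; the trial labels and values are assumptions for illustration, not data from the study.

        import numpy as np

        # Hypothetical cooling-tower effectiveness measurements (three replicates per trial).
        trials = {
            "A1B1C1": [0.62, 0.60, 0.63],
            "A1B2C2": [0.71, 0.69, 0.70],
            "A2B1C2": [0.66, 0.68, 0.67],
        }

        def sn_larger_is_better(y):
            """Taguchi larger-the-better signal-to-noise ratio in dB."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y ** 2))

        for name, y in trials.items():
            print(f"{name}: mean effectiveness = {np.mean(y):.3f}, "
                  f"S/N = {sn_larger_is_better(y):.2f} dB")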

  14. A multidimensional pseudospectral method for optimal control of quantum ensembles

    International Nuclear Information System (INIS)

    Ruths, Justin; Li, Jr-Shin

    2011-01-01

    In our previous work, we have shown that the pseudospectral method is an effective and flexible computation scheme for deriving pulses for optimal control of quantum systems. In practice, however, quantum systems often exhibit variation in the parameters that characterize the system dynamics. This leads us to consider the control of an ensemble (or continuum) of quantum systems indexed by the system parameters that show variation. We cast the design of pulses as an optimal ensemble control problem and demonstrate a multidimensional pseudospectral method with several challenging examples of both closed and open quantum systems from nuclear magnetic resonance spectroscopy in liquid. We give particular attention to the ability to derive experimentally viable pulses of minimum energy or duration.

  15. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may pose this ability. In order to evaluate if the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the used energy technologies. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary
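
    To make the linear-programming variant concrete, the sketch below dispatches a CHP unit and a heat pump against a single-hour heat demand at minimum cost using scipy.optimize.linprog. The costs, capacities and demand figure are invented for illustration and do not come from the paper's model.

        import numpy as np
        from scipy.optimize import linprog

        # Toy single-hour dispatch: decision variables x = [q_chp, q_hp] (heat in MWh).
        # Illustrative marginal costs (EUR per MWh heat) for a CHP unit and an electric heat pump.
        cost = np.array([28.0, 22.0])

        heat_demand = 120.0                      # MWh heat that must be delivered
        A_eq = np.array([[1.0, 1.0]])            # q_chp + q_hp == heat_demand
        b_eq = np.array([heat_demand])

        bounds = [(0.0, 100.0),                  # CHP heat capacity
                  (0.0, 60.0)]                   # heat-pump heat capacity

        res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        print("optimal dispatch [CHP, HP] =", res.x, "cost =", res.fun)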

  16. Experimental methods for the analysis of optimization algorithms

    CERN Document Server

    Bartz-Beielstein, Thomas; Paquete, Luis; Preuss, Mike

    2010-01-01

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on diffe

  17. Methods of Choosing an Optimal Portfolio of Projects

    OpenAIRE

    Yakovlev, A.; Chernenko, M.

    2016-01-01

    This paper presents an analysis of existing methods for project portfolio optimization. The necessity for their improvement is shown. It is suggested to assess the portfolio of projects on the basis of the difference between the results and costs during development and implementation of selected projects and the losses caused by non-implementation or delayed implementation of projects that were not included in the portfolio. Consideration of capital and current costs compon...

  18. A Spectral Conjugate Gradient Method for Unconstrained Optimization

    International Nuclear Information System (INIS)

    Birgin, E. G.; Martinez, J. M.

    2001-01-01

    A family of scaled conjugate gradient algorithms for large-scale unconstrained minimization is defined. The Perry, the Polak-Ribiere and the Fletcher-Reeves formulae are compared using a spectral scaling derived from Raydan's spectral gradient optimization method. The best combination of formula, scaling and initial choice of step-length is compared against well known algorithms using a classical set of problems. An additional comparison involving an ill-conditioned estimation problem in Optics is presented
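
    A rough idea of how a spectral scaling can be combined with a conjugate gradient formula is sketched below: a Barzilai-Borwein (spectral) step length paired with a Polak-Ribiere-type direction and a simple backtracking line search, tested on the standard Rosenbrock function. This is a generic illustration under those assumptions, not the algorithm of the paper.

        import numpy as np

        def rosenbrock(x):
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def rosenbrock_grad(x):
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        def spectral_cg(f, grad, x0, max_iter=2000, tol=1e-8):
            x = np.asarray(x0, float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                # Backtracking (Armijo) line search along the current direction.
                t, f0 = 1.0, f(x)
                while f(x + t * d) > f0 + 1e-4 * t * g.dot(d) and t > 1e-12:
                    t *= 0.5
                x_new = x + t * d
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sigma = max(s.dot(s) / max(s.dot(y), 1e-12), 1e-10)          # spectral step
                beta = max(g_new.dot(g_new - g) / max(g.dot(g), 1e-12), 0.0)  # PR+ coefficient
                d = -sigma * g_new + beta * d
                if g_new.dot(d) > 0:           # safeguard: fall back to scaled steepest descent
                    d = -sigma * g_new
                x, g = x_new, g_new
            return x

        print(spectral_cg(rosenbrock, rosenbrock_grad, [-1.2, 1.0]))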

  19. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory.Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter. A

  20. An Invariant-Preserving ALE Method for Solids under Extreme Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Sambasivan, Shiv Kumar [Los Alamos National Laboratory; Christon, Mark A [Los Alamos National Laboratory

    2012-07-17

    We are proposing a fundamentally new approach to ALE methods for solids undergoing large deformation due to extreme loading conditions. Our approach is based on a physically-motivated and mathematically rigorous construction of the underlying Lagrangian method, vector/tensor reconstruction, remapping, and interface reconstruction. It is transformational because it deviates dramatically from traditionally accepted ALE methods and provides the following set of unique attributes: (1) a three-dimensional, finite volume, cell-centered ALE framework with advanced hypo-/hyper-elasto-plastic constitutive theories for solids; (2) a new physically and mathematically consistent reconstruction method for vector/tensor fields; (3) advanced invariant-preserving remapping algorithm for vector/tensor quantities; (4) moment-of-fluid (MoF) interface reconstruction technique for multi-material problems with solids undergoing large deformations. This work brings together many new concepts, that in combination with emergent cell-centered Lagrangian hydrodynamics methods will produce a cutting-edge ALE capability and define a new state-of-the-art. Many ideas in this work are new, completely unexplored, and hence high risk. The proposed research and the resulting algorithms will be of immediate use in Eulerian, Lagrangian and ALE codes under the ASC program at the lab. In addition, the research on invariant preserving reconstruction/remap of tensor quantities is of direct interest to ongoing CASL and climate modeling efforts at LANL. The application space impacted by this work includes Inertial Confinement Fusion (ICF), Z-pinch, munition-target interactions, geological impact dynamics, shock processing of powders and shaped charges. The ALE framework will also provide a suitable test-bed for rapid development and assessment of hypo-/hyper-elasto-plastic constitutive theories. Today, there are no invariant-preserving ALE algorithms for treating solids with large deformations. Therefore

  1. An Optimized Method for Terrain Reconstruction Based on Descent Images

    Directory of Open Access Journals (Sweden)

    Xu Xinchao

    2016-02-01

    Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps a set of new vectors is obtained. By combining these vectors with the direction of light and camera, the functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using the laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.

  2. A Global Network Alignment Method Using Discrete Particle Swarm Optimization.

    Science.gov (United States)

    Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia

    2016-10-19

    Molecular interactions data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse the different data at a network level. Global network alignment is an important network comparison approach to identify conserved subnetworks and get insight into evolutionary relationship across species. Network alignment which is analogous to subgraph isomorphism is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservations. The particle statuses and status updating rules are redefined in a discrete form by using permutation. A seed-and-extend strategy is employed to guide the searching for the superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapping nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservations. PSONA is compared with several state-of-art methods on ten network pairs combined by five species. The experimental results demonstrate that the proposed aligner can map the proteins with high functional coherence and can be used as a booster to effectively refine the well-studied aligners.

  3. A seismic fault recognition method based on ant colony optimization

    Science.gov (United States)

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important part of seismic interpretation and there are many methods for this task, but none can recognize faults with sufficient accuracy. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are determined according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are taken as the nest and the food, respectively, for the ant colony optimization algorithm. Besides that, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as a height. After that, the optimal route from nest to food calculated by the ant colony in each block is judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and advancement of the proposed method were validated by the experimental results.
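
    The nest-to-food routing step relies on standard ant colony mechanics. The sketch below runs a generic ant colony shortest-path search on a tiny weighted graph, with pheromone evaporation and deposition; the graph, weights and parameter values are assumptions for illustration and are unrelated to seismic data.

        import random

        # Tiny weighted graph: nodes 0 (nest) .. 5 (food); edge weights mimic a cost surface.
        edges = {
            0: {1: 2.0, 2: 4.0},
            1: {3: 2.5, 2: 1.0},
            2: {4: 3.0, 3: 2.0},
            3: {5: 2.0, 4: 1.5},
            4: {5: 1.0},
            5: {},
        }

        def ant_colony_shortest_path(nest=0, food=5, n_ants=20, n_iter=50,
                                     alpha=1.0, beta=2.0, rho=0.3, q=1.0, seed=1):
            rng = random.Random(seed)
            tau = {(u, v): 1.0 for u in edges for v in edges[u]}        # pheromone levels
            best_path, best_len = None, float("inf")
            for _ in range(n_iter):
                paths = []
                for _ in range(n_ants):
                    node, path, length = nest, [nest], 0.0
                    while node != food:
                        nbrs = [v for v in edges[node] if v not in path]
                        if not nbrs:
                            path = None
                            break
                        weights = [tau[(node, v)] ** alpha * (1.0 / edges[node][v]) ** beta
                                   for v in nbrs]
                        nxt = rng.choices(nbrs, weights=weights)[0]
                        length += edges[node][nxt]
                        path.append(nxt)
                        node = nxt
                    if path is not None:
                        paths.append((path, length))
                        if length < best_len:
                            best_path, best_len = path, length
                # Evaporation, then deposition proportional to path quality.
                for key in tau:
                    tau[key] *= (1.0 - rho)
                for path, length in paths:
                    for u, v in zip(path, path[1:]):
                        tau[(u, v)] += q / length
            return best_path, best_len

        print(ant_colony_shortest_path())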

  4. One directional polarized neutron reflectometry with optimized reference layer method

    International Nuclear Information System (INIS)

    Masoudi, S. Farhad; Jahromi, Saeed S.

    2012-01-01

    In the past decade, several neutron reflectometry methods for determining the modulus and phase of the complex reflection coefficient of an unknown multilayer thin film have been worked out among which the method of variation of surroundings and reference layers are of highest interest. These methods were later modified for measurement of the polarization of the reflected beam instead of the measurement of the intensities. In their new architecture, these methods not only suffered from the necessity of change of experimental setup but also another difficulty was added to their experimental implementations. This deficiency was related to the limitations of the technology of the neutron reflectometers that could only measure the polarization of the reflected neutrons in the same direction as the polarization of the incident beam. As the instruments are limited, the theory has to be optimized so that the experiment could be performed. In a recent work, we developed the method of variation of surroundings for one directional polarization analysis. In this new work, the method of reference layer with polarization analysis has been optimized to determine the phase and modulus of the unknown film with measurement of the polarization of the reflected neutrons in the same direction as the polarization of the incident beam.

  5. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    Science.gov (United States)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  6. Comparison of optimization methods for electronic-structure calculations

    International Nuclear Information System (INIS)

    Garner, J.; Das, S.G.; Min, B.I.; Woodward, C.; Benedek, R.

    1989-01-01

    The performance of several local-optimization methods for calculating electronic structure is compared. The fictitious first-order equation of motion proposed by Williams and Soler is integrated numerically by three procedures: simple finite-difference integration, approximate analytical integration (the Williams-Soler algorithm), and the Born perturbation series. These techniques are applied to a model problem for which exact solutions are known, the Mathieu equation. The Williams-Soler algorithm and the second Born approximation converge equally rapidly, but the former involves considerably less computational effort and gives a more accurate converged solution. Application of the method of conjugate gradients to the Mathieu equation is discussed
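
    The model problem mentioned here, the Mathieu equation, can be written in its standard form (the exact parameterization used in the comparison may differ):

        \[
          \frac{d^{2}y}{dx^{2}} + \bigl(a - 2q\cos 2x\bigr)\,y = 0 .
        \]

    Its characteristic values a(q) and periodic solutions are tabulated to high accuracy, which is what makes it a convenient benchmark with known exact solutions for comparing the integration schemes.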

  7. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-04-01

    Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size and with non-linear cost functions, for which searches are always numerically expensive. After a brief introduction to the main aspects of nuclear fuel management, this paper presents a new idea to treat the combinatorial problem by using information included in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most interesting changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve search processes

  8. Method for depleting BWRs using optimal control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations

  9. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

    We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole data set and not only on pairs of samples. Couplings are determined by exploiting the notion of an optimal target vector, introduced here as a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering

  10. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o

  11. Methods and Model Dependency of Extreme Event Attribution: The 2015 European Drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Vautard, Robert; van Oldenborgh, Geert J.; Wilcox, Laura; Seneviratne, Sonia I.

    2017-10-01

    Science on the role of anthropogenic influence on extreme weather events, such as heatwaves or droughts, has evolved rapidly in the past years. The approach of "event attribution" compares the occurrence-probability of an event in the present, factual climate with its probability in a hypothetical, counterfactual climate without human-induced climate change. Several methods can be used for event attribution, based on climate model simulations and observations, and usually researchers only assess a subset of methods and data sources. Here, we explore the role of methodological choices for the attribution of the 2015 meteorological summer drought in Europe. We present contradicting conclusions on the relevance of human influence as a function of the chosen data source and event attribution methodology. Assessments using the maximum number of models and counterfactual climates with pre-industrial greenhouse gas concentrations point to an enhanced drought risk in Europe. However, other evaluations show contradictory evidence. These results highlight the need for a multi-model and multi-method framework in event attribution research, especially for events with a low signal-to-noise ratio and high model dependency such as regional droughts.
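
    The factual/counterfactual comparison described here is usually summarized with the probability ratio and the fraction of attributable risk; the standard definitions are written below (the paper's notation may differ):

        \[
          \mathrm{PR} = \frac{p_{1}}{p_{0}}, \qquad
          \mathrm{FAR} = 1 - \frac{p_{0}}{p_{1}},
        \]

    where p1 is the occurrence probability of the event in the factual climate and p0 its probability in the counterfactual climate without human-induced climate change; PR > 1 indicates that anthropogenic forcing has increased the likelihood of the event.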

  12. Impact of Optimized Land Surface Parameters on the Land-Atmosphere Coupling in WRF Simulations of Dry and Wet Extremes

    Science.gov (United States)

    Kumar, S.; Santanello, J. A.; Peters-Lidard, C. D.; Harrison, K.

    2011-12-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty module in NASA's Land Information System (LIS-OPT), whereby parameter sets are calibrated in the Noah land surface model and classified according to the land cover and soil type mapping of the observations and the full domain. The impact of the calibrated parameters on the a) spinup of land surface states used as initial conditions, and b) heat and moisture fluxes of the coupled (LIS-WRF) simulations are then assessed in terms of ambient weather, PBL budgets, and precipitation along with L-A coupling diagnostics. In addition, the sensitivity of this approach to the period of calibration (dry, wet, normal) is investigated. Finally, tradeoffs of computational tractability and scientific validity (e.g.,. relating to the representation of the spatial dependence of parameters) and the feasibility of calibrating to multiple observational datasets are also discussed.

  13. Optimization and modification of the method for detection of rhamnolipids

    Directory of Open Access Journals (Sweden)

    Takeshi Tabuchi

    2015-10-01

    Full Text Available Use of biosurfactants in bioremediation facilitates and accelerates microbial degradation of hydrocarbons. The CTAB/MB agar method created by Siegmund & Wagner for screening of rhamnolipid (RL) producing strains has been widely used but has not improved significantly for more than 20 years. To optimize the technique as a quantitative method, CTAB/MB agar plates were made and different variables were tested, such as incubation time, cooling, CTAB concentration, methylene blue presence, well diameter and inoculum volume. Furthermore, a new method for RL detection within halos was developed: precipitation of RL with HCl allows the formation of a new halo pattern that is easier to observe and to measure. This research reaffirms that the method is not totally suitable for a fine quantitative analysis, because of the difficulty of accurately correlating RL concentration with the area of the halos. RL diffusion does not seem to have a simple behavior, and there are many factors that affect the RL migration rate.

  14. Optimized optical clearing method for imaging central nervous system

    Science.gov (United States)

    Yu, Tingting; Qi, Yisong; Gong, Hui; Luo, Qingming; Zhu, Dan

    2015-03-01

    The development of various optical clearing methods provides great potential for imaging the entire central nervous system when combined with multiple-labelling and microscopic imaging techniques. These methods have made clear contributions but have respective weaknesses, including tissue deformation, fluorescence quenching, execution complexity and antibody penetration limitations that make immunostaining of tissue blocks difficult. The passive clarity technique (PACT) bypasses those problems and clears the samples with simple implementation and excellent transparency with fine fluorescence retention, but the passive tissue clearing method needs too much time. In this study, we not only accelerate the clearing speed of brain blocks but also preserve GFP fluorescence well by screening for an optimal clearing temperature. The selection of a proper temperature makes PACT more applicable, which evidently broadens the application range of this method.

  15. THE SYNTHETIC-OVERSAMPLING METHOD: USING PHOTOMETRIC COLORS TO DISCOVER EXTREMELY METAL-POOR STARS

    Energy Technology Data Exchange (ETDEWEB)

    Miller, A. A., E-mail: amiller@astro.caltech.edu [Jet Propulsion Laboratory, 4800 Oak Grove Drive, MS 169-506, Pasadena, CA 91109 (United States)

    2015-09-20

    Extremely metal-poor (EMP) stars ([Fe/H] ≤ −3.0 dex) provide a unique window into understanding the first generation of stars and early chemical enrichment of the universe. EMP stars are exceptionally rare, however, and the relatively small number of confirmed discoveries limits our ability to exploit these near-field probes of the first ∼500 Myr after the Big Bang. Here, a new method to photometrically estimate [Fe/H] from only broadband photometric colors is presented. I show that the method, which utilizes machine-learning algorithms and a training set of ∼170,000 stars with spectroscopically measured [Fe/H], produces a typical scatter of ∼0.29 dex. This performance is similar to what is achievable via low-resolution spectroscopy, and outperforms other photometric techniques, while also being more general. I further show that a slight alteration to the model, wherein synthetic EMP stars are added to the training set, yields the robust identification of EMP candidates. In particular, this synthetic-oversampling method recovers ∼20% of the EMP stars in the training set, at a precision of ∼0.05. Furthermore, ∼65% of the false positives from the model are very metal-poor stars ([Fe/H] ≤ −2.0 dex). The synthetic-oversampling method is biased toward the discovery of warm (∼F-type) stars, a consequence of the targeting bias from the Sloan Digital Sky Survey/Sloan Extension for Galactic Understanding survey. This EMP selection method represents a significant improvement over alternative broadband optical selection techniques. The models are applied to >12 million stars, with an expected yield of ∼600 new EMP stars, which promises to open new avenues for exploring the early universe.
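
    The core trick, augmenting a tiny minority class with synthetic examples before training a classifier on photometric colors, can be sketched in a few lines. The following is a rough illustration using SMOTE-style interpolation and a random forest on made-up color features; the feature values, class balance and classifier choice are assumptions and do not reproduce the paper's trained models.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(42)

        def smote_like_oversample(X_min, n_new, k=5):
            """Create synthetic minority samples by interpolating between nearest neighbours."""
            X_new = np.empty((n_new, X_min.shape[1]))
            for i in range(n_new):
                j = rng.integers(len(X_min))
                d = np.linalg.norm(X_min - X_min[j], axis=1)       # distances from sample j
                nbr = rng.choice(np.argsort(d)[1:k + 1])           # one of the k nearest neighbours
                X_new[i] = X_min[j] + rng.random() * (X_min[nbr] - X_min[j])
            return X_new

        # Hypothetical broadband colors (e.g. g-r, r-i, i-z) for normal stars and rare EMP stars.
        X_majority = rng.normal([0.5, 0.2, 0.1], 0.08, size=(5000, 3))
        X_minority = rng.normal([0.3, 0.1, 0.0], 0.05, size=(40, 3))

        X_synth = smote_like_oversample(X_minority, n_new=2000)
        X_train = np.vstack([X_majority, X_minority, X_synth])
        y_train = np.concatenate([np.zeros(len(X_majority)),
                                  np.ones(len(X_minority) + len(X_synth))])

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("predicted EMP probability for a test color vector:",
              clf.predict_proba([[0.31, 0.11, 0.01]])[0, 1])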

  16. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables

  17. Newton-type methods for optimization and variational problems

    CERN Document Server

    Izmailov, Alexey F

    2014-01-01

    This book presents comprehensive state-of-the-art theoretical analysis of the fundamental Newtonian and Newtonian-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems. In particular, the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...

  18. Stochastic Recursive Algorithms for Optimization Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
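
    Since simultaneous perturbation methods form the unifying thread of the book, a minimal SPSA iteration is sketched below: two noisy function evaluations per step yield an estimate of the whole gradient. The gain sequences, the noisy quadratic objective and all parameter values are illustrative assumptions, not taken from the text.

        import numpy as np

        def spsa_minimize(f, x0, n_iter=1000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
            """Simultaneous perturbation stochastic approximation (minimal sketch)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            for k in range(1, n_iter + 1):
                ak = a / k ** alpha                              # step-size gain sequence
                ck = c / k ** gamma                              # perturbation-size sequence
                delta = rng.choice([-1.0, 1.0], size=x.shape)    # Rademacher perturbation
                # Two (possibly noisy) evaluations give the whole gradient estimate at once.
                g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck) * (1.0 / delta)
                x = x - ak * g_hat
            return x

        # Noisy quadratic used only for illustration; minimum at (3, 3, 3, 3).
        rng = np.random.default_rng(1)
        noisy_quadratic = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * rng.normal()
        print(spsa_minimize(noisy_quadratic, x0=np.zeros(4)))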

  19. Experimental Methods for the Analysis of Optimization Algorithms

    DEFF Research Database (Denmark)

    In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different ... in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment...

  20. Buried Object Detection Method Using Optimum Frequency Range in Extremely Shallow Underground

    Science.gov (United States)

    Sugimoto, Tsuneyoshi; Abe, Touma

    2011-07-01

    We propose a new detection method for buried objects using the optimum frequency response range of the corresponding vibration velocity. Flat speakers and a scanning laser Doppler vibrometer (SLDV) are used for noncontact acoustic imaging in the extremely shallow underground. The exploration depth depends on the sound pressure, but it is usually less than 10 cm. Styrofoam, wood (silver fir), and acrylic boards of the same size, different size styrofoam boards, a hollow toy duck, a hollow plastic container, a plastic container filled with sand, a hollow steel can and an unglazed pot are used as buried objects which are buried in sand to about 2 cm depth. The imaging procedure of buried objects using the optimum frequency range is given below. First, the standardized difference from the average vibration velocity is calculated for all scan points. Next, using this result, underground images are made using a constant frequency width to search for the frequency response range of the buried object. After choosing an approximate frequency response range, the difference between the average vibration velocity for all points and that for several points that showed a clear response is calculated for the final confirmation of the optimum frequency range. Using this optimum frequency range, we can obtain the clearest image of the buried object. From the experimental results, we confirmed the effectiveness of our proposed method. In particular, a clear image of the buried object was obtained when the SLDV image was unclear.

  1. Structural Damage Detection using Frequency Response Function Index and Surrogate Model Based on Optimized Extreme Learning Machine Algorithm

    Directory of Open Access Journals (Sweden)

    R. Ghiasi

    2017-09-01

    Full Text Available Utilizing surrogate models based on artificial intelligence methods for detecting structural damage has attracted the attention of many researchers in recent decades. In this study, a new kernel based on the Littlewood-Paley Wavelet (LPW) is proposed for the Extreme Learning Machine (ELM) algorithm to improve the accuracy of detecting multiple damages in structural systems. ELM is used as a metamodel (surrogate model) of exact finite element analysis of structures in order to efficiently reduce the computational cost during the updating process. In the proposed two-step method, first a damage index based on the Frequency Response Function (FRF) of the structure is used to identify the location of damages. In the second step, the severity of damage in the identified elements is detected using ELM. In order to evaluate the efficacy of ELM, the results obtained from the proposed kernel were compared with other kernels proposed for ELM as well as the Least Squares Support Vector Machine algorithm. The solved numerical problems indicated that the accuracy of the ELM algorithm in detecting structural damage increases drastically when the LPW kernel is used.
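
    For readers unfamiliar with ELM, the surrogate idea reduces to a random hidden layer whose output weights are fitted by least squares. The sketch below uses a plain sigmoid activation on synthetic data; the wavelet kernel of the study is not reproduced, and the class name, data and dimensions are illustrative assumptions.

        import numpy as np

        class ELMRegressor:
            """Minimal extreme learning machine: random hidden layer + least-squares output weights."""
            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                # Output weights from the Moore-Penrose pseudoinverse (plain least squares).
                self.beta = np.linalg.pinv(H) @ y
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta

        # Synthetic surrogate-model task: map a 5-D "damage vector" to a scalar response.
        rng = np.random.default_rng(1)
        X = rng.uniform(0.0, 1.0, size=(400, 5))
        y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]
        model = ELMRegressor(n_hidden=200).fit(X[:300], y[:300])
        print("test RMSE:", np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2)))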

  2. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters of which the values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.

  3. Spectral Analysis of Large Finite Element Problems by Optimization Methods

    Directory of Open Access Journals (Sweden)

    Luca Bergamaschi

    1994-01-01

    Full Text Available Recently an efficient method for the solution of the partial symmetric eigenproblem (DACG, deflated-accelerated conjugate gradient) was developed, based on the conjugate gradient (CG) minimization of successive Rayleigh quotients over deflated subspaces of decreasing size. In this article four different choices of the coefficient βk required at each DACG iteration for the computation of the new search direction Pk are discussed. The “optimal” choice is the one that yields the same asymptotic convergence rate as the CG scheme applied to the solution of linear systems. Numerical results point out that the optimal βk leads to a very cost-effective algorithm in terms of CPU time in all the sample problems presented. Various preconditioners are also analyzed. It is found that DACG using the optimal βk and (LLᵀ)⁻¹ as a preconditioner, L being the incomplete Cholesky factor of A, proves to be a very promising method for the partial eigensolution. It appears to be superior to the Lanczos method in the evaluation of the 40 leftmost eigenpairs of five finite element problems, and particularly for the largest problem, with size equal to 4560, for which the speed gain turns out to fall between 2.5 and 6.0, depending on the eigenpair level.
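
    The quantity driving the DACG iteration is the (generalized) Rayleigh quotient. Written for the symmetric eigenproblem Ax = λBx (B = I in the standard case), the textbook expressions for the quotient and its gradient are as follows; the article's specific βk recipes are not reproduced here:

        \[
          q(\mathbf{x}) = \frac{\mathbf{x}^{T} A \mathbf{x}}{\mathbf{x}^{T} B \mathbf{x}},
          \qquad
          \nabla q(\mathbf{x}) = \frac{2}{\mathbf{x}^{T} B \mathbf{x}}
          \bigl( A\mathbf{x} - q(\mathbf{x})\, B\mathbf{x} \bigr).
        \]

    Minimizing q by conjugate gradients over the subspace B-orthogonal to the eigenvectors already computed (the deflation step) yields the next leftmost eigenpair.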

  4. An Optimal Method for Developing Global Supply Chain Management System

    Directory of Open Access Journals (Sweden)

    Hao-Chun Lu

    2013-01-01

    Full Text Available Owing to the transparency in supply chains, enhancing competitiveness of industries becomes a vital factor. Therefore, many developing countries look for a possible method to save costs. In this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model via the flow-control constraints, which are utilized to cope with the bonded warehouses for obtaining maximal profits. Numerical experiments illustrate that the proposed model can be effectively solved to obtain the optimal profits in the global supply chain environment.

  5. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem
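
    To give a feel for the regression step that the scheme builds on, the sketch below prices an American put with the classical Longstaff-Schwartz least squares Monte Carlo recursion (the canonical application of the method). The market parameters, polynomial basis and path counts are illustrative assumptions and are unrelated to the hazard-evacuation problem of the paper.

        import numpy as np

        def american_put_lsm(S0=100.0, K=100.0, r=0.06, sigma=0.2, T=1.0,
                             n_steps=50, n_paths=20000, seed=0):
            """Least squares Monte Carlo (Longstaff-Schwartz) for an American put (sketch)."""
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            # Simulate geometric Brownian motion paths.
            z = rng.standard_normal((n_paths, n_steps))
            S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
            S = np.hstack([np.full((n_paths, 1), S0), S])

            cashflow = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
            for t in range(n_steps - 1, 0, -1):
                cashflow *= np.exp(-r * dt)                   # discount one step back
                itm = K - S[:, t] > 0.0                       # regress only in-the-money paths
                if itm.sum() > 0:
                    x = S[itm, t]
                    # Regress discounted continuation values on a simple polynomial basis.
                    coeffs = np.polyfit(x, cashflow[itm], deg=2)
                    continuation = np.polyval(coeffs, x)
                    exercise = K - x
                    do_exercise = exercise > continuation
                    idx = np.where(itm)[0][do_exercise]
                    cashflow[idx] = exercise[do_exercise]     # stop: take exercise value now
            return np.exp(-r * dt) * cashflow.mean()

        print("LSM American put value:", round(american_put_lsm(), 3))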

  6. Comparison between statistical and optimization methods in accessing unmixing of spectrally similar materials

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-11-01

    Full Text Available This paper reports on the results of ordinary least squares and ridge regression as statistical methods, and compares them to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm...

  7. Optimal Control for Bufferbloat Queue Management Using Indirect Method with Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Amr Radwan

    2016-01-01

    Full Text Available Because memory buffers have become larger and cheaper, they have been put into network devices to reduce the number of lost packets and improve network performance. However, the consequences of large buffers are long queues at network bottlenecks and throughput saturation, which has recently been noticed in the research community as the bufferbloat phenomenon. To address such issues, in this article we design a forward-backward optimal control queue algorithm based on an indirect approach with parametric optimization. The cost function that we want to minimize represents a trade-off between queue length and packet loss rate performance. Through the integration of an indirect approach with parametric optimization, our proposal has advantages of scalability and accuracy compared to direct approaches, while still maintaining good throughput and a shorter queue length than several existing queue management algorithms. Numerical analysis, simulations in ns-2, and experimental results are provided to substantiate the efficiency of our proposal. In detailed comparisons with other conventional algorithms, the proposed procedure runs much faster than direct collocation methods while maintaining the desired short queue (≈40 packets in simulation and ≈80 ms in the experimental test).

  8. Manual muscle testing: a method of measuring extremity muscle strength applied to critically ill patients.

    Science.gov (United States)

    Ciesla, Nancy; Dinglas, Victor; Fan, Eddy; Kho, Michelle; Kuramoto, Jill; Needham, Dale

    2011-04-12

    Survivors of acute respiratory distress syndrome (ARDS) and other causes of critical illness often have generalized weakness, reduced exercise tolerance, and persistent nerve and muscle impairments after hospital discharge. Using an explicit protocol with a structured approach to training and quality assurance of research staff, manual muscle testing (MMT) is a highly reliable method for assessing strength, using a standardized clinical examination, for patients following ARDS, and can be completed with mechanically ventilated patients who can tolerate sitting upright in bed and are able to follow two-step commands. (7, 8) This video demonstrates a protocol for MMT, which has been taught to ≥ 43 research staff who have performed >800 assessments on >280 ARDS survivors. Modifications for the bedridden patient are included. Each muscle is tested with specific techniques for positioning, stabilization, resistance, and palpation for each score of the 6-point ordinal Medical Research Council scale. Three upper and three lower extremity muscles are graded in this protocol: shoulder abduction, elbow flexion, wrist extension, hip flexion, knee extension, and ankle dorsiflexion. These muscles were chosen based on the standard approach for evaluating patients for ICU-acquired weakness used in prior publications. (1,2).

  9. Novel method of finding extreme edges in a convex set of N-dimension vectors

    Science.gov (United States)

    Hu, Chia-Lun J.

    2001-11-01

    As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {U_m → V_m, m = 1 to M}, where U_m is an N-dimensional analog (pattern) vector and V_m is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Y_mi, m = 1 to M} (where Y_mi ≡ V_mi U_m and V_mi = +1 or -1 is the i-th bit of V_m; i = 1 to P, so there are P such sets) is POSITIVELY, LINEARLY, INDEPENDENT or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimensional analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-dimensional analog vectors.

  10. An Optimization Method for Virtual Globe Ocean Surface Dynamic Visualization

    Directory of Open Access Journals (Sweden)

    HUANG Wumeng

    2016-12-01

    Full Text Available The existing visualization methods in the virtual globe mainly use a projection grid to organize the ocean grid. This particular grid organization has defects in reflecting the differing characteristics of different ocean areas. A global ocean visualization method based on global discrete grids can make up for the defects of the projection grid method by matching the discrete space of the virtual globe, so it is more suitable for virtual ocean surface simulation applications. However, the available global discrete grid method has many problems that limit its application, such as low rendering and loading efficiency and the need to repair grid crevices. To address this, we propose an optimization of the global discrete grid method. First, a GPU-oriented multi-scale grid model of the ocean surface, built on the foundation of global discrete grids, was designed to organize and manage the ocean surface grids. Then, in order to achieve wind-driven wave dynamic rendering, this paper proposes a dynamic wave rendering method based on the multi-scale ocean surface grid model that supports real-time wind field updating. At the same time, considering the effect of repairing grid crevices on system efficiency, this paper presents an efficient method for repairing ocean surface grid crevices based on the characteristics of the ocean grid and GPU techniques. Finally, the feasibility and validity of the method are verified by comparison experiments. The experimental results show that the proposed method is efficient, stable and fast, and compensates for functional shortcomings of the existing methods, so its application range is more extensive.

  11. Practical optimization of Steiner trees via the cavity method

    Science.gov (United States)

    Braunstein, Alfredo; Muntoni, Anna

    2016-07-01

    The optimization version of the cavity method for single instances, called Max-Sum, has been applied in the past to the minimum Steiner tree problem on graphs and variants. Max-Sum has been shown experimentally to give asymptotically optimal results on certain types of weighted random graphs, and to give good solutions in short computation times for some types of real networks. However, the hypotheses behind the formulation and the cavity method itself limit substantially the class of instances on which the approach gives good results (or even converges). Moreover, in the standard model formulation, the diameter of the tree solution is limited by a predefined bound, that affects both computation time and convergence properties. In this work we describe two main enhancements to the Max-Sum equations to be able to cope with optimization of real-world instances. First, we develop an alternative ‘flat’ model formulation that allows the relevant configuration space to be reduced substantially, making the approach feasible on instances with large solution diameter, in particular when the number of terminal nodes is small. Second, we propose an integration between Max-Sum and three greedy heuristics. This integration allows Max-Sum to be transformed into a highly competitive self-contained algorithm, in which a feasible solution is given at each step of the iterative procedure. Part of this development participated in the 2014 DIMACS Challenge on Steiner problems, and we report the results here. The performance on the challenge of the proposed approach was highly satisfactory: it maintained a small gap to the best bound in most cases, and obtained the best results on several instances in two different categories. We also present several improvements with respect to the version of the algorithm that participated in the competition, including new best solutions for some of the instances of the challenge.

  12. Optimization method for dimensioning a geological HLW waste repository

    International Nuclear Information System (INIS)

    Ouvrier, N.; Chaudon, L.; Malherbe, L.

    1990-01-01

    This method was developed by the CEA to optimize the dimensions of a geological repository by taking account of technical and economic parameters. It involves optimizing radioactive waste storage conditions on the basis of economic criteria with allowance for specified thermal constraints. The results are intended to identify trends and guide the choice from among available options: simple and highly flexible models were therefore used in this study, and only nearfield thermal constraints were taken into consideration. Because of the present uncertainty on the physicochemical properties of the repository environment and on the unit cost figures, this study focused on developing a suitable method rather than on obtaining definitive results. The optimum values found for the two media investigated (granite and salt) show that it is advisable to minimize the interim storage time, implying the containers must be separated by buffer material, whereas vertical spacing may not be required after a 30-year interim storage period. Moreover, the boreholes should be as deep as possible, on a close pitch in widely spaced handling drifts. These results depend to a considerable extent on the assumption of high interim storage costs

  13. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized as a way to accomplish efficient near-neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, and is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing loss, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
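
    The key ingredient, a discrete subproblem with an analytical sign solution, can be illustrated on a toy objective. The sketch below performs a proximal-linearized update for binary codes fitted to real-valued targets in the least-squares sense; the loss, the parameter rho and all names are assumptions for illustration and do not reproduce the DPLM hashing losses or constraints.

        import numpy as np

        def discrete_proximal_binary_codes(V, n_iter=50, rho=2.0, seed=0):
            """Toy discrete-proximal iteration: find B in {-1,+1} minimizing ||B - V||_F^2."""
            rng = np.random.default_rng(seed)
            B = np.sign(rng.standard_normal(V.shape))          # random binary initialization
            B[B == 0] = 1.0
            for _ in range(n_iter):
                grad = 2.0 * (B - V)                           # gradient of the smooth loss at B
                # Proximal-linearized step: argmin over B' in {-1,+1} of
                # <grad, B'> + (rho/2)*||B' - B||^2 has the closed form B' = sign(B - grad/rho).
                B_new = np.sign(B - grad / rho)
                B_new[B_new == 0] = 1.0
                if np.array_equal(B_new, B):                   # fixed point reached
                    break
                B = B_new
            return B

        V = np.random.default_rng(1).standard_normal((5, 8))   # stand-in for learned real-valued codes
        print(discrete_proximal_binary_codes(V))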

  14. Advanced quantitative methods in correlating sarcopenic muscle degeneration with lower extremity function biometrics and comorbidities.

    Science.gov (United States)

    Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo

    2018-01-01

    Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (Pbiometrics, SCHOL, and BMI, and particularly highlight the value of the

  15. ARSTEC, Nonlinear Optimization Program Using Random Search Method

    International Nuclear Information System (INIS)

    Rasmuson, D. M.; Marshall, N. H.

    1979-01-01

    1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
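
    The adaptive random search idea described above, recentering the search region at every improving point, is easy to sketch. The following toy version handles only continuous variables and uses an invented quadratic objective; the shrink factor and bounds are illustrative assumptions, and the mixed-integer handling of ARSTEC is not reproduced.

        import numpy as np

        def adaptive_random_search(f, lower, upper, n_iter=2000, shrink=0.999, seed=0):
            """Adaptive random search sketch: recenter the search region at every improving point."""
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            center = rng.uniform(lower, upper)                 # arbitrary starting point
            radius = (upper - lower) / 2.0
            best_x, best_f = center, f(center)
            for _ in range(n_iter):
                x = np.clip(center + radius * rng.uniform(-1.0, 1.0, size=center.shape),
                            lower, upper)
                fx = f(x)
                if fx < best_f:
                    best_x, best_f = x, fx
                    center = x                 # recenter the region at the improving point
                else:
                    radius *= shrink           # slowly focus the search otherwise
            return best_x, best_f

        # Toy objective with a known minimum at (1, 2, 3), used only for illustration.
        objective = lambda x: np.sum((x - np.array([1.0, 2.0, 3.0])) ** 2)
        print(adaptive_random_search(objective, lower=[-5, -5, -5], upper=[5, 5, 5]))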

  16. Nuclear fuel cycle optimization - methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising also other reactor types than the light water reactor, are confined to cost minimization. In the final Chapter, the integration of nuclear capacity within a generating system is examined. (author)

  17. A discrete optimization method for nuclear fuel management

    International Nuclear Information System (INIS)

    Argaud, J.P.

    1993-04-01

    Nuclear loading pattern elaboration can be seen as a combinatorial optimization problem of tremendous size with non-linear cost functions, for which searches are always numerically expensive. After a brief introduction to the main aspects of nuclear fuel management, this note presents a new idea for treating the combinatorial problem by using information included in the gradient of a cost function. The method is to choose, by direct observation of the gradient, the most promising changes in fuel loading patterns. An example is then developed to illustrate an operating mode of the method, and finally, connections with simulated annealing and genetic algorithms are described as an attempt to improve search processes. (author). 1 fig., 16 refs

  18. Optimal PMU placement using topology transformation method in power systems

    Directory of Open Access Journals (Sweden)

    Nadia H.A. Rahman

    2016-09-01

    Full Text Available Optimal phasor measurement unit (PMU) placement involves the process of minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with that of other available techniques.
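
    The observability requirement behind this record is a standard set-cover-style ILP: minimize the number of PMUs subject to every bus being observed by a PMU at itself or a neighboring bus. The sketch below illustrates that bare formulation without the zero-injection merging rules; it assumes SciPy 1.9+ for scipy.optimize.milp and uses a small hypothetical adjacency matrix rather than the IEEE 14-bus data.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical 5-bus system: A[i, j] = 1 if a PMU at bus j observes bus i
# (a PMU observes its own bus and all directly connected buses).
A = np.array([[1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0],
              [0, 1, 1, 1, 1],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1]])
n = A.shape[1]

res = milp(
    c=np.ones(n),                                      # minimize the number of PMUs
    constraints=LinearConstraint(A, lb=1, ub=np.inf),  # every bus observed at least once
    integrality=np.ones(n),                            # binary placement variables
    bounds=Bounds(0, 1),
)
print("PMUs placed at buses:", np.flatnonzero(res.x > 0.5))
```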

  19. Optimal PMU placement using topology transformation method in power systems.

    Science.gov (United States)

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves the process of minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is identified as observable when the voltages of all buses in the power system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The result of the merging process is influenced by which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with that of other available techniques.

  20. A hydro-meteorological model chain to assess the influence of natural variability and impacts of climate change on extreme events and propose optimal water management

    Science.gov (United States)

    von Trentini, F.; Willkofer, F.; Wood, R. R.; Schmid, F. J.; Ludwig, R.

    2017-12-01

    The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. To this end, a hydro-meteorological model chain is applied. It employs the high-performance computing capacity of the Leibniz Supercomputing Centre facility SuperMUC to dynamically downscale 50 members of the Global Circulation Model CanESM2 over European and Eastern North American domains using the Canadian Regional Climate Model (RCM) CRCM5. Over Europe, this unique single-model ensemble is analyzed jointly with the latest information provided through the CORDEX initiative, to better assess the influence of natural climate variability and climatic change on the dynamics of extreme events. Furthermore, these 50 members of a single RCM will enhance extreme value statistics (extreme return periods) by exploiting the available 1,500 model years for the reference period from 1981 to 2010. The RCM output is then used to drive the process-based, fully distributed, deterministic hydrological model WaSiM at high temporal (3 h) and spatial (500 m) resolution. WaSiM and the large ensemble are further used to derive a variety of hydro-meteorological patterns leading to severe flood events. A tool for virtual perfect prediction shall provide a combination of optimal lead time and management strategy to mitigate certain flood events following these patterns.

  1. Estimating the impact of extreme events on crude oil price. An EMD-based event analysis method

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shouyang; Yu, Lean; Lai, Kin Keung

    2009-01-01

    The impact of extreme events on crude oil markets is of great importance in crude oil price analysis, because those events generally exert a strong impact on crude oil markets. For better estimation of the impact of events on crude oil price volatility, this study uses an EMD-based event analysis approach. In the proposed method, the time series to be analyzed is first decomposed into several intrinsic modes with different time scales, from fine to coarse, and an average trend. The decomposed modes respectively capture the fluctuations caused by the extreme event or by other factors during the analyzed period. It is found that the total impact of an extreme event is included in only one or a few dominant modes, while the secondary modes provide valuable information on subsequent factors. For overlapping events whose influences last for different periods, their impacts are separated and located in different modes. For illustration and verification purposes, two extreme events, the Persian Gulf War in 1991 and the Iraq War in 2003, are analyzed step by step. The empirical results reveal that the EMD-based event analysis method provides a feasible solution to estimating the impact of extreme events on crude oil price variation. (author)
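
    As a rough illustration of the decomposition step in such an EMD-based event analysis, the sketch below applies empirical mode decomposition to a synthetic price-like series and inspects the resulting intrinsic mode functions. It assumes the third-party PyEMD package (the EMD-signal distribution on PyPI); the synthetic series and the step change standing in for an extreme event are placeholders, not crude oil data.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal (assumed third-party package)

rng = np.random.default_rng(1)
t = np.arange(1000)

# Synthetic "price" series: slow trend + cycle + noise + a step standing in for an event.
price = 30 + 0.01 * t + 2 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.5, t.size)
price[600:] += 8.0   # hypothetical extreme-event impact

imfs = EMD().emd(price)   # intrinsic mode functions, fine-to-coarse, plus residual trend

# The coarse modes / residual trend carry most of the step-like event impact.
for k, imf in enumerate(imfs):
    print(f"mode {k}: std = {imf.std():.3f}")
```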

  2. Estimating the impact of extreme events on crude oil price. An EMD-based event analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xun; Wang, Shouyang [Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); School of Mathematical Sciences, Graduate University of Chinese Academy of Sciences, Beijing 100190 (China); Yu, Lean [Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung [Department of Management Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon (China)

    2009-09-15

    The impact of extreme events on crude oil markets is of great importance in crude oil price analysis, because those events generally exert a strong impact on crude oil markets. For better estimation of the impact of events on crude oil price volatility, this study uses an EMD-based event analysis approach. In the proposed method, the time series to be analyzed is first decomposed into several intrinsic modes with different time scales, from fine to coarse, and an average trend. The decomposed modes respectively capture the fluctuations caused by the extreme event or by other factors during the analyzed period. It is found that the total impact of an extreme event is included in only one or a few dominant modes, while the secondary modes provide valuable information on subsequent factors. For overlapping events whose influences last for different periods, their impacts are separated and located in different modes. For illustration and verification purposes, two extreme events, the Persian Gulf War in 1991 and the Iraq War in 2003, are analyzed step by step. The empirical results reveal that the EMD-based event analysis method provides a feasible solution to estimating the impact of extreme events on crude oil price variation. (author)

  3. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as a final velocity if the pass-time is sufficiently long.
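
    The selection step — scoring each candidate velocity on a lattice with a cost that rewards a long pass-time and good clearance — can be illustrated schematically. The sketch below is not the authors' implementation: the single circular obstacle, the clearance term, and the weights are hypothetical, and colliding velocities are simply skipped rather than handled with the full VO geometry described in the abstract.

```python
import numpy as np

def score_velocities(p_obs, v_obs, radius, speeds, headings, w_goal, v_goal):
    """Score a lattice of candidate velocities for a robot at the origin.

    The cost combines (i) deviation from the preferred velocity toward the goal,
    (ii) a penalty that grows when the pass-time (here approximated by the time of
    closest approach to the obstacle) is short, and (iii) a penalty for low clearance.
    """
    best = (np.inf, None)
    for s in speeds:
        for h in headings:
            v = s * np.array([np.cos(h), np.sin(h)])
            v_rel = v - v_obs                                  # velocity relative to the obstacle
            t_pass = max(-(p_obs @ v_rel) / (v_rel @ v_rel + 1e-9), 0.0)
            clearance = np.linalg.norm(p_obs + v_rel * t_pass) - radius
            if clearance <= 0.0:                               # inside the velocity obstacle: skip
                continue
            cost = (w_goal * np.linalg.norm(v - v_goal)
                    + 1.0 / (t_pass + 1.0)                     # short pass-times are penalized
                    + 1.0 / clearance)                         # small clearances are penalized
            if cost < best[0]:
                best = (cost, v)
    return best

# Obstacle 3 m ahead drifting sideways; the robot prefers 1 m/s toward +x.
print(score_velocities(p_obs=np.array([3.0, 0.0]), v_obs=np.array([0.0, 0.3]),
                       radius=0.8, speeds=np.linspace(0.2, 1.2, 6),
                       headings=np.linspace(-np.pi / 2, np.pi / 2, 19),
                       w_goal=2.0, v_goal=np.array([1.0, 0.0])))
```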

  4. Optimized t-expansion method for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Travenec, Igor; Samaj, Ladislav

    2011-01-01

    A polemic arose recently about the applicability of the t-expansion method to the calculation of the ground state energy E_0 of the Rabi model. For specific choices of the trial function and very large number of involved connected moments, the t-expansion results are rather poor and exhibit considerable oscillations. In this Letter, we formulate the t-expansion method for trial functions containing two free parameters which capture two exactly solvable limits of the Rabi Hamiltonian. At each order of the t-series, E_0 is assumed to be stationary with respect to the free parameters. A high accuracy of E_0 estimates is achieved for small numbers (5 or 6) of involved connected moments, the relative error being smaller than 10^-4 (0.01%) within the whole parameter space of the Rabi Hamiltonian. A special symmetrization of the trial function enables us to calculate also the first excited energy E_1, with the relative error smaller than 10^-2 (1%). -- Highlights: → We study the ground state energy of the Rabi Hamiltonian. → We use the t-expansion method with an optimized trial function. → High accuracy of estimates is achieved, the relative error being smaller than 0.01%. → The calculation of the first excited state energy is made. The method has a general applicability.

  5. Investigation on multi-objective performance optimization algorithm application of fan based on response surface method and entropy method

    Science.gov (United States)

    Zhang, Li; Wu, Kexin; Liu, Yang

    2017-12-01

    A multi-objective performance optimization method is proposed to resolve the problem that a single structural parameter of a small fan cannot balance the optimization of the static characteristics against the aerodynamic noise. In this method, three structural parameters are selected as the optimization variables, and the static pressure efficiency and the aerodynamic noise of the fan are regarded as the multi-objective performance. Furthermore, the response surface method and the entropy method are used to establish the optimization function between the optimization variables and the multi-objective performances. Finally, the optimized model is found when the optimization function reaches its maximum value. Experimental data show that the optimized model not only enhances the static characteristics of the fan but also obviously reduces the noise. The results of the study will provide some reference for the optimization of the multi-objective performance of other types of rotating machinery.
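
    The entropy-method weighting used here to combine static pressure efficiency and aerodynamic noise into a single objective can be sketched as follows. The candidate design table is made-up, and the noise column is converted to a benefit score by min-max inversion before the entropy weights are computed; this is an illustrative sketch of the entropy weight method, not the paper's response-surface model.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights for a decision matrix X (rows = designs, cols = criteria).

    Columns must already be oriented so that larger is better and scaled to [0, 1].
    """
    P = X / X.sum(axis=0, keepdims=True)                   # column-wise proportions
    P = np.clip(P, 1e-12, None)                            # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per criterion
    return (1.0 - e) / (1.0 - e).sum()                     # more dispersion -> larger weight

# Hypothetical response-surface evaluations for candidate fan designs:
# column 0 = static pressure efficiency (%), column 1 = aerodynamic noise (dB, lower is better).
designs = np.array([[62.0, 71.5],
                    [64.5, 73.0],
                    [61.0, 69.8],
                    [66.2, 74.1]])

eff = (designs[:, 0] - designs[:, 0].min()) / np.ptp(designs[:, 0])
quiet = (designs[:, 1].max() - designs[:, 1]) / np.ptp(designs[:, 1])  # invert the cost criterion
X = np.column_stack([eff, quiet])

w = entropy_weights(X)
scores = X @ w
print("entropy weights:", w, "best design index:", int(scores.argmax()))
```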

  6. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  7. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean engineering design contours of inverse-FORM type, in which the method is simpler and allows a more complex estimation of the wave set-up part (for example, wave propagation to the coast). We compare results from the two different approaches with the two different methods. To be able to use both the Monte Carlo simulation and the design contours methods, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach compared to the multivariate extreme value approach when extreme sea levels occur because either the surge or the wave height is large. We discuss the validity of the ocean engineering design contours method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.

  8. A Top Pilot Tunnel Preconditioning Method for the Prevention of Extremely Intense Rockbursts in Deep Tunnels Excavated by TBMs

    Science.gov (United States)

    Zhang, Chuanqing; Feng, Xiating; Zhou, Hui; Qiu, Shili; Wu, Wenping

    2012-05-01

    The headrace tunnels at the Jinping II Hydropower Station cross the Jinping Mountain with a maximum overburden depth of 2,525 m, where 80% of the strata along the tunnels consist of marble. A number of extremely intense rockbursts occurred during the excavation of the auxiliary tunnels and the drainage tunnel. In particular, a tunnel boring machine (TBM) was destroyed by an extremely intense rockburst in a 7.2-m-diameter drainage tunnel. Two of the four subsequent 12.4-m-diameter headrace tunnels will be excavated with larger size TBMs, where a high risk of extremely intense rockbursts exists. Herein, a top pilot tunnel preconditioning method is proposed to minimize this risk, in which a drilling and blasting method is first recommended for the top pilot tunnel excavation and support, and then the TBM excavation of the main tunnel is conducted. In order to evaluate the mechanical effectiveness of this method, numerical simulation analyses using the failure approaching index, energy release rate, and excess shear stress indices are carried out. Its construction feasibility is discussed as well. Moreover, a microseismic monitoring technique is used in the experimental tunnel section for the real-time monitoring of the microseismic activities of the rock mass in TBM excavation and for assessing the effect of the top pilot tunnel excavation in reducing the risk of rockbursts. This method is applied to two tunnel sections prone to extremely intense rockbursts and leads to a reduction in the risk of rockbursts in TBM excavation.

  9. 16 CFR 1500.45 - Method for determining extremely flammable and flammable contents of self-pressurized containers.

    Science.gov (United States)

    2010-01-01

    16 CFR 1500.45 (2010) - Method for determining extremely flammable and flammable contents of self-pressurized containers. Section 1500.45, Title 16, Commercial Practices; Consumer Product Safety Commission; Federal Hazardous Substances Act Regulations; Hazardous Substances and...

  10. Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method

    Science.gov (United States)

    Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong

    2018-06-01

    In this study, the effects of the length-width ratio and the diversion trench of the spacer on the fluid flow behavior in an electrodialyzer have been investigated through a CFD simulation method. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of an optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width can strengthen the fluid flow, but also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases with increasing length-width ratio of the spacer, but the pressure drop increases with increasing length-width ratio. So the length-width ratio of the spacer should be moderate.

  11. Convex functions and optimization methods on Riemannian manifolds

    CERN Document Server

    Udrişte, Constantin

    1994-01-01

    This unique monograph discusses the interaction between Riemannian geometry, convex programming, numerical analysis, dynamical systems and mathematical modelling. The book is the first account of the development of this subject as it emerged at the beginning of the 'seventies. A unified theory of convexity of functions, dynamical systems and optimization methods on Riemannian manifolds is also presented. Topics covered include geodesics and completeness of Riemannian manifolds, variations of the p-energy of a curve and Jacobi fields, convex programs on Riemannian manifolds, geometrical constructions of convex functions, flows and energies, applications of convexity, descent algorithms on Riemannian manifolds, TC and TP programs for calculations and plots, all allowing the user to explore and experiment interactively with real life problems in the language of Riemannian geometry. An appendix is devoted to convexity and completeness in Finsler manifolds. For students and researchers in such diverse fields as pu...

  12. Optimizing the radioimmunologic determination methods for cortisol and calcitonin

    International Nuclear Information System (INIS)

    Stalla, G.

    1981-01-01

    In order to build up a specific 125-iodine cortisol radioimmunoassay (RIA), pure cortisol-3-(O-carboxymethyl)oxime was synthesized for the production of antigens and tracers. The cortisol was coupled with tyrosine methyl ester and then labelled with 125-iodine. For the antigen production, the cortisol derivative was coupled by the same method to thyreoglobulin. The major part of the antisera obtained in this way presented high titres. Apart from a high specificity for cortisol, a high affinity was found in the acidic pH range and quantified with a specially developed computer program. An extraction step in the cortisol RIA could thereby be avoided. The assay was carried out with an optimized double antibody principle: the reaction time between the first and the second antiserum was considerably accelerated by the administration of polyethylene glycol. The assay can be carried out automatically by applying a modular analysis system, which operates fast and provides a large capacity. The required quality and accuracy controls were done. The comparison of this assay with other cortisol RIAs showed good correlation. The RIA for human calcitonin was improved. For separating bound and free hormone, the optimized double-antibody technique was applied. The antiserum was examined with respect to its affinity to calcitonin. For the 'zero serum' production, the Florisil extraction method was used. The criteria of the quality and accuracy controls were complied with. Significantly increased calcitonin concentrations were found in a patient group with medullary thyroid carcinoma and in two patients with an additional pheochromocytoma. (orig./MG) [de

  13. Estimating extremes in climate change simulations using the peaks-over-threshold method with a non-stationary threshold

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan; Picek, J.; Beranová, Romana

    2010-01-01

    Roč. 72, 1-2 (2010), s. 55-68 ISSN 0921-8181 R&D Projects: GA ČR GA205/06/1535; GA ČR GAP209/10/2045 Grant - others:GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z30420517 Keywords : climate change * extreme value analysis * global climate models * peaks-over-threshold method * peaks-over-quantile regression * quantile regression * Poisson process * extreme temperatures Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.351, year: 2010

  14. Validation of a method for radionuclide activity optimize in SPECT

    International Nuclear Information System (INIS)

    Perez Diaz, M.; Diaz Rizo, O.; Lopez Diaz, A.; Estevez Aparicio, E.; Roque Diaz, R.

    2007-01-01

    A discriminant method for optimizing the activity administered in NM studies is validated by comparison with ROC curves. The method is tested in 21 SPECT studies performed with a cardiac phantom. Three different cold lesions (L1, L2 and L3) were placed in the myocardium wall for each SPECT. Three activities (84 MBq, 37 MBq or 18.5 MBq) of Tc-99m diluted in water were used as background. Linear discriminant analysis was used to select the parameters that characterize image quality (Background-to-Lesion (B/L) and Signal-to-Noise (S/N) ratios). Two clusters with different image quality (p=0.021) were obtained from the selected variables. The first one involved the studies performed with 37 MBq and 84 MBq, and the second one included the studies with 18.5 MBq. The ratios B/L1, B/L2 and B/L3 are the parameters capable of constructing the discriminant function, with 100% of cases correctly classified into the clusters. The value of 37 MBq is the lowest tested activity for which good results for the B/Li variables were obtained, without significant differences from the results with 84 MBq (p>0.05). The result is consistent with the applied ROC analysis; a correlation of r=0.890 between both methods was obtained. (Author) 26 refs

  15. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    Science.gov (United States)

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
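
    The residual-analysis procedure described in item 15 — filter the signal at a range of cutoffs, compute the RMS residual between the raw and filtered signals, and pick the cutoff where the residual curve departs from the linear trend attributable to noise — can be sketched roughly as follows. The impact-like test signal, sampling rate, and the simple knee-finding rule are placeholders; the zero-lag dual-pass Butterworth filtering matches the general description in the abstract but is not the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 5000.0                                      # hypothetical sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
rng = np.random.default_rng(2)

# Impact-like test force signal: damped oscillation plus broadband noise.
raw = 800 * np.exp(-40 * t) * np.sin(2 * np.pi * 150 * t) + rng.normal(0, 15, t.size)

cutoffs = np.arange(60.0, 1520.0, 20.0)
residuals = []
for fc in cutoffs:
    b, a = butter(2, fc / (fs / 2))              # 2nd order, doubled to 4th by the dual pass
    filtered = filtfilt(b, a, raw)               # zero-lag dual-pass filtering
    residuals.append(np.sqrt(np.mean((raw - filtered) ** 2)))
residuals = np.asarray(residuals)

# Winter-style residual analysis: fit the noise-dominated tail of the residual curve,
# extrapolate its intercept to 0 Hz, and pick the cutoff whose residual matches it.
tail = cutoffs > 1000
slope, intercept = np.polyfit(cutoffs[tail], residuals[tail], 1)
optimal_fc = cutoffs[np.argmin(np.abs(residuals - intercept))]
print(f"optimal cut-off ~ {optimal_fc:.0f} Hz")
```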

  16. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    Science.gov (United States)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
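
    The central idea of item 16 — reduce an open-loop optimal control problem to a static parameter optimization by expressing the input as a finite series of basis functions and evaluating only the system's output — can be sketched generically. The double-integrator plant, the cosine basis, and the cost weights below are illustrative stand-ins, not the STOL landing-path model.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 201)
# Input expressed as a series of basis functions with parameters to be optimized.
basis = np.stack([np.ones_like(t)] + [np.cos(np.pi * k * t) for k in range(1, 5)])

def simulate(u):
    """Black-box plant: a double integrator driven by input u(t); returns the position history."""
    dt = t[1] - t[0]
    x = np.zeros(2)            # [position, velocity]
    pos = []
    for uk in u:
        x = x + dt * np.array([x[1], uk])
        pos.append(x[0])
    return np.asarray(pos)

def cost(params):
    """Static objective in the basis coefficients: reach position 1 with little control effort."""
    u = params @ basis
    pos = simulate(u)
    return (pos[-1] - 1.0) ** 2 + 1e-3 * np.mean(u ** 2)

res = minimize(cost, x0=np.zeros(basis.shape[0]), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-9})
print("optimized basis coefficients:", np.round(res.x, 3))
```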

  17. Improved Extreme-Scenario Extraction Method For The Economic Dispatch Of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2017-01-01

    This paper addresses the economic dispatch (ED) of an active distribution network with renewables. The extreme scenarios are selected from the historical data using the improved minimum volume enclosing ellipsoid (MVEE) algorithm to guarantee the security of system operation while avoiding frequent switching of the transformer tap. It is theoretically proved

  18. Towards a unified study of extreme events using universality concepts and transdisciplinary analysis methods

    Science.gov (United States)

    Balasis, George; Donner, Reik V.; Donges, Jonathan F.; Radebach, Alexander; Eftaxias, Konstantinos; Kurths, Jürgen

    2013-04-01

    The dynamics of many complex systems is characterized by the same universal principles. In particular, systems which are otherwise quite different in nature show striking similarities in their behavior near tipping points (bifurcations, phase transitions, sudden regime shifts) and associated extreme events. Such critical phenomena are frequently found in diverse fields such as climate, seismology, or financial markets. Notably, the observed similarities include a high degree of organization, persistent behavior, and accelerated energy release, which are common to (among others) phenomena related to geomagnetic variability of the terrestrial magnetosphere (intense magnetic storms), seismic activity (electromagnetic emissions prior to earthquakes), solar-terrestrial physics (solar flares), neurophysiology (epileptic seizures), and socioeconomic systems (stock market crashes). It is an open question whether the spatial and temporal complexity associated with extreme events arises from the system's structural organization (geometry) or from the chaotic behavior inherent to the nonlinear equations governing the dynamics of these phenomena. On the one hand, the presence of scaling laws associated with earthquakes and geomagnetic disturbances suggests understanding these events as generalized phase transitions similar to nucleation and critical phenomena in thermal and magnetic systems. On the other hand, because of the structural organization of the systems (e.g., as complex networks) the associated spatial geometry and/or topology of interactions plays a fundamental role in the emergence of extreme events. Here, a few aspects of the interplay between geometry and dynamics (critical phase transitions) that could result in the emergence of extreme events, which is an open problem, will be discussed.

  19. Reliability-based design methods to determine the extreme response distribution of offshore wind turbines

    NARCIS (Netherlands)

    Cheng, P.W.; Bussel, van G.J.W.; Kuik, van G.A.M.; Vugts, J.H.

    2003-01-01

    In this article a reliability-based approach to determine the extreme response distribution of offshore wind turbines is presented. Based on hindcast data, the statistical description of the offshore environment is formulated. The contour lines of different return periods can be determined.

  20. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In the production of oil and gas from wells in deep waters, the flow of hydrocarbons through pipelines is a challenging problem. This environment presents high hydrostatic pressures and low sea bed temperatures, which can favor the formation of solid deposits that, in critical operating conditions such as unplanned shutdowns, may result in a pipeline blockage and consequently incur large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, which is known as an active heating system, aims at keeping the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are the transient temperatures within a pipeline cross-section, and to use optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross-section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem

  1. Methods for Optimizing CRISPR-Cas9 Genome Editing Specificity

    Science.gov (United States)

    Tycko, Josh; Myer, Vic E.; Hsu, Patrick D.

    2016-01-01

    Summary Advances in the development of delivery, repair, and specificity strategies for the CRISPR-Cas9 genome engineering toolbox are helping researchers understand gene function with unprecedented precision and sensitivity. CRISPR-Cas9 also holds enormous therapeutic potential for the treatment of genetic disorders by directly correcting disease-causing mutations. Although the Cas9 protein has been shown to bind and cleave DNA at off-target sites, the field of Cas9 specificity is rapidly progressing with marked improvements in guide RNA selection, protein and guide engineering, novel enzymes, and off-target detection methods. We review important challenges and breakthroughs in the field as a comprehensive practical guide to interested users of genome editing technologies, highlighting key tools and strategies for optimizing specificity. The genome editing community should now strive to standardize such methods for measuring and reporting off-target activity, while keeping in mind that the goal for specificity should be continued improvement and vigilance. PMID:27494557

  2. Experimental evaluation of optimization method for developing ultraviolet barrier coatings

    Science.gov (United States)

    Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao

    2014-01-01

    Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was decided from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically. The radiative transfer in the UV barrier coating was modeled using the radiation element method by ray emission model (REM2). In order to validate the calculated results, the transmittances of these coatings were measured by a spectrophotometer. A UV barrier coating with a low UV transmittance and high VIS transmittance could be achieved. The calculated transmittance showed a similar spectral tendency with the measured one. The use of appropriate particles with optimum size, coating thickness and volume fraction will result in effective UV barrier coatings. UV barrier coatings can be achieved by the application of optical engineering.

  3. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars have prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polarorbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  4. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    Science.gov (United States)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
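
    One standard way to keep occupation numbers between zero and two with unconstrained angle variables, in the spirit of the description in item 4, is the parameterization sketched below. The abstract does not spell out the exact functional form used in SOEO, so this should be read as an illustrative assumption rather than the method's actual equations.

```latex
% Occupation numbers expressed through unconstrained occupation angles \theta_i:
% n_i = 2\sin^2\theta_i automatically satisfies 0 \le n_i \le 2 for any real \theta_i,
% while the electron-number constraint is imposed separately on the Newton-Raphson step.
\begin{align}
  n_i &= 2\sin^2\theta_i, \qquad 0 \le n_i \le 2, \\
  \sum_i n_i &= \sum_i 2\sin^2\theta_i = N .
\end{align}
```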

  5. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method which will make the comparison fair to all compared units. Comparisons using one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety presents a complex system where more and more indicators are constantly being developed to describe it. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present the PROMETHEE-RS model for selection of the optimal method for a composite index. The method for selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
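
    Of the methods compared in this record, TOPSIS is the simplest to sketch: rank alternatives by their relative closeness to an ideal and an anti-ideal point. The decision matrix of road-safety indicators below is made-up, all criteria are treated as "smaller is better", and the weights are arbitrary; this illustrates plain TOPSIS only, not the PROMETHEE-RS selection model.

```python
import numpy as np

def topsis(X, weights, benefit):
    """TOPSIS closeness scores for decision matrix X (rows = units, cols = criteria).

    benefit[j] is True if larger values of criterion j are better.
    """
    R = X / np.linalg.norm(X, axis=0)                 # vector-normalized matrix
    V = R * weights                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)               # higher closeness = better unit

# Hypothetical indicators for 4 police departments (fatalities, injuries, crashes per capita).
X = np.array([[4.2, 38.0, 210.0],
              [3.1, 41.0, 190.0],
              [5.0, 30.0, 250.0],
              [2.8, 33.0, 180.0]])
scores = topsis(X, weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([False, False, False]))
print("ranking (best first):", np.argsort(-scores))
```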

  6. Methods and tools for analysis and optimization of power plants

    Energy Technology Data Exchange (ETDEWEB)

    Assadi, Mohsen

    2000-09-01

    The most noticeable advantage of the introduction of the computer-aided tools in the field of power generation, has been the ability to study the plant's performance prior to the construction phase. The results of these studies have made it possible to change and adjust the plant layout to match the pre-defined requirements. Further development of computers in recent years has opened up for implementation of new features in the existing tools and also for the development of new tools for specific applications, like thermodynamic and economic optimization, prediction of the remaining component life time, and fault diagnostics, resulting in improvement of the plant's performance, availability and reliability. The most common tools for pre-design studies are heat and mass balance programs. Further thermodynamic and economic optimization of plant layouts, generated by the heat and mass balance programs, can be accomplished by using pinch programs, exergy analysis and thermoeconomics. Surveillance and fault diagnostics of existing systems can be performed by using tools like condition monitoring systems and artificial neural networks. The increased number of tools and their various construction and application areas make the choice of the most adequate tool for a certain application difficult. In this thesis the development of different categories of tools and techniques, and their application area are reviewed and presented. Case studies on both existing and theoretical power plant layouts have been performed using different commercially available tools to illuminate their advantages and shortcomings. The development of power plant technology and the requirements for new tools and measurement systems have been briefly reviewed. This thesis contains also programming techniques and calculation methods concerning part-load calculations using local linearization, which has been implemented in an inhouse heat and mass balance program developed by the author

  7. Optimal sizing method for stand-alone photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Groumpos, P P; Papageorgiou, G

    1987-01-01

    The total life-cycle cost of stand-alone photovoltaic (SAPV) power systems is mathematically formulated. A new optimal sizing algorithm for the solar array and battery capacity is developed. The optimum value of a balancing parameter, M, for the optimal sizing of SAPV system components is derived. The proposed optimal sizing algorithm is used in an illustrative example, where a more economical life-cycle cost has been obtained. The question of cost versus reliability is briefly discussed.

  8. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be validated and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  9. An Improved Method for Reconfiguring and Optimizing Electrical Active Distribution Network Using Evolutionary Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Nur Faziera Napis

    2018-05-01

    Full Text Available The presence of optimized distributed generation (DG) with suitable distribution network reconfiguration (DNR) in the electrical distribution network has advantages for voltage support, power loss reduction, deferment of new transmission lines and distribution structures, and system stability improvement. However, installation of a DG unit of non-optimal size with non-optimal DNR may lead to higher power losses, power quality problems, voltage instability and increased operational cost. Thus, appropriate DG and DNR planning is essential and is considered as the objective of this research. An effective heuristic optimization technique named improved evolutionary particle swarm optimization (IEPSO) is proposed in this research. The objective function is formulated to minimize the total power losses (TPL) and to improve the voltage stability index (VSI). The voltage stability index is determined for three load demand levels, namely light load, nominal load, and heavy load, with proper optimal DNR and DG sizing. The performance of the proposed technique is compared with other optimization techniques, namely particle swarm optimization (PSO) and iteration particle swarm optimization (IPSO). Four case studies on the IEEE 33-bus and IEEE 69-bus distribution systems have been conducted to validate the effectiveness of the proposed IEPSO. The optimization results show that the best performance is achieved by the IEPSO technique, with power loss reduction of up to 79.26% and 58.41% improvement in the voltage stability index. Moreover, IEPSO has the fastest computational time for all load conditions as compared to the other algorithms.
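
    As a rough illustration of the particle-swarm machinery underlying PSO, IPSO, and the proposed IEPSO (without the evolutionary improvements or the distribution-network power-flow model), the sketch below minimizes a stand-in loss function over two continuous, normalized DG sizes. All parameters and the toy objective are hypothetical.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization of f over the box [0, 1]^dim (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in "total power loss" as a function of two normalized DG sizes.
loss = lambda s: (s[0] - 0.35) ** 2 + (s[1] - 0.6) ** 2 + 0.05
print(pso(loss, dim=2))
```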

  10. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

    Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve systems of Fredholm integral equations. The effectiveness of the optimal homotopy asymptotic method is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).

  11. Determination of optimal whole body vibration amplitude and frequency parameters with plyometric exercise and its influence on closed-chain lower extremity acute power output and EMG activity in resistance trained males

    Science.gov (United States)

    Hughes, Nikki J.

    The optimal combination of whole body vibration (WBV) amplitude and frequency has not been established. Purpose. To determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods. Resistance trained males (n = 13) completed the following testing sessions: On day 1, power spectrum testing of the bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 or 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatments and after sham-controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on Day 1. Results. No significant differences were found between pre- and sham-control on MP and PP output and on EMG activity in RF, VL, BF and GA. Completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output and peak and mean EMGrms amplitude and EMGrms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p < 0.05). These findings suggest that WBV exposure prior to plyometric exercise does not induce alterations in subsequent MP and PP output and EMGrms activity of the lower extremity. Future studies need to address the time of WBV exposure and the magnitude of external loads that will maximize strength and/or power output.

  12. Optimal Allocation of Power-Electronic Interfaced Wind Turbines Using a Genetic Algorithm - Monte Carlo Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe

    2010-01-01

    Since wind power generation is determined by the wind resource and geographic conditions, the location of wind turbines in a power system network may significantly affect the distribution of power flow, power losses, etc. Furthermore, modern WTs with power-electronic interfaces have the capability of controlling reactive power output ... limit requirements. The method combines the Genetic Algorithm (GA), a gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor ... setting of WTs. The sequential MCS takes into account the stochastic behaviour of wind power generation and load. The proposed hybrid optimization method is demonstrated on an 11 kV 69-bus distribution system....

  13. Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods

    Directory of Open Access Journals (Sweden)

    Saadia Zahid

    2015-01-01

    Full Text Available Audio segmentation is a basis for multimedia content analysis, which is the most important and widely used application nowadays. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. An algorithm is proposed that preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified, firstly, into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data are used for training the classifiers; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. A fast and efficient algorithm is designed that can be used with real-time multimedia applications.
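
    The cascaded classifier described here — bagged SVMs for the speech/non-speech split, followed by an ANN for music versus environment sound — can be sketched with scikit-learn on placeholder feature vectors. The feature extraction, the rule-based silence stage, and all hyperparameters are omitted or made-up; this is only a structural sketch of the two learned stages.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Placeholder per-segment feature vectors (e.g., MFCC statistics) and labels:
# stage 1: 0 = speech, 1 = non-speech; stage 2 (non-speech only): 0 = music, 1 = environment sound.
X = rng.normal(size=(600, 20))
y_speech = rng.integers(0, 2, 600)
y_music = rng.integers(0, 2, 600)

# Stage 1: bagged support vector machines separate speech from non-speech.
stage1 = BaggingClassifier(SVC(kernel="rbf", C=1.0), n_estimators=10, random_state=0)
stage1.fit(X, y_speech)

# Stage 2: an ANN splits the non-speech segments into music vs. environment sound.
nonspeech = y_speech == 1
stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
stage2.fit(X[nonspeech], y_music[nonspeech])

# Cascaded prediction for new segments.
X_new = rng.normal(size=(5, 20))
is_nonspeech = stage1.predict(X_new).astype(bool)
print("speech mask:", ~is_nonspeech)
if is_nonspeech.any():
    print("music/environment predictions:", stage2.predict(X_new[is_nonspeech]))
```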

  14. Models and Methods for Structural Topology Optimization with Discrete Design Variables

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    Structural topology optimization is a multi-disciplinary research field covering optimal design of load carrying mechanical structures such as bridges, airplanes, wind turbines, cars, etc. Topology optimization is a collection of theory, mathematical models, and numerical methods and is often used in the conceptual design phase to find innovative designs. The strength of topology optimization is the capability of determining both the optimal shape and the topology of the structure. In some cases also the optimal material properties can be determined. Optimal structural design problems are modeled ...

  15. A method of validating climate models in climate research with a view to extreme events; Eine Methode zur Validierung von Klimamodellen fuer die Klimawirkungsforschung hinsichtlich der Wiedergabe extremer Ereignisse

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, U

    2000-08-01

    A method is presented to validate climate models with respect to extreme events which are suitable for risk assessment in impact modeling. The algorithm is intended to complement conventional techniques. These procedures mainly compare simulation results with reference data based on single or only a few climatic variables at the same time, under the aspect of how well a model performs in reproducing the known physical processes of the atmosphere. Such investigations are often based on seasonal or annual mean values. For impact research, however, extreme climatic conditions with shorter typical time scales are generally more interesting. Furthermore, such extreme events are frequently characterized by combinations of individual extremes which require a multivariate approach. The validation method presented here basically consists of a combination of several well-known statistical techniques, completed by a newly developed diagnosis module to quantify model deficiencies. First of all, critical threshold values of key climatic variables for impact research have to be derived, serving as criteria to define extreme conditions for a specific activity. Unlike in other techniques, the simulation results to be validated are interpolated to the reference data sampling points in the initial step of this new technique. Besides the fact that the same spatial representation is provided in this way in both data sets for the next diagnostic steps, this procedure also makes it possible to leave the reference basis unchanged for any type of model output and to perform the validation on a real orography. To simultaneously identify the spatial characteristics of a given situation regarding all considered extreme value criteria, a multivariate cluster analysis method for pattern recognition is separately applied to both simulation results and reference data. Afterwards, various distribution-free statistical tests are applied, depending on the specific situation, to detect statistically significant

  17. Oil Reservoir Production Optimization using Single Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Völcker, Carsten; Frydendall, Jan

    2012-01-01

    the injections and oil production such that flow is uniform in a given geological structure. Even in the case of conventional water flooding, feedback based optimal control technologies may enable higher oil recovery than with conventional operational strategies. The optimal control problems that must be solved...

  18. Application of Taguchi method for cutting force optimization in rock

    Indian Academy of Sciences (India)

    In this paper, an optimization study was carried out for the cutting force (Fc) acting on circular diamond sawblades in rock sawing. The peripheral speed, traverse speed, cut depth and flow rate of cooling fluid were considered as operating variables and optimized by using the Taguchi approach for the Fc. An L16(4^4) orthogonal ...
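    As a sketch of the analysis step behind such a design (the 16-run, four-level array and the measured forces below are placeholders, not the paper's data), one computes a smaller-is-better signal-to-noise ratio per run and averages it per factor level:

        # Smaller-is-better S/N analysis for a 4-factor, 4-level sawing experiment (toy data).
        import numpy as np

        # balanced 16-run design: design[i, j] = level (0..3) of factor j in run i
        design = np.array([[r % 4, r // 4, (r % 4 + r // 4) % 4, (r % 4 + 3 * (r // 4)) % 4]
                           for r in range(16)])
        force = np.random.default_rng(1).uniform(50.0, 150.0, size=16)   # hypothetical Fc values

        sn = -10.0 * np.log10(force ** 2)     # smaller-is-better S/N ratio per run

        factors = ["peripheral speed", "traverse speed", "cut depth", "coolant flow rate"]
        for j, name in enumerate(factors):
            level_means = [sn[design[:, j] == lvl].mean() for lvl in range(4)]
            best = int(np.argmax(level_means))        # highest mean S/N marks the preferred level
            print(f"{name}: preferred level {best}, mean S/N per level {np.round(level_means, 2)}")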

  19. The optimal analgesic method in saline infusion sonogram: A comparison of two effective techniques with placebo

    Directory of Open Access Journals (Sweden)

    Sadullah Özkan

    2016-09-01

    Full Text Available Objective: Operations performed with local anesthesia can sometimes be extremely painful and uncomfortable for patients. Our aim was to investigate the optimal analgesic method in saline infusion sonograms. Materials and Methods: This study was performed in our Clinic of Obstetrics and Gynecology between March and August 2011. Ninety-six patients were included. Patients were randomly divided into groups that received saline (controls, group 1), paracervical block (group 2), or paracervical block + intrauterine lidocaine (group 3). In all groups, a visual analogue scale score was obtained during tenaculum placement, while saline was administered, and 30 minutes after the procedure. Results: When all the patients were evaluated, the differences in visual analogue scale scores in premenopausal patients during tenaculum placement, during the saline infusion into the cavity, and 30 minutes following saline infusion sonography were statistically significant between the saline and paracervical block groups, and between the saline and paracervical block + intrauterine lidocaine groups. However, there was no statistically significant difference between the paracervical block and paracervical block + intrauterine lidocaine groups. Conclusion: Our study shows that paracervical block is a safe method to prevent pain during saline infusion sonography in premenopausal patients. The addition of intrauterine lidocaine to the paracervical block does not increase the analgesic effect; moreover, it increases the cost and extends the time that the patient stays in the dorsolithotomy position by 3 minutes.

  20. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Preisler, Jan [Iowa State Univ., Ames, IA (United States)

    1997-01-01

    In this work, we use lasers to enhance possibilities of laser desorption methods and to optimize coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan plumes desorbed at atmospheric pressure via absorption. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot are observed. Total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals negative influence of particle spallation on MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone along the length of the capillary excited by 488-nm Ar-ion laser. The PEO coating was shown to reduce the velocity of EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p

  1. Comparative analysis of methods for modelling the short-term probability distribution of extreme wind turbine loads

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov

    2016-01-01

    We have tested the performance of statistical extrapolation methods in predicting the extreme response of a multi-megawatt wind turbine generator. We have applied the peaks-over-threshold, block maxima and average conditional exceedance rates (ACER) methods for peaks extraction, combined with four...... levels, based on the assumption that the response tail is asymptotically Gumbel distributed. Example analyses were carried out, aimed at comparing the different methods, analysing the statistical uncertainties and identifying the factors, which are critical to the accuracy and reliability...
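    A rough illustration of one such combination (peaks-over-threshold extraction with the Gumbel tail assumption mentioned above); the simulated response channel, threshold choice and target exceedance probability are all invented for the example:

        # Peaks-over-threshold extraction and Gumbel tail fit on a simulated load channel.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        response = rng.gumbel(loc=1.0, scale=0.3, size=60_000)     # stand-in load time series

        threshold = np.quantile(response, 0.98)                    # arbitrary POT threshold
        peaks = response[response > threshold]

        loc, scale = stats.gumbel_r.fit(peaks)                     # Gumbel-distributed tail assumed
        p_target = 1.0e-6                                          # placeholder exceedance probability
        extreme_level = stats.gumbel_r.ppf(1.0 - p_target, loc, scale)
        print(f"{peaks.size} peaks extracted, extrapolated level = {extreme_level:.3f}")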

  2. An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.

    Science.gov (United States)

    Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P

    2012-01-01

    The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for the Volumetric Modulated Arc Therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (eclipse(TM) for rapidarc(TM)) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions and their respective linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions for the solution, such as the avoidance of negative dose during optimization and Monitor Unit reduction. The modeling by the projection method assures a unique treatment plan with beneficial properties, such as the explicit relation between organ weightings and the final dose distribution. Clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and Monitor Units to those obtained by the clinical treatment planning system eclipse(TM). The resulting dose distributions for a prostate (with rectum and bladder as organs at risk), and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan qualities for quadratic programming (QP) and rapidarc(TM) could be achieved at significantly more efficient computational and planning effort using QP. Additionally, results for the quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the estro quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example
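    In the same spirit as the formulation above (a linear dose model with non-negative arclet weights and a quadratic objective), a toy version of the weight optimization can be written with non-negative least squares standing in for the full quadratic program; the dose-influence matrix and prescription are random stand-ins:

        # Non-negative least-squares fit of arclet weights to a prescribed dose (toy data).
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n_voxels, n_arclets = 200, 40
        A = rng.random((n_voxels, n_arclets))       # dose per unit weight of each arclet (stand-in)
        d_prescribed = 2.0 * rng.random(n_voxels)   # desired dose per voxel (stand-in)

        # minimise ||A w - d||_2 subject to w >= 0, i.e. no negative dose contributions
        weights, residual = nnls(A, d_prescribed)
        print("non-zero arclets:", int(np.count_nonzero(weights > 1e-9)),
              "residual:", round(float(residual), 3))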

  3. On the equivalence of optimality criterion and sequential approximate optimization methods in the classical layout problem

    NARCIS (Netherlands)

    Groenwold, A.A.; Etman, L.F.P.

    2008-01-01

    We study the classical topology optimization problem, in which minimum compliance is sought, subject to linear constraints. Using a dual statement, we propose two separable and strictly convex subproblems for use in sequential approximate optimization (SAO) algorithms. Respectively, the subproblems

  4. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient...... and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case...... the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bound coupling with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising....

  5. Optimizing design parameter for light isotopes separation by distillation method

    International Nuclear Information System (INIS)

    Ahmadi, M.

    1999-01-01

    Several methods have been suggested worldwide for producing heavy water, among which chemical isotopic exchange, distillation and electrolysis are widely used on an industrial scale. To select a suitable method for heavy water production in Iran, taking into consideration domestic technology and facilities, a combination of the hydrogen sulphide-water dual temperature process (GS) and distillation (DW) may be proposed. Natural water is firstly enriched up to 15 a% by the GS process and then enriched in a distillation unit up to the grade necessary for Candu type reactors (99.8 a%). The aim of the present thesis is to acquire the know-how, optimize the design parameters, and carry out the basic design for water isotope separation using a distillation process in a plant of the minimum possible scale. In distillation, the vapour phase resulting from heating the liquid phase is evidently composed of the same constituents as the liquid phase. In isotopic distillation, the difference in the composition of the constituents is not considerable. In fact the change in composition is so small that it would seem to make separation impossible; however, direct separation and production of pure products without further processing, which becomes possible by distillation, makes this process one of the most important separation processes. Using distillation to produce heavy water is based on the difference between the boiling points of heavy and light water. This boiling point difference varies inversely with pressure: as the pressure of the whole system decreases, the difference in boiling points increases. On the other hand, by definition, the separation factor is equal to the ratio of the vapour pressure of pure light water to that of heavy water, so decreasing the system pressure increases the separation factor; accordingly, the separation factor should first be computed as a function of the pressure variable. According to the

  6. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies - heuristic, stochastic, genetic and combined - is presented. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge about the kinetic, technical and economic parameters of the optimization problems, are discussed. Several approaches for multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  7. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
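    A compact genetic-algorithm loop of the kind described (selection, crossover, mutation), shown on an arbitrary continuous test objective rather than on the structural problems of the paper:

        # Minimal real-coded genetic algorithm: tournament selection, one-point crossover, mutation.
        import numpy as np

        def fitness(x):                       # arbitrary test objective (to be maximised)
            return -np.sum((x - 0.5) ** 2, axis=-1)

        rng = np.random.default_rng(0)
        pop = rng.random((40, 5))             # 40 individuals, 5 design variables in [0, 1]

        for gen in range(100):
            f = fitness(pop)
            # tournament selection: the fitter of two random individuals becomes a parent
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
            # one-point crossover between consecutive parents
            cut = rng.integers(1, pop.shape[1], size=len(pop))
            mask = np.arange(pop.shape[1]) < cut[:, None]
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # mutation: small Gaussian perturbation of a few genes
            mutate = rng.random(children.shape) < 0.1
            pop = np.clip(children + mutate * rng.normal(0.0, 0.05, children.shape), 0.0, 1.0)

        print("best design found:", pop[np.argmax(fitness(pop))])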

  8. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    Science.gov (United States)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.

  9. Extreme flood estimation by the SCHADEX method in a snow-driven catchment: application to Atnasjø (Norway)

    Science.gov (United States)

    Paquet, Emmanuel; Lawrence, Deborah

    2013-04-01

    The SCHADEX method for extreme flood estimation was developed by Paquet et al. (2006, 2013), and since 2008 it has been the reference method used by Electricité de France (EDF) for dam spillway design. SCHADEX is a so-called "semi-continuous" stochastic simulation method in that flood events are simulated on an event basis and are superimposed on a continuous simulation of the catchment saturation hazard using rainfall-runoff modelling. The MORDOR hydrological model (Garçon, 1999) has thus far been used for the rainfall-runoff modelling. MORDOR is a conceptual, lumped, reservoir model with daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, ground water, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 15 years, in particular for inflow forecasts for French mountainous catchments. SCHADEX has now also been applied to the Atnasjø catchment (463 km²), a well-documented inland catchment in south-central Norway, dominated by snowmelt flooding during spring/early summer. To support this application, a weather pattern classification based on extreme rainfall was first established for Norway (Fleig, 2012). This classification scheme was then used to build a Multi-Exponential Weather Pattern distribution (MEWP), as introduced by Garavaglia et al. (2010) for extreme rainfall estimation. The MORDOR model was then calibrated relative to daily discharge data for Atnasjø. Finally, a SCHADEX simulation was run to build a daily discharge distribution with a sufficient number of simulations for assessing the extreme quantiles. Detailed results are used to illustrate how SCHADEX handles the complex and interacting hydrological processes driving flood generation in this snow-driven catchment. Seasonal and monthly distributions, as well as statistics for several thousand simulated events reaching a 1000-year return level
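    A schematic of the MEWP construction referenced above: fit an exponential tail to the heavy rainfalls observed under each weather pattern and mix the per-pattern exceedance probabilities with the pattern occurrence frequencies (rainfall values and pattern labels below are synthetic placeholders, and the real method involves further seasonal stratification not shown here):

        # MEWP-style mixture of per-weather-pattern exponential rainfall tails (synthetic data).
        import numpy as np

        rng = np.random.default_rng(0)
        patterns = rng.integers(0, 4, size=5000)               # synthetic weather-pattern labels
        rain = rng.exponential(scale=5.0 + 3.0 * patterns)     # synthetic daily rainfall (mm)

        def mewp_exceedance(x, rain, patterns, tail_quantile=0.7):
            """P(rainfall > x), mixing exponential tails fitted separately for each pattern."""
            total = 0.0
            for p in np.unique(patterns):
                r = rain[patterns == p]
                u = np.quantile(r, tail_quantile)              # tail threshold for this pattern
                scale = float(np.mean(r[r > u] - u))           # MLE of the exponential tail scale
                tail_prob = np.mean(r > u) * np.exp(-max(x - u, 0.0) / scale)
                total += (r.size / rain.size) * min(tail_prob, 1.0)   # weight by pattern frequency
            return total

        print("P(daily rainfall > 60 mm) ≈", round(mewp_exceedance(60.0, rain, patterns), 6))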

  10. Topology optimization of bounded acoustic problems using the hybrid finite element-wave based method

    DEFF Research Database (Denmark)

    Goo, Seongyeol; Wang, Semyung; Kook, Junghwan

    2017-01-01

    This paper presents an alternative topology optimization method for bounded acoustic problems that uses the hybrid finite element-wave based method (FE-WBM). The conventional method for the topology optimization of bounded acoustic problems is based on the finite element method (FEM), which...

  11. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  12. Iron Pole Shape Optimization of IPM Motors Using an Integrated Method

    Directory of Open Access Journals (Sweden)

    JABBARI, A.

    2010-02-01

    Full Text Available An iron pole shape optimization method to reduce cogging torque in Interior Permanent Magnet (IPM) motors is developed by using the reduced basis technique coupled with finite element and design of experiments methods. The objective function is defined as the minimum cogging torque. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. This method is demonstrated on the rotor pole shape optimization of a 4-pole/24-slot IPM motor.

  13. Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications

    Science.gov (United States)

    2015-06-24

    Arizona State University, School of Mathematical & Statistical Sciences. The major goals of this project were completed: the exact solution of previously unsolved challenging combinatorial optimization ... A combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion and, second, exactly

  14. Motor relearning program and Bobath method improve motor function of the upper extremities in patients with stroke

    Institute of Scientific and Technical Information of China (English)

    Jinjing Liu; Fengsheng Li; Guihua Liu

    2006-01-01

    BACKGROUND: In the natural evolution of cerebrovascular disease, unconscious use of the affected extremity during drug treatment and daily life can partially improve the function of the affected upper extremity, but recovery is very slow and is also accompanied by the formation of abnormal movement patterns. Therefore, functional training should be emphasized in recovering the motor function of the extremity. OBJECTIVE: To observe the effects of the combination of a motor relearning program and the Bobath method on the motor function of the upper extremity in patients with stroke. DESIGN: Comparison of therapeutic effects taking stroke patients as observation subjects. SETTING: Department of Neurology, General Hospital of Beijing Jingmei Group. PARTICIPANTS: In total, 120 stroke patients, including 60 males and 60 females, averaging (59±3) years of age, who were hospitalized in the Department of Neurology, General Hospital of Beijing Jingmei Group between January 2005 and June 2006, were recruited. The involved patients met the following criteria: stroke onset within 2 weeks; diagnostic criteria of cerebral hemorrhage or infarction established at the 4th National Cerebrovascular Disease Conference; confirmed by skull CT or MRI; informed consent to the therapeutic regimen obtained. The patients were assigned into 2 groups according to their wishes: a rehabilitation group and a control group, with 30 males and 30 females in each group. Patients in the rehabilitation group averaged (59±2) years old, and those in the control group averaged (58±2) years old. METHODS: ① Patients in both groups received routine treatment in the Department of Neurology. When the vital signs of patients in the rehabilitation group were stable, individualized treatment was conducted by the combined application of the motor relearning program and the Bobath method. Meanwhile, training in activities of daily living was performed according to changes in the patients' disease condition at different phases, including the nursing and instruction of body posture, the maintenance of good extremity

  15. Development of Combinatorial Methods for Alloy Design and Optimization

    International Nuclear Information System (INIS)

    Pharr, George M.; George, Easo P.; Santella, Michael L

    2005-01-01

    The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface--an ''alloy library''--and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of the conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat and corrosion resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate and then alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very powerful technique for

  16. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper in virtue of probability density evolution method (PDEM......), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-mega-watt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value...... distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, the comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  17. Thermal management optimization of an air-cooled hydrogen fuel cell system in an extreme environmental condition

    DEFF Research Database (Denmark)

    Gao, Xin; Olesen, Anders Christian; Kær, Søren Knudsen

    2018-01-01

    An air-cooled proton exchange membrane (PEM) fuel cell system is designed and under manufacture for telecommunication back-up power. To enhance its competence in various environments, the system thermal feature is optimized in this work via simulation based on a computational fluid dynamics (CFD) model. The model is three-dimensional (3D) and built in the commercial CFD package Fluent (ANSYS Inc.). It makes the full-scale system-level study feasible by only considering the system essences with adequate accuracy. Through the model, the optimization is attained in several aspects. Firstly...... The intake airflow magnitude is also studied for a more uniform airflow and, in turn, a suppressed temperature disparity inside the system. Following the guidelines drawn by this work on the system design and the operation setting, the air-cooled fuel cell system can be expected with better performances......

  18. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc.; we use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm is not always effective. Therefore great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, a software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, describe the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
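    Purely as an illustration of the two ingredients (not the authors' PSO-HS), a particle-swarm velocity update can be interleaved with a harmony-search-style step that occasionally rebuilds a coordinate from the "harmony memory" of personal bests:

        # Sketch of interleaving PSO velocity updates with a harmony-search style pitch adjustment.
        import numpy as np

        def sphere(x):                                  # benchmark objective to minimise
            return np.sum(x ** 2, axis=-1)

        rng = np.random.default_rng(0)
        n, dim = 30, 10
        x = rng.uniform(-5.0, 5.0, (n, dim)); v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), sphere(x)
        gbest = pbest[np.argmin(pbest_f)].copy()

        for it in range(300):
            r1, r2 = rng.random((2, n, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # PSO step
            x = np.clip(x + v, -5.0, 5.0)
            # HS-style step: with small probability take a coordinate from a random
            # personal best ("harmony memory") and pitch-adjust it slightly
            hs_mask = rng.random((n, dim)) < 0.05
            donor = pbest[rng.integers(0, n, size=n)]
            x = np.where(hs_mask, donor + rng.normal(0.0, 0.1, (n, dim)), x)
            f = sphere(x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)]

        print("best objective value:", float(sphere(gbest)))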

  19. Methods and apparatus for use with extreme ultraviolet light having contamination protection

    Science.gov (United States)

    Chilese, Francis C.; Torczynski, John R.; Garcia, Rudy; Klebanoff, Leonard E.; Delgado, Gildardo R.; Rader, Daniel J.; Geller, Anthony S.; Gallis, Michail A.

    2016-07-12

    An apparatus for use with extreme ultraviolet (EUV) light comprising A) a duct having a first end opening, a second end opening and an intermediate opening intermediate the first end opening and the second end opening, B) an optical component disposed to receive EUV light from the second end opening or to send light through the second end opening, and C) a source of low pressure gas at a first pressure to flow through the duct, the gas having a high transmission of EUV light, fluidly coupled to the intermediate opening. In addition to, or rather than, gas flow, the apparatus may have A) a low pressure gas with a heat control unit thermally coupled to at least one of the duct and the optical component and/or B) a voltage device to generate a voltage between a first portion and a second portion of the duct with a grounded insulative portion therebetween.

  20. MATHEMATICAL METHODS FOR THE OPTIMIZATION OF THE AEOLIAN AND HYDRAULICS ENERGIES WITH APPLICATIONS IN HYDRO-AERODYNAMICS

    Directory of Open Access Journals (Sweden)

    Mircea LUPU

    2012-05-01

    Full Text Available People's life and activity in nature and society depend primarily on air, water, light, climate and ground, and on the use of the aeolian, hydraulic, mechanical and electrical energies generated by the dynamics of these environments. The dynamics of these natural phenomena are partly linear but mostly nonlinear and probabilistic, which leads, for optimal control, to mathematical models with equations of great complexity. In this paper the author presents new mathematical models and methods for the optimization of these phenomena, with technical applications: the optimization of hydraulic and aeolian turbine blades, and of devices for eliminating air pollutants and purifying residual water; hydropneumatic actions (robotics) to balance the ship for roll stability; optimizing sails (wind powered) for extreme durability or propelling force; optimizing aircraft profiles for the drag or lift forces; directing navigation; parachute braking; the wall; etc. The scientific results are accompanied by numerical calculations and are integrated with the specialized literature from our country and abroad.

  1. Optimization of Classical Hydraulic Engine Mounts Based on RMS Method

    Directory of Open Access Journals (Sweden)

    J. Christopherson

    2005-01-01

    Full Text Available Based on RMS averaging of the frequency response functions of the absolute acceleration and relative displacement transmissibility, optimal parameters describing the hydraulic engine mount are determined to explain the internal mount geometry. More specifically, it is shown that a line of minima exists to define a relationship between the absolute acceleration and relative displacement transmissibility of a sprung mass using a hydraulic mount as a means of suspension. This line of minima is used to determine several optimal systems developed on the basis of different clearance requirements, hence different relative displacement requirements, and compare them by means of their respective acceleration and displacement transmissibility functions. In addition, the transient response of the mount to a step input is also investigated to show the effects of the optimization upon the time domain response of the hydraulic mount.

  2. Optimal design method for a digital human–computer interface based on human reliability in a nuclear power plant. Part 3: Optimization method for interface task layout

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Wang, Yiqun; Zhang, Li; Xie, Tian; Li, Min; Peng, Yuyuan; Wu, Daqing; Li, Peiyao; Ma, Congmin; Shen, Mengxu; Wu, Xing; Weng, Mengyun; Wang, Shiwei; Xie, Cen

    2016-01-01

    Highlights: • The authors present an optimization algorithm for interface task layout. • The execution process of the proposed algorithm is depicted. • The performance evaluation adopts a neural network method. • The optimized layouts of the interface tasks of an event were obtained by experiments. - Abstract: This is the last in a series of papers describing the optimal design of a digital human–computer interface of a nuclear power plant (NPP) from three different perspectives based on human reliability. The purpose of the series is to propose different optimization methods from varying perspectives to decrease human factor events that arise from defects of the human–computer interface. The present paper addresses the optimization of how to effectively lay out interface tasks across different screens. Its purpose is to decrease human errors by reducing the distance that an operator moves among different screens in each operation. To resolve the problem, the authors propose an optimization process for interface task layout for the digital human–computer interface of an NPP. To automatically lay out each interface task onto one of the screens in each operation, the paper presents a shortest-moving-path optimization algorithm with a dynamic flag, based on human reliability. To test the algorithm performance, the evaluation method uses a neural network based on human reliability: the lower the human error probabilities, the better the interface task layout among the different screens. Thus, by analyzing the performance of each interface task layout, the optimization result is obtained. Finally, the optimized layouts of the spurious safety injection event interface tasks of the NPP are obtained in an experiment; the proposed method shows good accuracy and stability.

  3. A boolean optimization method for reloading a nuclear reactor

    International Nuclear Information System (INIS)

    Misse Nseke, Theophile.

    1982-04-01

    We attempt to solve the problem of optimal reloading of fuel assemblies in a PWR, without any assumption on the nature of the fuel. Any loading is described by n² boolean variables u_ij. The state of the reactor is characterized by its K_eff and the related power distribution. The resulting non-linear allocation problems are solved through mathematical programming techniques combining the simplex algorithm and an extension of Balas-Geoffrion's algorithm. Some optimal solutions are given for a PWR with assemblies of different enrichments.

  4. Application of Numerical Optimization Methods to Perform Molecular Docking on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. A. Farkov

    2014-01-01

    Full Text Available An analysis of numerical optimization methods for solving a problem of molecular docking has been performed. Some additional requirements for optimization methods according to GPU architecture features were specified. A promising method for implementation on GPU was selected. Its implementation was described and performance and accuracy tests were performed.

  5. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    Science.gov (United States)

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
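    The report's motivating example (allocate treatment capacity between "regular" and "severe" patients to maximise health benefit under time and budget constraints) can be written as a small linear program; the coefficients below are arbitrary stand-ins, not figures from the report:

        # Toy linear program: maximise health benefit subject to clinician-time and budget limits.
        from scipy.optimize import linprog

        # decision variables: numbers of regular (x1) and severe (x2) patients treated
        benefit = [-2.0, -5.0]            # linprog minimises, so negate the benefit per patient
        A_ub = [[1.0, 3.0],               # clinician hours per patient   <= 400 hours
                [100.0, 400.0]]           # cost per patient              <= 30000 budget units
        b_ub = [400.0, 30_000.0]

        res = linprog(c=benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("regular:", round(res.x[0], 1), "severe:", round(res.x[1], 1),
              "total benefit:", round(-res.fun, 1))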

  6. Research on inverse methods and optimization in Italy

    Science.gov (United States)

    Larocca, Francesco

    1991-01-01

    The research activities in Italy on inverse design and optimization are reviewed. The review is focused on aerodynamic aspects in turbomachinery and wing section design. Inverse design of blade rows and ducts of turbomachinery in subsonic and transonic regime are illustrated by the Politecnico di Torino and turbomachinery industry (FIAT AVIO).

  7. A topology optimization method for design of negative permeability metamaterials

    DEFF Research Database (Denmark)

    Diaz, A. R.; Sigmund, Ole

    2010-01-01

    A methodology based on topology optimization for the design of metamaterials with negative permeability is presented. The formulation is based on the design of a thin layer of copper printed on a dielectric, rectangular plate of fixed dimensions. An effective media theory is used to estimate the ...

  8. Resampling: An optimization method for inverse planning in robotic radiosurgery

    International Nuclear Information System (INIS)

    Schweikard, Achim; Schlaefer, Alexander; Adler, John R. Jr.

    2006-01-01

    By design, the range of beam directions in conventional radiosurgery are constrained to an isocentric array. However, the recent introduction of robotic radiosurgery dramatically increases the flexibility of targeting, and as a consequence, beams need be neither coplanar nor isocentric. Such a nonisocentric design permits a large number of distinct beam directions to be used in one single treatment. These major technical differences provide an opportunity to improve upon the well-established principles for treatment planning used with GammaKnife or LINAC radiosurgery. With this objective in mind, our group has developed over the past decade an inverse planning tool for robotic radiosurgery. This system first computes a set of beam directions, and then during an optimization step, weights each individual beam. Optimization begins with a feasibility query, the answer to which is derived through linear programming. This approach offers the advantage of completeness and avoids local optima. Final beam selection is based on heuristics. In this report we present and evaluate a new strategy for utilizing the advantages of linear programming to improve beam selection. Starting from an initial solution, a heuristically determined set of beams is added to the optimization problem, while beams with zero weight are removed. This process is repeated to sample a set of beams much larger compared with typical optimization. Experimental results indicate that the planning approach efficiently finds acceptable plans and that resampling can further improve its efficiency

  9. Use of Simplex Method in Determination of Optimal Rational ...

    African Journals Online (AJOL)

    The optimal rational composition was found to be: Nsu Clay = 47.8%, quartz = 33.7% and CaCO3 = 18.5%. The other clay from Ukpor was found unsuitable at the firing temperature (1000°C) used. It showed bending strength lower than the standard requirement for all compositions studied. To improve the strength an ...

  10. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Directory of Open Access Journals (Sweden)

    Feng Du

    2017-11-01

    Full Text Available This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.
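    A bare-bones version of the augmented-Lagrangian outer loop that the DALMM component is built around, with a generic gradient-based inner solver standing in for subset simulation and an arbitrary toy constraint and objective:

        # Augmented Lagrangian loop for min f(x) subject to g(x) <= 0 (toy problem).
        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2     # objective
        g = lambda x: x[0] + x[1] - 2.0                         # inequality constraint g(x) <= 0

        def augmented(x, lam, rho):
            viol = max(0.0, g(x) + lam / rho)
            return f(x) + 0.5 * rho * viol ** 2 - lam ** 2 / (2.0 * rho)

        x, lam, rho = np.zeros(2), 0.0, 1.0
        for outer in range(10):
            x = minimize(lambda z: augmented(z, lam, rho), x).x   # inner unconstrained solve
            lam = max(0.0, lam + rho * g(x))                      # multiplier update
            rho *= 2.0                                            # penalty growth
        print("x* ≈", np.round(x, 3), " g(x*) =", round(g(x), 4))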

  11. Extreme Ultraviolet Process Optimization for Contact Layer of 14 nm Node Logic and 16 nm Half Pitch Memory Devices

    Science.gov (United States)

    Tseng, Shih-En; Chen, Alek

    2012-06-01

    Extreme ultraviolet (EUV) lithography is considered the most promising single exposure technology at the 27 nm half-pitch node and beyond. The imaging performance of ASML TWINSCAN NXE:3100 has been demonstrated to be able to resolve 26 nm Flash gate layer and 16 nm static random access memory (SRAM) metal layer with a 0.25 numerical aperture (NA) and conventional illumination. Targeting for high volume manufacturing, ASML TWINSCAN NXE:3300B, featuring a 0.33 NA lens with off-axis illumination, will generate a higher contrast aerial image due to improved diffraction order collection efficiency and is expected to reduce target dose via mask biasing. This work performed a simulation to determine how EUV high NA imaging benefits the mask rule check trade-offs required to achieve viable lithography solutions in two device application scenarios: a 14 nm node 6T-SRAM contact layer and a 16 nm half-pitch NAND Flash staggered contact layer. In each application, the three-dimensional mask effects versus Kirchhoff mask were also investigated.

  12. Inpatient weight loss as a precursor to bariatric surgery for adolescents with extreme obesity: optimizing bariatric surgery.

    Science.gov (United States)

    Koeck, Emily; Davenport, Katherine; Barefoot, Leah C; Qureshi, Faisal G; Davidow, Daniel; Nadler, Evan P

    2013-07-01

    As the obesity epidemic takes its toll on patients stricken with the disease and our health care system, debate continues regarding the use of weight loss surgery and its long-term consequences, especially for adolescents. One subset of patients regarding whom there is increased controversy is adolescents with extreme obesity (BMI > 60 kg/m²) because the risk of complications in this weight category is higher than for others undergoing bariatric surgery. Several strategies have been suggested for this patient group, including staged operations, combined operations, intragastric balloon use, and endoluminal sleeve placement. However, the device options are often not available to adolescents, and there are no data regarding staged or combined procedures in this age group. All adolescents with BMI >60 kg/m² referred to our program were evaluated for inpatient medical weight loss prior to laparoscopic sleeve gastrectomy. The program utilizes a multidisciplinary approach with a protein-sparing modified fast diet, exercise, and behavioral modification. Three patients completed the program, and each achieved significant preoperative weight loss through the inpatient program and successfully underwent bariatric surgery. Presurgical weight loss via an inpatient program for adolescents with a BMI >60 kg/m² results in total weight loss comparable to a primary surgical procedure alone, with the benefit of decreasing the perioperative risk.

  13. A novel heuristic method for optimization of straight blade vertical axis wind turbine

    International Nuclear Information System (INIS)

    Tahani, Mojtaba; Babayan, Narek; Mehrnia, Seyedmajid; Shadmehri, Mehran

    2016-01-01

    Highlights: • A novel heuristic method has been proposed for optimization of VAWTs. • The proposed method is the combination of the DMST model with heuristic algorithms. • A continuous/discrete optimization problem has been solved. • A novel continuous optimization algorithm has been developed. • A CFD simulation of the optimized geometry has been carried out. - Abstract: This research study aims to propose a novel heuristic method for optimizing the VAWT design. The method is the combination of continuous/discrete optimization algorithms with double multiple stream tube (DMST) theory. For this purpose a DMST code has been developed and validated using experimental data available in the literature. A novel continuous optimization algorithm is proposed which can be considered as the combination of three heuristic optimization algorithms, namely elephant herding optimization, the flower pollination algorithm and the grey wolf optimizer. The continuous algorithm is combined with the popular discrete ant colony optimization (ACO) algorithm. The proposed method can be utilized for several engineering problems that deal with continuous and discrete variables. In this research study, the chord and diameter of the turbine are selected as continuous decision variables, and the airfoil types and number of blades are selected as discrete decision variables. The average power coefficient over tip speed ratios from 1.5 to 9.5 is considered as the objective function. The optimization results indicated that the optimized geometry can produce a maximum power coefficient 44% higher than the maximum power coefficient of the original turbine. A CFD simulation of the optimized geometry is also carried out. The CFD results indicated that the average vorticity magnitude around the optimized blade is larger than around the original blade, and this results in greater momentum and a higher power coefficient.

  14. [Study of CT Automatic Exposure Control System (CT-AEC) Optimization in CT Angiography of Lower Extremity Artery by Considering Contrast-to-Noise Ratio].

    Science.gov (United States)

    Inada, Satoshi; Masuda, Takanori; Maruyama, Naoya; Yamashita, Yukari; Sato, Tomoyasu; Imada, Naoyuki

    2016-01-01

    To evaluate the image quality and the radiation dose reduction achieved by the setting of the computed tomography automatic exposure control system (CT-AEC) in computed tomographic angiography (CTA) of the lower extremity arteries. Two CT-AEC settings were compared [conventional and contrast-to-noise ratio (CNR) methods]. The conventional method was set to a noise index (NI) of 14 and a tube current range of 10-750 mA. The CNR method was set to an NI of 18, a minimum tube current of (X+Y)/2 mA (X, Y: maximum X (Y)-axis tube current value of the leg at NI 14), and a maximum tube current of 750 mA. Image quality was evaluated by CNR, and radiation dose reduction was evaluated by dose-length product (DLP). In the conventional method, mean CNRs for the pelvis, femur, and leg were 19.9±4.8, 20.4±5.4, and 16.2±4.3, respectively. There was a significant difference between the CNRs of the pelvis and leg (P<0.001), and between the femur and leg (P<0.001). In the CNR method, mean CNRs for the pelvis, femur, and leg were 15.2±3.3, 15.3±3.2, and 15.3±3.1, respectively; no significant difference between pelvis, femur, and leg (P=0.973) was observed. Mean DLPs were 1457±434 mGy·cm in the conventional method and 1049±434 mGy·cm in the CNR method, a significant difference (P<0.001). The CNR method gave equal CNRs for the pelvis, femur, and leg, and was beneficial for radiation dose reduction in CTA of the lower extremity arteries.

  15. ASSESSMENT OF ATMOSPHERIC CORRECTION METHODS FOR OPTIMIZING HAZY SATELLITE IMAGERIES

    Directory of Open Access Journals (Sweden)

    Umara Firman Rizidansyah

    2015-04-01

    Full Text Available The purpose of this research is to examine the suitability of three haze correction methods with respect to the distinctness of surface objects in land cover. Because haze forms differently over different surfaces, the study area is divided into a rural region, assumed to be vegetated (the Balaraja district), and an urban region, assumed to be non-vegetated (the Penjaringan district). Haze reduction used the Dark Object Subtraction (DOS), Virtual Cloud Point (VCP) and Histogram Match (HM) techniques, evaluated with the Haze Optimized Transformation HOT = DN_blue·sin(θ) - DN_red·cos(θ). The main results are: for AVNIR-Rural, VCP performs well on band 1 while HM performs well on bands 2, 3 and 4, so HM can be applied; for AVNIR-Urban, DOS performs well on bands 1, 2 and 3 while HM performs well on band 4, so DOS can be applied; for Landsat-Rural, DOS performs well on bands 1, 2 and 6 while VCP performs well on bands 4 and 5 and gives the smallest average HOT value of 106.547, so DOS and VCP can be applied; for Landsat-Urban, DOS performs well on bands 1, 2 and 6 while VCP performs well on bands 3, 4 and 5, so VCP can be applied.
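    The Haze Optimized Transformation quoted above, HOT = DN_blue·sin(θ) - DN_red·cos(θ), is simple to apply per pixel once the clear-sky line angle θ has been estimated; the band arrays and angle below are placeholders:

        # Per-pixel Haze Optimized Transformation: HOT = DN_blue*sin(theta) - DN_red*cos(theta).
        import numpy as np

        theta = np.deg2rad(58.0)                                          # clear-sky line angle (placeholder)
        dn_blue = np.random.default_rng(0).uniform(0, 255, (512, 512))    # stand-in blue band
        dn_red = np.random.default_rng(1).uniform(0, 255, (512, 512))     # stand-in red band

        hot = dn_blue * np.sin(theta) - dn_red * np.cos(theta)
        print("mean HOT value:", float(hot.mean()))   # larger HOT values indicate hazier pixels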

  16. Autonomous guided vehicles methods and models for optimal path planning

    CERN Document Server

    Fazlollahtabar, Hamed

    2015-01-01

      This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with A...

  17. The Practice of Physical Activity in the Setting of Lower-Extremities Sarcomas: A First Step toward Clinical Optimization

    Directory of Open Access Journals (Sweden)

    Mohamad Assi

    2017-10-01

    Full Text Available Lower-extremities sarcoma patients, with bone tumor and soft-tissue sarcoma, are a unique population at high risk of physical dysfunction and chronic heart diseases. Thus, providing an adequate physical activity (PA) program constitutes a primary part of the adjuvant treatment, aiming to improve patients' quality of life. The main goal of this paper is to offer clear suggestions for clinicians regarding PA in the period between diagnosis and the offered treatments. These preliminary recommendations reflect our interpretation of the clinical and preclinical data published on this topic, after a systematic search on the PubMed database. Accordingly, patients could be advised to (1) start sessions of supportive rehabilitation and low-intensity PA after surgery and (2) increase PA intensities progressively during the home stay. The usefulness of PA during the preoperative period remains largely unknown, but emerging preclinical data on mice bearing intramuscular sarcoma are most likely discouraging. However, efforts are still needed to elucidate in depth the impact of PA before surgery completion. PA should be age-, sex-, and treatment-adapted, as young/adolescent patients, women, and patients receiving platinum-based chemotherapy are more susceptible to physical quality deterioration. Concerning PA intensity, the practice of moderate-intensity resistance and endurance exercises (30–60 min/day) is safe after surgery, even when receiving adjuvant chemo/radiotherapy. The general PA recommendations for cancer patients, 150 min/week of combined moderate-intensity endurance/resistance exercises, could be feasible after 18–24 months of rehabilitation. We believe that these suggestions will help clinicians to design a low-risk and useful PA program.

  18. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2015-01-01

    Full Text Available Unsupervised data classification (or clustering analysis) is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity, and it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research for novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, Biogeography-Based Optimization (BBO), was adapted for data clustering problems by modifying the main operators of the BBO algorithm, which is inspired by the natural biogeography distribution of different species. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, assessment was carried out on six medical and real-life datasets and compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

  19. Blade pitch optimization methods for vertical-axis wind turbines

    Science.gov (United States)

    Kozak, Peter

    Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume method (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and a basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output by 40%–100%, depending on the optimization technique, improving the turbine's competitiveness with a commercially available horizontal-axis turbine.

  20. Numerical solution of the state-delayed optimal control problems by a fast and accurate finite difference θ-method

    Science.gov (United States)

    Hajipour, Mojtaba; Jajarmi, Amin

    2018-02-01

    Using the Pontryagin's maximum principle for a time-delayed optimal control problem results in a system of coupled two-point boundary-value problems (BVPs) involving both time-advance and time-delay arguments. The analytical solution of this advance-delay two-point BVP is extremely difficult, if not impossible. This paper provides a discrete general form of the numerical solution for the derived advance-delay system by applying a finite difference θ-method. This method is also implemented for the infinite-time horizon time-delayed optimal control problems by using a piecewise version of the θ-method. A matrix formulation and the error analysis of the suggested technique are provided. The new scheme is accurate, fast and very effective for the optimal control of linear and nonlinear time-delay systems. Various types of finite- and infinite-time horizon problems are included to demonstrate the accuracy, validity and applicability of the new technique.
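
    As a hedged illustration of the building block involved (not the paper's advance-delay BVP solver), the sketch below applies the scalar finite difference θ-method to a simple test ODE; the functions, step size and tolerance are placeholders.

```python
# Hedged sketch: the basic finite difference theta-method for a scalar ODE x' = f(t, x).
# theta = 0 is explicit Euler, theta = 1 implicit Euler, theta = 0.5 Crank-Nicolson.
import numpy as np

def theta_step(x, t, h, f, dfdx, theta=0.5, newton_iters=20, tol=1e-12):
    """One theta-method step, with the implicit part solved by Newton iteration."""
    x_new = x + h * f(t, x)                      # explicit predictor
    for _ in range(newton_iters):
        g = x_new - x - h * ((1 - theta) * f(t, x) + theta * f(t + h, x_new))
        dg = 1.0 - h * theta * dfdx(t + h, x_new)
        dx = g / dg
        x_new -= dx
        if abs(dx) < tol:
            break
    return x_new

# Example: x' = -2x, x(0) = 1, exact solution exp(-2t)
f = lambda t, x: -2.0 * x
dfdx = lambda t, x: -2.0
h, x, t = 0.1, 1.0, 0.0
for _ in range(10):
    x = theta_step(x, t, h, f, dfdx, theta=0.5)
    t += h
print(x, np.exp(-2 * t))   # close agreement for the Crank-Nicolson choice
```

    The paper's scheme applies this family of discretizations to the coupled state/costate system with delayed and advanced arguments, including a piecewise version for infinite-horizon problems.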

  1. Adjoint-based Mesh Optimization Method: The Development and Application for Nuclear Fuel Analysis

    International Nuclear Information System (INIS)

    Son, Seongmin; Lee, Jeong Ik

    2016-01-01

    In this research, a method for optimizing the mesh distribution is proposed. The proposed method uses an adjoint-based optimization method (adjoint method). The optimized result is obtained by applying this meshing technique to the existing code input deck and is compared to the results produced by the uniform meshing method. Numerical solutions are calculated from an in-house 1D Finite Difference Method code while neglecting axial conduction. The radial fuel nodes were first optimized to best match the Fuel Centerline Temperature (FCT), followed by optimization of the axial nodes to best match the Peak Cladding Temperature (PCT). After obtaining the optimized radial and axial nodes, the nodalization was implemented in the system analysis code and transient analyses were performed to observe the performance of the optimum nodalization. The developed adjoint-based mesh optimization method is applied to MARS-KS, a nuclear system analysis code. Results show that the newly established method yields better results than the uniform meshing method from the numerical point of view. It is again stressed that the mesh optimized for the steady state can also give better numerical results during a transient analysis.

  2. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem
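
    The inverse-iteration idea can be sketched as follows for a generic symmetric positive definite stiffness/mass pair; this only illustrates the cheap eigenvalue approximation that replaces a full eigensolve inside the design loop, not the paper's level-set or multilevel machinery.

```python
# Hedged sketch: inverse iteration for the smallest eigenpair of K x = lambda M x.
import numpy as np

def inverse_iteration(K, M, iters=50, tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(K.shape[0])
    lam_old = np.inf
    for _ in range(iters):
        y = np.linalg.solve(K, M @ x)           # one "inverse" application
        x = y / np.sqrt(y @ (M @ y))            # M-normalize the iterate
        lam = (x @ (K @ x)) / (x @ (M @ x))     # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol * abs(lam):
            break
        lam_old = lam
    return lam, x

# Example with random symmetric positive definite matrices
A = np.random.rand(5, 5); K = A @ A.T + 5 * np.eye(5)
B = np.random.rand(5, 5); M = B @ B.T + np.eye(5)
lam, x = inverse_iteration(K, M)
print(lam, np.min(np.real(np.linalg.eigvals(np.linalg.inv(M) @ K))))
```

    In the shape optimization setting, a few such iterations per design update provide a sufficiently accurate eigenvalue surrogate at a fraction of the cost of a full eigen-solve.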

  3. An Optimal Power Flow (OPF) Method with Improved Power System Stability

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    This paper proposes an optimal power flow (OPF) method taking into account small signal stability as additional constraints. Particle swarm optimization (PSO) algorithm is adopted to realize the OPF process. The method is programmed in MATLAB and implemented to a nine-bus test power system which...... has large-scale wind power integration. The results show the ability of the proposed method to find optimal (or near-optimal) operating points in different cases. Based on these results, the analysis of the impacts of wind power integration on the system small signal stability has been conducted....
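
    A minimal sketch of the generic PSO loop that drives such an OPF search, with a made-up quadratic cost and a penalty standing in for the power-flow and small-signal-stability constraints; the nine-bus model and the stability index used in the paper are not reproduced.

```python
# Hedged sketch: a generic particle swarm optimization loop with a toy penalized objective.
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Toy "generation cost plus demand-balance penalty" objective (illustrative only)
cost = lambda p: np.sum(0.1 * p**2 + 2 * p) + 1e3 * max(0.0, 100.0 - p.sum())**2
best, f = pso(cost, (np.zeros(3), np.full(3, 60.0)))
print(best, f)
```

    In the paper's setting, the objective evaluation would run a full power flow and check the small-signal stability constraint for each candidate operating point.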

  4. Dual-Energy Computed Tomography Angiography of the Lower Extremity Runoff: Impact of Noise-Optimized Virtual Monochromatic Imaging on Image Quality and Diagnostic Accuracy.

    Science.gov (United States)

    Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph

    2016-02-01

    The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, the diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of the F_0.5, 50-keV VMI+, and 60-keV VMI data sets was assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly higher than those obtained with the traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.

  5. Bionic optimization in structural design stochastically based methods to improve the performance of parts and assemblies

    CERN Document Server

    Gekeler, Simon

    2016-01-01

    The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware. Bionic Optimization means finding the best solution to a problem using methods found in nature. As Evolutionary Strategies and Particle Swarm Optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them. A set of sample applications shows how Bionic Optimization works in practice. From academic studies on simple fra...

  6. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.
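
    For orientation, the sketch below implements plain ordinary kriging of gauge precipitation at a target point, i.e. the classical estimator whose error-variance-only objective CBPK extends with a Type-II conditional bias penalty; the covariance model, its parameters and the gauge values are invented, and neither the CB penalty nor ECBPK's combination with the zero-precipitation estimate is reproduced.

```python
# Hedged sketch: ordinary kriging of point precipitation from a handful of gauges.
import numpy as np

def exp_cov(h, sill=1.0, rng_km=40.0):
    # illustrative exponential covariance model
    return sill * np.exp(-h / rng_km)

def ordinary_kriging(xy_obs, z_obs, xy_tgt):
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    # kriging system with a Lagrange multiplier for the unbiasedness constraint
    A = np.ones((n + 1, n + 1)); A[:n, :n] = exp_cov(d); A[-1, -1] = 0.0
    d0 = np.linalg.norm(xy_obs - xy_tgt, axis=1)
    b = np.append(exp_cov(d0), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ z_obs

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])   # km
rain = np.array([2.0, 5.0, 1.0, 8.0])                                     # hourly totals (mm)
print(ordinary_kriging(gauges, rain, np.array([5.0, 5.0])))
```

    CBPK modifies the objective that determines the weights w, trading some error variance for reduced conditional bias at the heavy-precipitation end.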

  7. A Visualization Technique for Accessing Solution Pool in Interactive Methods of Multiobjective Optimization

    OpenAIRE

    Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga

    2015-01-01

    Interactive methods of multiobjective optimization repetitively derive Pareto optimal solutions based on decision maker’s preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker considering any of solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...

  8. Observation of the lymph flow in the lower extremities of edematous patients with noninvasive methods

    International Nuclear Information System (INIS)

    Arai, Isao; Hirota, Akio; Watanabe, Sumio

    1983-01-01

    An RI lymphography with a gamma camera on-line to a computer was used to observe the lymph flow of edematous patients without any invasive procedure and to estimate the active movement of the lymph vessels. The subjects comprised 8 normal volunteers (group 1), 41 non-edematous patients (group 2) and 26 edematous patients (group 3). Four mCi of Tc-99m-HSA in a volume of 0.1 ml was injected subcutaneously in the pretibial region of the lower extremity, and immediately after the injection scintigrams of the thigh were recorded every 5 s for 30 min. Results: 1) Normal volunteers: time-activity curves showed a gradual increase in RI activity over time without remarkable spike-like fluctuations. The maximum count attained was less than 200 cps in all experiments. 2) Non-edematous patients: in 46 out of 57 experiments (80.8%), time-activity curves similar to those of the normal volunteers were observed. On the other hand, the time-activity curves in 11 out of 57 (19.2%) showed a much steeper stepwise increase together with remarkable spike waves. The maximum count was over 200 cps in these cases. 3) Edematous patients: in 12 out of 35 experiments (34.3%), the maximum count was over 200 cps. In these cases of edematous diseases other than lymphedema, and in hyperthyroidism, the time-activity curves showed a rapid stepwise increase with many spikes, and the maximum count was over 500 cps in 6 experiments. In 23 out of 35 (65.7%), the maximum count was less than 200 cps. In these cases, the edema was attributable to secondary lymphedema, hypothyroidism, aging and so on. 4) Relationship between edema and lymph flow: when the subjects were divided into 3 groups (non-edema, mild and severe edema), a maximum count over 200 cps was observed in 16.7% of the non-edema group, 45.8% of the mild-edema group and 9.1% of the severe-edema group.

  9. Using multimodal imaging techniques to monitor limb ischemia: a rapid noninvasive method for assessing extremity wounds

    Science.gov (United States)

    Luthra, Rajiv; Caruso, Joseph D.; Radowsky, Jason S.; Rodriguez, Maricela; Forsberg, Jonathan; Elster, Eric A.; Crane, Nicole J.

    2013-03-01

    Over 70% of military casualties resulting from the current conflicts sustain major extremity injuries. Of these, the majority are caused by blasts from improvised explosive devices. The resulting injuries include traumatic amputations, open fractures, crush injuries, and acute vascular disruption. Critical tissue ischemia—the point at which ischemic tissues lose the capacity to recover—is therefore a major concern, as lack of blood flow to tissues rapidly leads to tissue deoxygenation and necrosis. If left undetected or unaddressed, a potentially salvageable limb may require more extensive debridement or, more commonly, amputation. Predicting wound outcome during the initial management of blast wounds remains a significant challenge, as wounds continue to "evolve" during the debridement process and our ability to assess wound viability remains subjective. Better means of identifying critical ischemia are needed. We developed a swine limb ischemia model in which two imaging modalities were combined to produce an objective and quantitative assessment of wound perfusion and tissue viability. By using three-Charge-Coupled Device (3CCD) and Infrared (IR) cameras, both surface tissue oxygenation and overall limb perfusion could be depicted. We observed that the changes in mean 3CCD and IR values at peak ischemia and during reperfusion correlate well with clinically observed indicators of limb function and vitality. After correcting for baseline mean R-B values, the 3CCD values correlate with surface tissue oxygenation and the IR values with changes in perfusion. This study aims to not only increase fundamental understanding of the processes involved with limb ischemia and reperfusion, but also to develop tools to monitor overall limb perfusion and tissue oxygenation in a clinical setting. A rapid and objective diagnostic for extent of ischemic damage and overall limb viability could provide surgeons with a more accurate indication of tissue viability. This may

  10. A novel optimized LCL-filter designing method for grid connected converter

    DEFF Research Database (Denmark)

    Guohong, Zeng; Rasmussen, Tonny Wederberg; Teodorescu, Remus

    2010-01-01

    This paper presents a new optimized LCL-filter design method for grid-connected voltage source converters. The method is based on an analysis of the converter output voltage components and the inherent relations among the LCL-filter parameters. By introducing an optimizing index of equivalent total capa...

  11. Adjoint Parameter Sensitivity Analysis for the Hydrodynamic Lattice Boltzmann Method with Applications to Design Optimization

    DEFF Research Database (Denmark)

    Pingen, Georg; Evgrafov, Anton; Maute, Kurt

    2009-01-01

    We present an adjoint parameter sensitivity analysis formulation and solution strategy for the lattice Boltzmann method (LBM). The focus is on design optimization applications, in particular topology optimization. The lattice Boltzmann method is briefly described with an in-depth discussion...

  12. Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre

    DEFF Research Database (Denmark)

    Steensgaard, Randi; Dahl Hoffmann, Dorte

    “Useful Method To Optimize The Rehabilitation Effort At A SCI Rehabilitation Centre”. The Nordic Spinal Cord Society (NoSCoS) Meeting, Trondheim.

  13. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu [Texas Tech University (United States); Jablonowski, Christopher [Shell Exploration and Production Company (United States); Lake, Larry [University of Texas at Austin (United States)

    2017-04-15

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
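
    A toy illustration of the Monte Carlo style of development optimization discussed here: evaluate each candidate design against a sample of the uncertain input and pick the design with the best expected value. All quantities below are invented placeholders, not the paper's gas-field or oilfield examples.

```python
# Hedged sketch: grid of candidate designs, Monte Carlo over a single uncertain input.
import numpy as np

rng = np.random.default_rng(1)
scenarios = rng.lognormal(mean=np.log(100.0), sigma=0.35, size=5000)  # uncertain deliverability
capacities = np.linspace(40.0, 220.0, 19)                             # candidate facility sizes

def npv(cap, deliverability, price=3.0, capex_per_unit=8.0):
    # revenue is limited by the smaller of plant capacity and what the field delivers
    return price * min(cap, deliverability) * 10.0 - capex_per_unit * cap

expected = np.array([np.mean([npv(c, s) for s in scenarios]) for c in capacities])
print("capacity maximizing expected NPV:", capacities[np.argmax(expected)])
```

    A stochastic-programming formulation would instead encode the scenarios and recourse decisions in a single mathematical program, which is what makes it more computationally efficient for locating the optimum but less informative about the full outcome distribution.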

  14. Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs

    International Nuclear Information System (INIS)

    Ettehadtavakkol, Amin; Jablonowski, Christopher; Lake, Larry

    2017-01-01

    Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.

  15. Trip optimization system and method for a train

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Ajith Kuttannair; Shaffer, Glenn Robert; Houpt, Paul Kenneth; Movsichoff, Bernardo Adrian; Chan, David So Keung

    2017-08-15

    A system for operating a train having one or more locomotive consists with each locomotive consist comprising one or more locomotives, the system including a locator element to determine a location of the train, a track characterization element to provide information about a track, a sensor for measuring an operating condition of the locomotive consist, a processor operable to receive information from the locator element, the track characterizing element, and the sensor, and an algorithm embodied within the processor having access to the information to create a trip plan that optimizes performance of the locomotive consist in accordance with one or more operational criteria for the train.

  16. Radioimmunoassay (RIA), a highly specific, extremely sensitive quantitative method of analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strecker, H; Hachmann, H; Seidel, L [Farbwerke Hoechst A.G., Frankfurt am Main (Germany, F.R.). Radiochemisches Lab.

    1979-02-01

    Radioimmunoassay is an analytical method combining the sensitivity of radioactivity measurements with the specificity of the antigen-antibody reaction. Substances can thus be measured at concentrations as low as picograms per milliliter of serum, even in the presence of a millionfold excess of otherwise interfering material (for example in serum). The method is simple to perform and is at present mainly used in the field of endocrinology. Further areas of possible application are the diagnosis of infectious diseases, drug research, environmental protection, forensic medicine and general analytics. The quantities of radioactivity, used exclusively in vitro, are in the nanocurie range; the radiation dose is therefore negligible.

  17. Improved Taguchi method based contract capacity optimization for industrial consumer with self-owned generating units

    International Nuclear Information System (INIS)

    Yang, Hong-Tzer; Peng, Pai-Chun

    2012-01-01

    Highlights: ► We propose an improved Taguchi method to determine the optimal contract capacities with SOGUs. ► We solve the highly discrete and nonlinear optimization problem for the contract capacities with SOGUs. ► The proposed improved Taguchi method integrates PSO in Taguchi method. ► The customer using the proposed optimization approach may save up to 12.18% of power expenses. ► The improved Taguchi method can also be well applied to the other similar problems. - Abstract: Contract capacity setting for industrial consumer with self-owned generating units (SOGUs) is a highly discrete and nonlinear optimization problem considering expenditure on the electricity from the utility and operation costs of the SOGUs. This paper proposes an improved Taguchi method that combines existing Taguchi method and particle swarm optimization (PSO) algorithm to solve this problem. Taguchi method provides fast converging characteristics in searching the optimal solution through quality analysis in orthogonal matrices. The integrated PSO algorithm generates new solutions in the orthogonal matrices based on the searching experiences during the evolution process to further improve the quality of solution. To verify feasibility of the proposed method, the paper uses the real data obtained from a large optoelectronics factory in Taiwan. In comparison with the existing optimization methods, the proposed improved Taguchi method has superior performance as revealed in the numerical results in terms of the convergence process and the quality of solution obtained.
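
    To make the optimization target concrete, the sketch below evaluates a simplified yearly cost for a candidate contract capacity (demand charges plus an over-contract penalty) and enumerates a grid of candidates; the tariff structure and load figures are invented and omit the self-owned generating units, whose operating costs are what make the paper's problem hard enough to need the hybrid Taguchi/PSO search.

```python
# Hedged sketch: the kind of discrete, nonsmooth cost function the improved Taguchi
# method searches over -- yearly demand charges plus penalties for exceeding the
# contracted capacity. All numbers are illustrative placeholders.
import numpy as np

monthly_peaks = np.array([820, 790, 860, 900, 1010, 1180,
                          1250, 1230, 1100, 950, 880, 840.0])  # kW

def yearly_cost(contract_kw, basic_rate=220.0, penalty_factor=2.0):
    over = np.maximum(monthly_peaks - contract_kw, 0.0)
    demand_charge = 12 * basic_rate * contract_kw       # fixed charge every month
    penalty = penalty_factor * basic_rate * over.sum()  # surcharge on exceeded kW
    return demand_charge + penalty

candidates = np.arange(700, 1301)                        # small enough to enumerate here
best = candidates[np.argmin([yearly_cost(c) for c in candidates])]
print("optimal contract capacity:", best, "kW")
```

    In the full problem the brute-force enumeration is replaced by the orthogonal-array screening plus PSO refinement described in the abstract, since adding SOGU dispatch makes exhaustive search impractical.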

  18. A primal-dual interior point method for large-scale free material optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...

  19. MULTI-CRITERIA PROGRAMMING METHODS AND PRODUCTION PLAN OPTIMIZATION PROBLEM SOLVING IN METAL INDUSTRY

    OpenAIRE

    Tunjo Perić; Željko Mandić

    2017-01-01

    This paper presents production plan optimization in the metal industry, considered as a multi-criteria programming problem. We first provide a definition of the multi-criteria programming problem and a classification of multi-criteria programming methods. We then apply two multi-criteria programming methods (the STEM method and the PROMETHEE method) to solving a multi-criteria production plan optimization problem in a company from the metal industry. The obtained resul...

  20. Efficiency of operation of wind turbine rotors optimized by the Glauert and Betz methods

    DEFF Research Database (Denmark)

    Okulov, Valery; Mikkelsen, Robert Flemming; Litvinov, I. V.

    2015-01-01

    The models of two types of rotors with blades constructed using different optimization methods are compared experimentally. In the first case, the Glauert optimization by the pulsed method is used, which is applied independently for each individual blade cross section. This method remains the main...... time as a result of direct experimental comparison that the rotor constructed using the Betz method makes it possible to extract more kinetic energy from the homogeneous incoming flow....

  1. Detecting anthropogenic climate change with an optimal fingerprint method

    International Nuclear Information System (INIS)

    Hegerl, G.C.; Storch, H. von; Hasselmann, K.; Santer, B.D.; Jones, P.D.

    1994-01-01

    We propose a general fingerprint strategy to detect anthropogenic climate change and present an application to near-surface temperature trends. An expected time-space-variable pattern of anthropogenic climate change (the 'signal') is identified through application of an appropriate optimally matched space-time filter (the 'fingerprint') to the observations. The signal and the fingerprint are represented in a space with sufficient observed and simulated data. The signal pattern is derived from a model-generated prediction of anthropogenic climate change. Application of the fingerprint filter to the data yields a scalar detection variable. The statistically optimal fingerprint is obtained by weighting the model-predicted pattern towards low-noise directions. A combination of model output and observations is used to estimate the noise characteristics of the detection variable, arising from the natural variability of climate in the absence of external forcing. We then test the null hypothesis that the observed climate change is part of natural climate variability. We conclude that a statistically significant externally induced warming has been observed, with the caveat of a possibly inadequate estimate of the internal climate variability. In order to attribute this warming uniquely to anthropogenic greenhouse gas forcing, more information on the climate's response to other forcing mechanisms (e.g. changes in solar radiation, volcanic or anthropogenic aerosols) and their interaction is needed. (orig./KW)
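
    In the standard optimal fingerprinting notation (a textbook-style summary under the assumption that the paper follows the usual linear formulation; the symbols are chosen here for illustration), the detection variable is the projection of the observed field onto the fingerprint, and the fingerprint is the model-predicted signal pattern rotated towards low-noise directions by the inverse covariance of internal variability:

```latex
\[
  d \;=\; \mathbf{f}^{\mathsf{T}}\,\boldsymbol{\psi}_{\mathrm{obs}},
  \qquad
  \mathbf{f} \;=\; \mathbf{C}_N^{-1}\,\mathbf{s}
\]
% s   : model-predicted anthropogenic signal pattern
% C_N : covariance of natural (unforced) variability, estimated from control runs and observations
% d   : scalar detection variable, compared against its distribution under internal variability alone
```

    Detection then amounts to testing whether the observed value of d lies outside the range produced by internal variability alone.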

  2. METHOD FOR OPTIMAL RESOLUTION OF MULTI-AIRCRAFT CONFLICTS IN THREE-DIMENSIONAL SPACE

    Directory of Open Access Journals (Sweden)

    Denys Vasyliev

    2017-03-01

    Full Text Available Purpose: The risk of critical proximity of several aircraft, and hence the appearance of multi-aircraft conflicts, increases under the current conditions of highly dynamic and dense air traffic. A pressing problem is therefore the development of methods for optimal multi-aircraft conflict resolution that provide the synthesis of conflict-free trajectories in three-dimensional space. Methods: A method for the optimal resolution of multi-aircraft conflicts using heading, speed and altitude change maneuvers has been developed. The optimality criteria are flight regularity, flight economy and the complexity of maneuvering. The method provides the sequential synthesis of the Pareto-optimal set of combinations of conflict-free flight trajectories using multi-objective dynamic programming, and the selection of the optimal combination using a convolution of the optimality criteria. Within the described method the following are defined: the procedure for determining the combinations of conflict-free aircraft states that define the combinations of Pareto-optimal trajectories, and the limitations on the discretization of the conflict resolution process that ensure the absence of unobservable separation violations. Results: The analysis of the proposed method is performed using computer simulation, the results of which show that the synthesized combination of conflict-free trajectories ensures multi-aircraft conflict avoidance and complies with the defined optimality criteria. Discussion: The proposed method can be used for the development of new automated air traffic control systems, airborne collision avoidance systems and intelligent air traffic control simulators, and for research activities.

  3. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    Full Text Available In engineering design, the basic complex method does not have enough global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle of the swarm, evaluated by the fitness function, displaces a complex vertex so as to realize the optimization principle of the largest distance from the complex centroid. This method is applied to the constrained design optimization of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-sectional area of the crane box girder is taken as the objective function, four of its dimensional parameters as the design variables, and requirements on the girder's mechanical performance, manufacturing process, boundary sizes and so on as the constraint conditions. The complex method mixed with PSO is then used to solve the constrained design optimization problem of the crane box girder, and the optimal results achieve the goals of lightweight design and reduced crane manufacturing cost. Practical engineering calculations and a comparative analysis with the basic complex method show that the approach is reliable, practical and efficient.
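
    A minimal sketch of the deterministic half of such a hybrid, the Box complex reflection step, on a toy two-variable stand-in for the girder problem (an area objective with one stress-like constraint); the PSO coupling and the real four-variable girder model from the paper are not reproduced, and all numbers are illustrative.

```python
# Hedged sketch: the basic Box "complex" reflection step for a constrained minimum.
import numpy as np

def feasible(x):
    # toy constraint set: bounds plus a "stress" limit that shrinks with section size
    return np.all(x > 0.1) and np.all(x < 10.0) and (50.0 / (x[0] * x[1]**2) <= 1.0)

def objective(x):                 # toy "cross-sectional area"
    return x[0] * x[1]

def complex_method(n_vertices=6, iters=200, alpha=1.3, seed=0):
    rng = np.random.default_rng(seed)
    verts = []                    # build an initial complex of feasible vertices
    while len(verts) < n_vertices:
        x = rng.uniform(0.1, 10.0, size=2)
        if feasible(x):
            verts.append(x)
    verts = np.array(verts)
    for _ in range(iters):
        f = np.array([objective(v) for v in verts])
        worst = np.argmax(f)
        centroid = np.mean(np.delete(verts, worst, axis=0), axis=0)
        new = centroid + alpha * (centroid - verts[worst])   # reflect the worst vertex
        while not feasible(new) or objective(new) >= f[worst]:
            new = 0.5 * (new + centroid)                     # retreat toward the centroid
            if np.linalg.norm(new - centroid) < 1e-9:
                break
        verts[worst] = new
    f = np.array([objective(v) for v in verts])
    return verts[np.argmin(f)], f.min()

print(complex_method())
```

    In the hybrid, the vertex displacement would instead be guided by the best PSO particle rather than by pure reflection.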

  4. Optimization on Measurement Method for Neutron Moisture Meter

    International Nuclear Information System (INIS)

    Gong Yalin; Wu Zhiqiang; Li Yanfeng; Wang Wei; Song Qingfeng; Liu Hui; Wei Xiaoyun; Zhao Zhonghua

    2010-01-01

    When the water content of the measured material is nonuniform, the field measurements of a neutron moisture meter may be in error, so the measurement errors of the moisture meter associated with water nonuniformity in the material were simulated by the Monte Carlo method. A new measurement mode for the moisture meter, named 'transmission plus scatter', is put forward. The experimental results show that the new measurement method can reduce the error even when the water in the material is nonuniform. (authors)

  5. Scintigraphic method for evaluating reductions in local blood volumes in human extremities

    DEFF Research Database (Denmark)

    Blønd, L; Madsen, Jan Lysgård

    2000-01-01

    were carried out. No significant differences between results obtained by the use of one or two scintigraphic projections were found. The between-subject coefficient of variation was 14% in the lower limb experiment and 11% in the upper limb experiment. The within-subject coefficient of variation was 6......% in the lower limb experiment and 6% in the upper limb experiment. We found a significant relation (r = 0.42, p = 0.018) between the results obtained by the scintigraphic method and the plethysmographic method. In fractions, a mean reduction in blood volume of 0.49 ± 0.14 (2 SD) was found after 1 min of elevation...... of the lower limb and a mean reduction of 0.45 ± 0.10 (2 SD) after half a minute of elevation of the upper limb. We conclude that the method is precise and can be used in investigating physiologic and pathophysiologic mechanisms in relation to blood volumes of limbs not subject to research previously....

  6. A fast method for optimal reactive power flow solution

    Energy Technology Data Exchange (ETDEWEB)

    Sadasivam, G; Khan, M A [Anna Univ., Madras (IN). Coll. of Engineering

    1990-01-01

    A fast successive linear programming (SLP) method for minimizing transmission losses and improving the voltage profile is proposed. The method uses the same compactly stored, factorized constant matrices in all the LP steps, both for power flow solution and for constructing the LP model. The inherent oscillatory convergence of SLP methods is overcome by proper selection of initial step sizes and their gradual reduction. Detailed studies on three systems, including a 109-bus system, reveal the fast and reliable convergence property of the method. (author).

  7. Method validation in pharmaceutical analysis: from theory to practical optimization

    Directory of Open Access Journals (Sweden)

    Jaqueline Kaleian Eserian

    2015-01-01

    Full Text Available The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure the product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high performance liquid chromatography (HPLC) analysis.   Type: Commentary

  8. An Intelligent Optimization Method for Vortex-Induced Vibration Reducing and Performance Improving in a Large Francis Turbine

    Directory of Open Access Journals (Sweden)

    Xuanlin Peng

    2017-11-01

    Full Text Available In this paper, a new methodology is proposed to reduce the vortex-induced vibration (VIV and improve the performance of the stay vane in a 200-MW Francis turbine. The process can be divided into two parts. Firstly, a diagnosis method for stay vane vibration based on field experiments and a finite element method (FEM is presented. It is found that the resonance between the Kármán vortex and the stay vane is the main cause for the undesired vibration. Then, we focus on establishing an intelligent optimization model of the stay vane’s trailing edge profile. To this end, an approach combining factorial experiments, extreme learning machine (ELM and particle swarm optimization (PSO is implemented. Three kinds of improved profiles of the stay vane are proposed and compared. Finally, the profile with a Donaldson trailing edge is adopted as the best solution for the stay vane, and verifications such as computational fluid dynamics (CFD simulations, structural analysis and fatigue analysis are performed to validate the optimized geometry.
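
    A hedged sketch of the surrogate-modelling ingredient: an extreme learning machine fits a random-hidden-layer network to sampled design points by solving only a linear least-squares problem for the output weights, after which PSO can search the cheap surrogate. The input dimension, activation, sample data and "vibration amplitude" target below are placeholders, not the paper's factorial-experiment/CFD data.

```python
# Hedged sketch: a minimal extreme learning machine (ELM) regressor.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))   # random, never-trained input weights
        self.b = rng.standard_normal(n_hidden)

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)              # random hidden-layer features

    def fit(self, X, y):
        # only the output weights are trained, by linear least squares (the ELM trick)
        self.beta, *_ = np.linalg.lstsq(self._h(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

X = np.random.rand(200, 3)                         # e.g. trailing-edge shape parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]        # placeholder "vibration amplitude"
model = ELM(3, 50).fit(X, y)
print(np.mean((model.predict(X) - y) ** 2))        # training error of the surrogate
```

    The appeal in an optimization loop is that each surrogate evaluation costs a matrix multiply instead of a CFD run.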

  9. Cloud Particles Differential Evolution Algorithm: A Novel Optimization Method for Global Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Wei Li

    2015-01-01

    Full Text Available We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states in the proposed algorithm. The gaseous state represents global exploration. The liquid state represents the intermediate process from global exploration to local exploitation. The solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by a condensation operation. In the liquid state, cloud particles carry out macro-local exploitation by a liquefaction operation. A new mutation strategy called cloud differential mutation is introduced in order to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out micro-local exploitation by a solidification operation. The effectiveness of the algorithm is validated on different benchmark problems. The results have been compared with eight well-known optimization algorithms. The statistical analysis of the performance evaluation of the different algorithms on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.
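
    For context, the sketch below is the classical DE/rand/1/bin loop that differential-evolution variants such as CPDE build on; the cloud-specific condensation, liquefaction and solidification operators, and the nucleus-guided mutation, are not reproduced, and the sphere objective is just a placeholder.

```python
# Hedged sketch: classical differential evolution (DE/rand/1/bin).
import numpy as np

def de(objective, bounds, pop_size=40, iters=300, F=0.6, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.choice(np.delete(np.arange(pop_size), i), 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses over
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < fit[i]:                           # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()

sphere = lambda x: np.sum(x**2)
print(de(sphere, (np.full(10, -5.0), np.full(10, 5.0))))
```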

  10. Structural optimization of Au–Pd bimetallic nanoparticles with improved particle swarm optimization method

    International Nuclear Information System (INIS)

    Shao Gui-Fang; Zhu Meng; Shangguan Ya-Li; Li Wen-Ran; Zhang Can; Wang Wei-Wei; Li Ling

    2017-01-01

    Due to the dependence of the chemical and physical properties of bimetallic nanoparticles (NPs) on their structures, a fundamental understanding of their structural characteristics is crucial for their synthesis and wide application. In this article, a systematic atomic-level investigation of Au–Pd bimetallic NPs is conducted by using an improved particle swarm optimization (IPSO) with quantum-corrected Sutton–Chen (Q-SC) potentials at different Au/Pd ratios and different sizes. In the IPSO, simulated annealing is introduced into the classical particle swarm optimization (PSO) to improve its effectiveness and reliability. In addition, the influences of the initial structure, particle size and composition on structural stability and structural features are also studied. The simulation results reveal that the initial structures have little effect on the stable structures, but greatly influence the converging rate, and the convergence rate of the mixed initial structure is clearly faster than those of the core-shell and phase-separated structures. We find that the Au–Pd NPs prefer structures that are Au-rich in the outer layers and Pd-rich in the inner ones. In particular, when the Au/Pd ratio is 6:4, the nanoparticle (NP) presents a well-defined Pd-core/Au-shell structure. (paper)
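
    For reference, the standard Sutton–Chen functional form (written here in its usual textbook notation; the quantum-corrected Au, Pd and cross parameters used in the paper are not reproduced) expresses the total energy as a pairwise repulsion balanced by a many-body density term:

```latex
\[
  E \;=\; \varepsilon \left[ \frac{1}{2}\sum_{i}\sum_{j \ne i} \left(\frac{a}{r_{ij}}\right)^{n}
          \;-\; c \sum_{i} \sqrt{\rho_i} \right],
  \qquad
  \rho_i \;=\; \sum_{j \ne i} \left(\frac{a}{r_{ij}}\right)^{m}
\]
% epsilon, a, c, n, m : element-dependent parameters (mixing rules are needed for Au-Pd pairs)
% r_ij                : distance between atoms i and j
```

    The optimization task is then to find the atomic arrangement (which lattice site hosts Au and which Pd, and the overall geometry) that minimizes E for a given composition and size.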

  11. Optimization of Production Processes Using the Yamazumi Method

    Directory of Open Access Journals (Sweden)

    Dušan Sabadka

    2017-12-01

    Full Text Available Manufacturing companies now place great emphasis on competitiveness and look for ways to exploit their resources more efficiently. This paper presents an efficiency improvement of an automotive transmission assembly production line by means of line balancing. Three assembly stations where waste occurs and where the requirements for achieving the production capacity are not met were selected for optimization. Several measures were proposed for the assembly lines concerned to reduce the operations: eliminating unnecessary activities in the assembly processes, reducing the cycle time, and balancing the manpower workload through line balancing using a Yamazumi chart and takt time. The results of the proposed measures were compared with the current situation in terms of the increase in the efficiency of the production line.
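
    A small sketch of the arithmetic behind such a rebalance, assuming invented operation times and demand: compute the takt time and stack operations onto stations so that no station bar exceeds it, which is what the Yamazumi chart visualizes. A simple greedy fill in precedence order stands in for the manual rebalancing described in the paper.

```python
# Hedged sketch: takt-time calculation and a greedy Yamazumi-style station fill.
available_s_per_shift = 7.5 * 3600
demand_per_shift = 450
takt = available_s_per_shift / demand_per_shift          # 60 s per unit

operations = [("press bearing", 22), ("insert shaft", 18), ("fit gear set", 31),
              ("torque bolts", 14), ("grease", 9), ("leak test", 26), ("label", 7)]

stations, current, load = [], [], 0.0
for name, t in operations:                               # keep precedence order
    if load + t > takt:                                  # station bar would exceed takt
        stations.append((current, load))
        current, load = [], 0.0
    current.append(name)
    load += t
stations.append((current, load))

for i, (ops, station_load) in enumerate(stations, 1):
    print(f"station {i}: {station_load:4.0f} s ({station_load / takt:5.1%} of takt) -> {ops}")
```

    In practice the rebalance also weighs ergonomics and precedence constraints that a pure greedy fill ignores.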

  12. Optimization method for an evolutional type inverse heat conduction problem

    International Nuclear Information System (INIS)

    Deng Zuicha; Yu Jianning; Yang Liu

    2008-01-01

    This paper deals with the determination of a pair (q, u) in the heat conduction equation u_t - u_{xx} + q(x,t)u = 0, with initial and boundary conditions u(x,0) = u_0(x), u_x|_{x=0} = u_x|_{x=1} = 0, from the overspecified data u(x, t) = g(x, t). By a time semi-discrete scheme, the problem is transformed into a sequence of inverse problems in which the unknown coefficients are purely space dependent. Based on the optimal control framework, the existence, uniqueness and stability of the solution (q, u) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced.

  13. Optimization method for an evolutional type inverse heat conduction problem

    Science.gov (United States)

    Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu

    2008-01-01

    This paper deals with the determination of a pair (q, u) in the heat conduction equation u_t - u_{xx} + q(x,t)u = 0, with initial and boundary conditions u(x,0) = u_0(x), u_x|_{x=0} = u_x|_{x=1} = 0, from the overspecified data u(x, t) = g(x, t). By a time semi-discrete scheme, the problem is transformed into a sequence of inverse problems in which the unknown coefficients are purely space dependent. Based on the optimal control framework, the existence, uniqueness and stability of the solution (q, u) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced.

  14. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Science.gov (United States)

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration with the same STRESS value is sought, allowing the search to escape the current local minimum. (SLD)

  15. Mathematical foundation of the optimization-based fluid animation method

    DEFF Research Database (Denmark)

    Erleben, Kenny; Misztal, Marek Krzysztof; Bærentzen, Jakob Andreas

    2011-01-01

    We present the mathematical foundation of a fluid animation method for unstructured meshes. Key contributions not previously treated are the extension to include diffusion forces and higher order terms of non-linear force approximations. In our discretization we apply a fractional step method to ...

  16. First-order Convex Optimization Methods for Signal and Image Processing

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm

    2012-01-01

    In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can...... be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...
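
    As one concrete instance of the ingredients surveyed here (first-order methods, proximal operators, sparse formulations), the sketch below gives a hedged proximal-gradient/ISTA iteration for l1-regularized least squares; the problem sizes, regularization weight and data are placeholders, not examples from the thesis.

```python
# Hedged sketch: proximal-gradient (ISTA) iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                # gradient of the least-squares term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = ista(A, b, lam=0.1)
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```

    Accelerated variants (e.g. FISTA) and smoothing techniques improve the iteration complexity without changing the per-iteration structure shown here.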

  17. Element stacking method for topology optimization with material-dependent boundary and loading conditions

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Park, Y.K.; Kim, Y.Y.

    2007-01-01

    A new topology optimization scheme, called the element stacking method, is developed to better handle design optimization involving material-dependent boundary conditions and selection of elements of different types. If these problems are solved by existing standard approaches, complicated finite...... element models or topology optimization reformulation may be necessary. The key idea of the proposed method is to stack multiple elements on the same discretization pixel and select a single or no element. In this method, stacked elements on the same pixel have the same coordinates but may have...... independent degrees of freedom. Some test problems are considered to check the effectiveness of the proposed stacking method....

  18. Achievement of extreme resolution for the selective by depth Moessbauer method on conversion electrons

    International Nuclear Information System (INIS)

    Babenkov, M.I.; Zhdanov, V.S.; Ryzhikh, V.Yu.; Chubisov, M.A.

    2001-01-01

    At the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan, the depth-selective conversion electron Moessbauer spectroscopy (DSCEMS) method was implemented on a facility built around a double-focusing magnetic sector beta-spectrometer, equipped with a non-equipotential electron source in the multi-ribbon variant and a position-sensitive detector. In this work, model statistical calculations were carried out of the energy and angular distributions of electrons that have undergone only a few inelastic scattering events.

  19. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques covering several different aspects of materials research are presented in this volume. They are concerned with property evaluation at 4 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments, and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials.

  20. The optimal design support system for shell components of vehicles using the methods of artificial intelligence

    Science.gov (United States)

    Szczepanik, M.; Poteralski, A.

    2016-11-01

    The paper is devoted to an application of evolutionary methods and the finite element method to the optimization of shell structures. The optimization of the thickness of a car wheel (shell) by minimization of a stress functional is considered. The car wheel geometry is built from three surfaces of revolution: the central surface with the holes intended for the fastening bolts, the surface of the wheel ring, and the surface connecting the two mentioned earlier. The last one is subjected to the optimization process. The structures are discretized by triangular finite elements and subjected to volume constraints. Using the proposed method, the material properties or thicknesses of the finite elements are changed evolutionarily and some of the elements are eliminated. As a result, the optimal shape, topology and material or thickness of the structures are obtained. The numerical examples demonstrate that the method based on evolutionary computation is an effective technique for solving computer-aided optimal design problems.