Hybrid Biogeography Based Optimization for Constrained Numerical and Engineering Optimization
Directory of Open Access Journals (Sweden)
Zengqiang Mi
2015-01-01
Full Text Available Biogeography-based optimization (BBO) is a new competitive population-based algorithm inspired by biogeography. It simulates the migration of species in nature to share information. A new hybrid BBO (HBBO) is presented in this paper for constrained optimization. By reasonably combining the differential evolution (DE) mutation operator with the simulated binary crossover (SBX) of genetic algorithms (GAs), a new mutation operator is proposed to generate promising solutions instead of the random mutation in basic BBO. In addition, DE mutation is still integrated to update one half of the population to further lead the evolution towards the global optimum, and chaotic search is introduced to improve the diversity of the population. HBBO is tested on twelve benchmark functions and four engineering optimization problems. Experimental results demonstrate that HBBO is effective and efficient for constrained optimization; in contrast with other state-of-the-art evolutionary algorithms (EAs), the performance of HBBO is better, or at least comparable, in terms of the quality of the final solutions and computational cost. Furthermore, the influence of the maximum mutation rate is also investigated.
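The DE mutation that HBBO builds on is, in its standard DE/rand/1 form, easy to state; the following is a minimal illustrative sketch of that standard operator, not the paper's combined DE/SBX operator:

```python
import random

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3
    are distinct population indices, all different from i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

random.seed(1)
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]]
mutant = de_rand_1(pop, 0, F=0.5)
print(mutant)  # a 2-dimensional mutant vector built from three other members
```

The scale factor F controls how strongly the difference vector perturbs the base vector; HBBO layers SBX recombination on top of this basic step.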
Mixed-Integer Constrained Optimization Based on Memetic Algorithm
Directory of Open Access Journals (Sweden)
Y.C. Lin
2013-04-01
Full Text Available Evolutionary algorithms (EAs) are population-based global search methods. They have been successfully applied to many complex optimization problems. However, EAs frequently fail to converge in the absence of local search mechanisms. Memetic algorithms (MAs) are hybrid EAs that combine genetic operators with local search methods. With global exploration and local exploitation of the search space, MAs are capable of obtaining more high-quality solutions. On the other hand, mixed-integer hybrid differential evolution (MIHDE), an EA-based search algorithm, has been successfully applied to many mixed-integer optimization problems. In this paper, a memetic algorithm based on MIHDE is developed for solving mixed-integer optimization problems. Most real-world mixed-integer optimization problems, however, involve equality and/or inequality constraints. In order to handle constraints effectively, an evolutionary Lagrange method based on the memetic algorithm is developed to solve mixed-integer constrained optimization problems. The proposed algorithm is implemented and tested on two benchmark mixed-integer constrained optimization problems. Experimental results show that the proposed algorithm can find better optimal solutions than some other search algorithms, which implies that the proposed memetic algorithm is a good approach to mixed-integer optimization problems.
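Evolutionary Lagrange methods like the one mentioned above typically score candidates through an augmented Lagrangian; the exact formulation in the paper is not given here, but a textbook sketch for equality constraints looks like this:

```python
def augmented_lagrangian(f, h, x, lam, mu):
    """Classic augmented Lagrangian for equality constraints h_i(x) = 0:
    L(x) = f(x) + sum_i lam_i * h_i(x) + (mu / 2) * sum_i h_i(x)**2."""
    hv = [hi(x) for hi in h]
    return (f(x)
            + sum(l * v for l, v in zip(lam, hv))
            + 0.5 * mu * sum(v * v for v in hv))

# Hypothetical toy problem: minimize x^2 subject to x - 1 = 0.
f = lambda x: x ** 2
h = [lambda x: x - 1.0]
val = augmented_lagrangian(f, h, 2.0, lam=[1.0], mu=2.0)
print(val)  # 4 + 1*1 + (2/2)*1 = 6.0
```

In an evolutionary Lagrange scheme, the EA minimizes L over x while an outer loop updates the multipliers lam (and possibly the penalty mu) between generations.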
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single- and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint-handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete, and mixed variables in addition to continuous variables; application of constraint-handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
Directory of Open Access Journals (Sweden)
Vivek Patel
2012-08-01
Full Text Available Nature-inspired population-based algorithms form a research field that simulates different natural phenomena to solve a wide range of problems. Researchers have proposed several algorithms based on different natural phenomena. Teaching-learning-based optimization (TLBO) is a recently proposed population-based algorithm that simulates the teaching-learning process of the classroom. This algorithm does not require any algorithm-specific control parameters. In this paper, the concept of elitism is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated. The effects of common control parameters, such as the population size and the number of generations, on the performance of the algorithm are also investigated. The proposed algorithm is tested on 35 constrained benchmark functions with different characteristics, and its performance is compared with that of other well-known optimization algorithms. The proposed algorithm can be applied to various optimization problems of the industrial environment.
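The teacher phase of TLBO, the core update these TLBO papers share, can be sketched as follows; this is a minimal illustrative implementation of the commonly published update rule, not the authors' code:

```python
import random

def teacher_phase(pop, fit):
    """One TLBO teacher-phase step for minimization: each learner moves
    toward the current best solution (the 'teacher') relative to the
    class mean, and the move is kept only if it improves fitness."""
    n, d = len(pop), len(pop[0])
    teacher = min(pop, key=fit)
    mean = [sum(x[j] for x in pop) / n for j in range(d)]
    out = []
    for x in pop:
        tf = random.randint(1, 2)   # teaching factor, randomly 1 or 2
        r = random.random()
        cand = [x[j] + r * (teacher[j] - tf * mean[j]) for j in range(d)]
        out.append(cand if fit(cand) < fit(x) else x)  # greedy selection
    return out

random.seed(0)
sphere = lambda x: sum(c * c for c in x)
pop = [[3.0, -2.0], [1.0, 1.0], [-0.5, 2.5], [2.0, 0.5]]
new_pop = teacher_phase(pop, sphere)
print(min(map(sphere, new_pop)) <= min(map(sphere, pop)))  # True
```

Because of the greedy selection, the best fitness in the population can never get worse in a step; the elitism studied in the paper above additionally preserves the best solutions across the learner phase.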
A Framework for Constrained Optimization Problems Based on a Modified Particle Swarm Optimization
Directory of Open Access Journals (Sweden)
Biwei Tang
2016-01-01
Full Text Available This paper develops a particle swarm optimization (PSO)-based framework for constrained optimization problems (COPs). Aiming to enhance the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences its performance, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. To increase the diversity of solutions and decrease optimization difficulty, the adaptive relaxation method, combined with the feasibility-based rule, is applied to handle the constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified on 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and several well-known methods from the literature. Simulation results confirm that the proposed method is highly competitive in terms of solution quality and can be considered a viable alternative for solving COPs.
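The feasibility-based rule referenced here (and in several other abstracts on this page) is usually the Deb-style pairwise comparison; a minimal sketch of that standard rule is below, noting that the paper's adaptive relaxation adds further machinery on top:

```python
def feasibility_better(a, b, f, viol):
    """Deb-style feasibility rule: a feasible solution beats an infeasible
    one; two feasible solutions compare by objective value; two infeasible
    solutions compare by total constraint violation."""
    va, vb = viol(a), viol(b)
    if va == 0.0 and vb == 0.0:
        return a if f(a) <= f(b) else b
    if va == 0.0:
        return a
    if vb == 0.0:
        return b
    return a if va <= vb else b

# Hypothetical toy COP: minimize x0^2 subject to x0 >= 1.
f = lambda x: x[0] ** 2
viol = lambda x: max(0.0, 1.0 - x[0])
print(feasibility_better([2.0], [0.5], f, viol))  # [2.0]: feasible beats infeasible
print(feasibility_better([1.0], [2.0], f, viol))  # [1.0]: both feasible, lower f wins
```

Adaptive relaxation methods loosen the feasibility threshold early in the run (treating slightly infeasible points as feasible) and tighten it over time, which keeps diversity near the constraint boundary.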
Contingency-Constrained Optimal Power Flow Using Simplex-Based Chaotic-PSO Algorithm
Directory of Open Access Journals (Sweden)
Zwe-Lee Gaing
2011-01-01
Full Text Available This paper proposes solving the contingency-constrained optimal power flow (CC-OPF) problem by a simplex-based chaotic particle swarm optimization (SCPSO). The objective of CC-OPF, with the valve-point loading effects of generators considered, is to minimize the total generation cost, reduce transmission loss, and improve the bus-voltage profile under normal or post-contingency states. The proposed SCPSO method, which involves a chaotic map and the downhill simplex search, can avoid the premature convergence of PSO and escape local minima. The effectiveness of the proposed method is demonstrated on two power systems with contingency constraints and compared with other stochastic techniques in terms of solution quality and convergence rate. The experimental results show that the SCPSO-based CC-OPF method has suitable mutation schemes, showing robustness and effectiveness in solving contingency-constrained OPF problems.
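Chaotic-PSO variants like SCPSO commonly draw their perturbations from the logistic map; the specific map used in this paper is not stated here, but the standard logistic map can be sketched as:

```python
def logistic_map(x0, mu=4.0, steps=10):
    """Iterate the logistic map x <- mu * x * (1 - x). At mu = 4 the
    sequence is chaotic on (0, 1), and such sequences are often used to
    initialize or perturb PSO particles instead of uniform random numbers."""
    xs, x = [], x0
    for _ in range(steps):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_map(0.7, steps=8)
print(all(0.0 <= x <= 1.0 for x in seq))  # True: iterates stay in [0, 1]
```

The attraction of chaotic sequences is that they are deterministic yet ergodic over the interval, which helps a swarm escape local minima without extra tuning.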
Linearly constrained minimax optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first-order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
Directory of Open Access Journals (Sweden)
R. Venkata Rao
2014-01-01
Full Text Available The present work proposes a multi-objective improved teaching-learning-based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning-based optimization (TLBO) algorithm, adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of a number of teachers, an adaptive teaching factor, tutorial training, and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e., the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed on the unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The assessment is done using the inverted generational distance (IGD) measure. The IGD measures obtained by the MO-ITLBO algorithm are compared with those of other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competing algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of the unconstrained test functions and the 3rd rank in the optimization of the constrained test functions.
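The inverted generational distance (IGD) measure used in that assessment has a simple definition, sketched here for illustration:

```python
def igd(reference, approx):
    """Inverted generational distance: the average, over sampled points of
    the true Pareto front (reference), of the Euclidean distance to the
    nearest point of the approximation set. Lower is better; 0 means the
    approximation covers every reference point exactly."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return sum(min(dist(r, a) for a in approx) for r in reference) / len(reference)

ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(ref, ref))                     # 0.0: perfect approximation
print(round(igd(ref, [[0.0, 1.0]]), 4))  # 0.7071: a one-point front covers poorly
```

Because IGD averages over reference points, it penalizes both poor convergence to the front and poor spread along it, which is why it is a common single-number summary in CEC-style competitions.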
Directory of Open Access Journals (Sweden)
Liyang Wang
2017-01-01
Full Text Available The application of biped robots has always been hampered by their high energy consumption. This paper contributes by optimizing the joint torques to decrease energy consumption without changing the biped gaits. In this work, a constrained quadratic programming (QP) problem for energy optimization is formulated, and a neurodynamics-based solver is presented to solve it. Differing from the existing literature, the proposed neurodynamics-based energy optimization (NEO) strategy minimizes the energy consumption while simultaneously guaranteeing the following three important constraints: (i) the force-moment equilibrium equation of biped robots, (ii) the frictions applied by each leg on the ground to hold the biped robot without slippage or tipping over, and (iii) the physical limits of the motors. Simulations demonstrate that the proposed strategy is effective for energy-efficient biped walking.
DEFF Research Database (Denmark)
Wang, Yong; Cai, Zixing; Zhou, Yuren
2009-01-01
A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two...... mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on current population state. Experiments on 13 benchmark test functions...... and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...
A Probability Collectives Approach with a Feasibility-Based Rule for Constrained Optimization
Directory of Open Access Journals (Sweden)
Anand J. Kulkarni
2011-01-01
Full Text Available This paper demonstrates an attempt to incorporate a simple and generic constraint-handling technique into the Probability Collectives (PC) approach for solving constrained optimization problems. The PC approach optimizes any complex system by decomposing it into smaller subsystems and treating them in a distributed and decentralized way. These subsystems can be viewed as a multi-agent system with rational and self-interested agents optimizing their local goals. However, as there is no inherent constraint-handling capability in the PC approach, a real challenge is to take constraints into account while making the agents work collectively, avoiding the tragedy of the commons, to optimize the global/system objective. At the core of the PC optimization methodology are the concepts of deterministic annealing in statistical physics, game theory, and Nash equilibrium. Moreover, a rule-based procedure is incorporated to handle solutions based on the number of constraints violated and to drive the convergence towards feasibility. Two specially developed cases of the circle packing problem with known solutions are solved, and the true optimum results are obtained at reasonable computational cost. The proposed algorithm is shown to be sufficiently robust, and strengths and weaknesses of the methodology are also discussed.
OPTIMIZED PARTICLE SWARM OPTIMIZATION BASED DEADLINE CONSTRAINED TASK SCHEDULING IN HYBRID CLOUD
Directory of Open Access Journals (Sweden)
Dhananjay Kumar
2016-01-01
Full Text Available Cloud computing is a dominant way of sharing computing resources that can be configured and provisioned easily. Task scheduling in a hybrid cloud is a challenge, as it is difficult to deliver the best QoS (Quality of Service) when demand is high. In this paper, a new resource allocation algorithm is proposed to find the best external cloud provider when the intermediate provider's resources are not enough to satisfy the customer's demand. The proposed algorithm, called Optimized Particle Swarm Optimization (OPSO), combines two metaheuristic algorithms, namely particle swarm optimization (PSO) and ant colony optimization (ACO). These metaheuristic algorithms search the solution space for the best resource from the pool of resources and seek maximum profit even when the number of tasks submitted for execution is very high. This optimization is performed to allocate job requests to internal and external cloud providers so as to obtain maximum profit. It helps to improve system performance by improving CPU utilization and handling multiple requests at the same time. The simulation results show that OPSO yields 0.1%-5% more profit to the intermediate cloud provider compared with the standard PSO and ACO algorithms, and it also increases CPU utilization by 0.1%.
Trends in PDE constrained optimization
Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan
2014-01-01
Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economic, and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on "Optimization with Partial Differential Equations" from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE-constrained optimization. New analytic and algorithmic paradigms have been developed, implemented, and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics. The book is divided into five sections on "Constrained Optimization, Identification and Control"...
Constrained Multiobjective Biogeography Optimization Algorithm
Directory of Open Access Journals (Sweden)
Hongwei Mo
2014-01-01
Full Text Available Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining them with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA.
Biologically constrained optimization based cell membrane segmentation in C. elegans embryos.
Azuma, Yusuke; Onami, Shuichi
2017-06-19
Recent advances in bioimaging and automated analysis methods have enabled the large-scale systematic analysis of cellular dynamics during the embryonic development of Caenorhabditis elegans. Most of these analyses have focused on cell lineage tracing rather than cell shape dynamics. Cell shape analysis requires cell membrane segmentation, which is challenging because of insufficient resolution and image quality. This problem is currently solved by complicated segmentation methods requiring laborious and time-consuming parameter adjustments. Our new framework, BCOMS (Biologically Constrained Optimization based cell Membrane Segmentation), automates the extraction of the cell shapes of C. elegans embryos. Both the segmentation and evaluation processes are automated. To automate the evaluation, we solve an optimization problem under biological constraints. The performance of BCOMS was validated against a manually created ground truth of the 24-cell stage embryo. The average deviation of 25 cell shape features was 5.6%. The deviation was mainly caused by membranes parallel to the focal planes, which either contact the surfaces of adjacent cells or make no contact with other cells. Because segmentation of these membranes was difficult even by manual inspection, the automated segmentation was sufficiently accurate for cell shape analysis. As the number of manually created ground truths is necessarily limited, we also compared the segmentation results between two adjacent time points. Across all cells and all cell cycles, the average deviation of the 25 cell shape features was 4.3%, smaller than that between the automated segmentation result and the ground truth. BCOMS automated the accurate extraction of cell shapes in developing C. elegans embryos. By replacing image processing parameters with easily adjustable biological constraints, BCOMS provides a user-friendly framework. The framework is also applicable to other model organisms. Creating the biological constraints is a
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
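The ordinal fit that OCLO maximizes is based on Kendall's τ, which counts concordant versus discordant pairs; a minimal sketch of the τ-a variant (which, unlike τ-b, does not adjust for ties) is:

```python
def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant pairs - discordant pairs) / total pairs.
    A pair (i, j) is concordant when x and y order it the same way,
    discordant when they order it oppositely; ties contribute zero."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            s += (prod > 0) - (prod < 0)
    return 2.0 * s / (n * (n - 1))

print(kendall_tau_a([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0: identical ordering
print(kendall_tau_a([1, 2, 3, 4], [40, 30, 20, 10]))  # -1.0: reversed ordering
```

Because τ depends only on pairwise orderings, a model that maximizes it is insensitive to monotone distortions of the outcome scale, which is the property OCLO exploits when extreme scores are present.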
Directory of Open Access Journals (Sweden)
Kai Yang
2016-01-01
Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected-value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
Multi-Objective Design Optimization of an Over-Constrained Flexure-Based Amplifier
Directory of Open Access Journals (Sweden)
Yuan Ni
2015-07-01
Full Text Available The design optimization of a micro-manipulator for enhanced micro-scale performance, based on analytical models, is investigated in this paper. By utilizing the established uncanonical linear homogeneous equations, the quasi-static analytical model of the micro-manipulator is built, and the theoretical calculation results are verified by FEA simulations. To provide a theoretical basis for a micro-manipulator used in high-precision engineering applications, this paper investigates the modal properties based on the analytical model. A finite element model with multipoint constraint equations is also built, and its results match the simulation well. The parametric studies that follow show that the mutual influences among the design objectives are complicated. Consequently, multi-objective optimization based on the derived analytical models is carried out to find the optimal solutions for the manipulator. Besides, the inner relationships among these design objectives during the optimization process are discussed.
Directory of Open Access Journals (Sweden)
Haodong Yuan
2017-01-01
Full Text Available A novel bearing fault diagnosis method based on improved locality-constrained linear coding (LLC) and adaptive PSO-optimized support vector machine (SVM) is proposed. In traditional LLC, each feature is encoded using a fixed number of bases, without considering the distribution of the features or the weight of the bases. To address these problems, an improved LLC algorithm based on adaptive and weighted bases is proposed. Firstly, preliminary features are obtained from wavelet packet node energy. Then, dictionary learning with the class-wise K-SVD algorithm is implemented. Subsequently, based on the learned dictionary, the LLC codes are solved using the improved LLC algorithm. Finally, an SVM optimized by adaptive particle swarm optimization (PSO) is utilized to classify the discriminative LLC codes, and thus bearing fault diagnosis is realized. In the dictionary learning stage, other methods, such as selecting the samples themselves as the dictionary and K-means, are also conducted for comparison. The experimental results show that the LLC codes can effectively extract the bearing fault characteristics and that the improved LLC outperforms traditional LLC. The dictionary learned by class-wise K-SVD achieves the best performance. Additionally, the adaptive PSO-optimized SVM greatly enhances classification accuracy compared with SVM using default parameters and linear SVM.
Directory of Open Access Journals (Sweden)
Chang-chun Dong
2016-11-01
Full Text Available The traditional foundry industry has developed rapidly in recent years due to advancements in computer technology. Modifying and designing the feeding system has become more convenient with the help of the casting software InteCAST. A common method of designing a feeding system is to first design the initial system, run simulations with casting software, analyze the feedback, and then redesign. In this work, genetic, fruit fly, and interior point optimizer (IPOPT) algorithms were introduced to guide the optimal riser design for the feeding system. The results calculated by the three optimization algorithms indicate that the riser volume has a weak relationship with the modulus constraint but a close relationship with the volume constraint. In terms of convergence rate, the fruit fly algorithm was clearly faster than the genetic algorithm. The optimized riser was also applied during casting and was simulated using InteCAST. The numerical simulation results reveal that, with the same riser volume, the risers optimized by the genetic and fruit fly algorithms yield a similar improvement in casting shrinkage. The IPOPT algorithm has the advantage of causing the smallest shrinkage porosities, compared to those of the genetic and fruit fly algorithms, which were almost the same.
Nguyen, Ngoc Anh
2015-01-01
Piecewise affine (PWA) feedback control laws have received significant attention due to their relevance for the control of constrained systems and hybrid systems, and equally for the approximation of nonlinear control. However, they are associated with serious implementation issues. Motivated by the interest in this class of particular controllers, this thesis is mostly related to their analysis and design. The first part of this thesis aims to compute the robustness and fragility margins for a giv...
Directory of Open Access Journals (Sweden)
Kiran Teeparthi
2017-04-01
Full Text Available In this paper, a new low-level teamwork heterogeneous hybrid particle swarm optimization and artificial physics optimization (HPSO-APO) algorithm is proposed to solve the multi-objective security-constrained optimal power flow (MO-SCOPF) problem. Driven by environmental and total production cost concerns, wind energy is penetrating the main grid at high levels. The total production cost, active power losses, and a security index are considered as the objective functions. These are simultaneously optimized using the proposed algorithm for the base case and contingency cases. Though the PSO algorithm exhibits good convergence characteristics, it fails to give a near-optimal solution. On the other hand, the APO algorithm shows the capability of improving diversity in the search space and of reaching a near-global optimum point, but it is prone to premature convergence. The proposed hybrid HPSO-APO algorithm combines the strengths of both individual algorithms to balance global and local search capability, with the APO component improving diversity in the search space of the PSO algorithm. The hybrid optimization algorithm is employed to alleviate line overloads by generator rescheduling during contingencies. The standard IEEE 30-bus and the practical Indian 75-bus test systems are considered to evaluate the robustness of the proposed method. The simulation results reveal that the proposed HPSO-APO method is more efficient and robust than the standard PSO and APO methods in terms of obtaining diverse Pareto-optimal solutions. Hence, the proposed hybrid method can be used for large interconnected power systems to solve the MO-SCOPF problem with integration of wind and thermal generators.
Directory of Open Access Journals (Sweden)
C. G. Shi
2015-04-01
Full Text Available In this paper, the problem of low probability of identification (LPID) improvement for radar network systems is investigated. Firstly, the security information is derived to evaluate the LPID performance of a radar network. Then, without any prior knowledge of the hostile intercept receiver, a novel fuzzy chance-constrained programming (FCCP)-based security information optimization scheme is presented to achieve enhanced LPID performance in radar network systems; it focuses on minimizing the achievable mutual information (MI) at the interceptor, while the attainable MI outage probability at the radar network is enforced to be greater than a specified confidence level. Given the complexity and uncertainty of the electromagnetic environment on the modern battlefield, a trapezoidal fuzzy number is used to describe the threshold of achievable MI at the radar network, based on credibility theory. Finally, the FCCP model is transformed into a crisp equivalent form using the properties of trapezoidal fuzzy numbers. Numerical simulation results demonstrating the performance of the proposed strategy are provided.
Robust optimization methods for chance constrained, simulation-based, and bilevel problems
Yanikoglu, I.
2014-01-01
The objective of robust optimization is to find solutions that are immune to the uncertainty of the parameters in a mathematical optimization problem. It requires that the constraints of a given problem be satisfied for all realizations of the uncertain parameters in a so-called uncertainty set.
Algorithm Solves Constrained and Unconstrained Optimization Problems
Denson, M. A.
1985-01-01
A quasi-Newton iteration utilizing the Broyden/Fletcher/Goldfarb/Shanno (BFGS) update on the inverse Hessian matrix. It is capable of solving constrained-optimization, unconstrained-optimization, and constraints-only problems with one to five independent variables, one to five constraint functions, and one dependent function to be optimized.
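The BFGS update on the inverse Hessian that such quasi-Newton programs use has the standard closed form H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T with rho = 1/(y^T s); the following pure-Python sketch is illustrative, not the program's actual code:

```python
def bfgs_inverse_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H, given the step
    s = x_new - x_old and the gradient difference y = g_new - g_old."""
    n = len(s)
    rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    # A = I - rho * s y^T,  B = I - rho * y s^T
    A = [[(i == j) - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
    B = [[(i == j) - rho * y[i] * s[j] for j in range(n)] for i in range(n)]
    H1 = matmul(matmul(A, H), B)
    return [[H1[i][j] + rho * s[i] * s[j] for j in range(n)] for i in range(n)]

# The update enforces the secant condition H+ y = s exactly:
H = [[1.0, 0.0], [0.0, 1.0]]
s, y = [1.0, 0.5], [2.0, 1.0]
Hn = bfgs_inverse_update(H, s, y)
Hy = [sum(Hn[i][j] * y[j] for j in range(2)) for i in range(2)]
print([round(v, 10) for v in Hy])  # [1.0, 0.5], i.e. equal to s
```

Maintaining the inverse approximation directly, as this program does, avoids solving a linear system at every iteration: the search direction is simply -H times the gradient.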
Lambrakos, S. G.; Milewski, J. O.
1998-12-01
An analysis of the weld morphology that typically occurs in deep-penetration welding processes using electron or laser beams is presented. The method of analysis is based on geometric constraints with a formal mathematical foundation within the theory of constrained parameter optimization. The analysis presented in this report serves as an example of the application of the geometric-constraints method to the analysis of weld fusion boundary morphology, where there can be fragmented and incomplete information concerning material properties and only approximate information concerning the character of energy deposition, making a direct first-principles approach difficult. A significant aspect of the geometric-constraints method is that it permits the implicit representation of information concerning the temperature dependence of material properties and the coupling between heat transfer and fluid convection occurring in the weld meltpool.
Xu, Y; Li, N
2014-09-01
Biological species have produced many simple but efficient rules in their complex and critical survival activities such as hunting and mating. A common feature observed in several biological motion strategies is that the predator only moves along paths in a carefully selected or iteratively refined subspace (or manifold), which might be able to explain why these motion strategies are effective. In this paper, a unified linear algebraic formulation representing such a predator-prey relationship is developed to simplify the construction and refinement process of the subspace (or manifold). Specifically, the following three motion strategies are studied and modified: motion camouflage, constant absolute target direction and local pursuit. The framework constructed based on this varying subspace concept could significantly reduce the computational cost in solving a class of nonlinear constrained optimal trajectory planning problems, particularly for the case with severe constraints. Two non-trivial examples, a ground robot and a hypersonic aircraft trajectory optimization problem, are used to show the capabilities of the algorithms in this new computational framework.
Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization
Directory of Open Access Journals (Sweden)
Maciej Malawski
2015-01-01
Full Text Available This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from the synthetic workflows and from general purpose cloud benchmarks, as well as from the data measured in our own experiments with Montage, an astronomical application, executed on Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
Constrained Optimization in Simulation : A Novel Approach
Kleijnen, J.P.C.; van Beers, W.C.M.; van Nieuwenhuyse, I.
2008-01-01
This paper presents a novel heuristic for constrained optimization of random computer simulation models, in which one of the simulation outputs is selected as the objective to be minimized while the other outputs need to satisfy prespecified target values. Besides the simulation outputs, the
Energy Technology Data Exchange (ETDEWEB)
Wei, J [City College of New York, New York, NY (United States); Chao, M [The Mount Sinai Medical Center, New York, NY (United States)
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, inherent in the dynamic properties of diaphragm motion were integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters, which could then be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraint stipulating the kinetic range of the motion and the spatial constraint preventing unphysical deviations enabled the optimal contour of the diaphragm to be obtained with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. This submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach to track diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution to rendering diaphragm motion and ultimately
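Because the parabolic model is linear in its coefficients, each per-projection fit reduces to a small linear least-squares solve. The following is a hypothetical unconstrained sketch of that core step, omitting the abstract's kinetic and spatial constraints:

```python
import numpy as np

def fit_parabola(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c to boundary points.

    The model is linear in (a, b, c), so each projection reduces
    to solving a 3-parameter linear system.
    """
    X = np.column_stack([xs**2, xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return coef

# five seed-like sample points lying on y = 0.5x^2 - x + 3
xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ys = 0.5 * xs**2 - 1.0 * xs + 3.0
a, b, c = fit_parabola(xs, ys)
print(round(a, 6), round(b, 6), round(c, 6))  # 0.5 -1.0 3.0
```

The constrained version would additionally restrict (a, b, c) to the feasible region implied by the previous frames.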
Mixed-Strategy Chance Constrained Optimal Control
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
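The "mix up to two deterministic control actions" result can be illustrated with a toy two-action example. The numbers are hypothetical, and the paper's actual algorithm is an iterative dual optimization rather than this closed form:

```python
def mix_two_actions(cost_a, risk_a, cost_b, risk_b, delta):
    """Mix a risky-but-cheap action A with a safe-but-costly action B.

    When risk_b <= delta <= risk_a, executing A with probability p and
    B otherwise meets the chance constraint E[risk] <= delta with
    equality while lowering the expected cost below always playing B.
    """
    p = (delta - risk_b) / (risk_a - risk_b)   # E[risk] == delta
    expected_cost = p * cost_a + (1 - p) * cost_b
    return p, expected_cost

# A: cost 1, 20% violation risk; B: cost 3, zero risk; allowed risk 10%
p, cost = mix_two_actions(1.0, 0.2, 3.0, 0.0, 0.1)
print(p, cost)  # 0.5 2.0
```

The deterministic choices cost 3.0 (always B) or violate the risk bound (always A); the mixed strategy achieves 2.0, which is the convexification effect the abstract describes.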
Constrained Graph Optimization: Interdiction and Preservation Problems
Energy Technology Data Exchange (ETDEWEB)
Schild, Aaron V [Los Alamos National Laboratory
2012-07-30
The maximum flow, shortest path, and maximum matching problems are a set of basic graph problems that are critical in theoretical computer science and applications. Constrained graph optimization, a variation of these basic graph problems involving modification of the underlying graph, is equally important but sometimes significantly harder. In particular, one can explore these optimization problems with additional cost constraints. In the preservation case, the optimizer has a budget to preserve vertices or edges of a graph, preventing them from being deleted. The optimizer wants to find the best set of preserved edges/vertices in which the cost constraints are satisfied and the basic graph problems are optimized. For example, in shortest path preservation, the optimizer wants to find a set of edges/vertices within which the shortest path between two predetermined points is smallest. In interdiction problems, one deletes vertices or edges from the graph with a particular cost in order to impede the basic graph problems as much as possible (for example, delete edges/vertices to maximize the shortest path between two predetermined vertices). Applications of preservation problems include optimal road maintenance, power grid maintenance, and job scheduling, while interdiction problems are related to drug trafficking prevention, network stability assessment, and counterterrorism. Computational hardness results are presented, along with heuristic methods for approximating solutions to the matching interdiction problem. Also, efficient algorithms are presented for special cases of graphs, including planar graphs. The graphs in many of the listed applications are planar, so these algorithms have important practical implications.
Constrained Optimization and Optimal Control for Partial Differential Equations
Leugering, Günter; Griewank, Andreas
2012-01-01
This special volume focuses on optimization and control of processes governed by partial differential equations. The contributors are mostly participants of the DFG priority program 1253: Optimization with PDE-constraints, which has been active since 2006. The book is organized in sections which cover almost the entire spectrum of modern research in this emerging field. Indeed, even though the field of optimal control and optimization for PDE-constrained problems has undergone a dramatic increase of interest during the last four decades, a full theory for nonlinear problems is still lacking. The cont
A Fractional Trust Region Method for Linear Equality Constrained Optimization
Directory of Open Access Journals (Sweden)
Honglan Zhu
2016-01-01
Full Text Available A quasi-Newton trust region method with a new fractional model for linearly constrained optimization problems is proposed. We eliminate the linear equality constraints by using a null space technique. The fractional trust region subproblem is solved by a simple dogleg method. The global convergence of the proposed algorithm is established and proved. Numerical results for test problems show the efficiency of the trust region method with the new fractional model. These results provide a basis for further research on nonlinear optimization.
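For reference, a generic dogleg step for the classical quadratic trust-region subproblem looks as follows. The abstract's subproblem uses a fractional model instead, so this sketch only illustrates the dogleg idea itself:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Dogleg step for min g@s + 0.5*s@B@s subject to ||s|| <= delta,
    combining the Cauchy (steepest-descent) point and the Newton point."""
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                      # full Newton step fits
    p_cauchy = -(g @ g) / (g @ B @ g) * g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)  # clipped steepest descent
    # walk from the Cauchy point toward the Newton point to the boundary
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * p_cauchy @ d, p_cauchy @ p_cauchy - delta**2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + t * d

g, B = np.array([1.0, 0.0]), np.eye(2)
print(dogleg_step(g, B, 2.0))  # the Newton step [-1, 0] fits in the radius
```

The appeal of the dogleg path is that the subproblem is solved approximately with only one linear solve and a scalar quadratic.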
A Projection Neural Network for Constrained Quadratic Minimax Optimization.
Liu, Qingshan; Wang, Jun
2015-11-01
This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexity: its number of state variables equals the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
Adaptive double chain quantum genetic algorithm for constrained optimization problems
Directory of Open Access Journals (Sweden)
Haipeng Kong
2015-02-01
Full Text Available Optimization problems are often highly constrained, and evolutionary algorithms (EAs are effective methods to tackle this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA for solving constrained optimization problems. ADCQGA makes use of double-individuals to represent solutions that are classified as feasible and infeasible solutions. Fitness (or evaluation functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE are defined and utilized for judging evolutionary individuals. An adaptive rotation is proposed and used to facilitate updating individuals in different solutions. To further improve the search capability and convergence rate, ADCQGA utilizes an adaptive evolution process (AEP, adaptive mutation and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. Then the proposed ADCQGA is successfully applied to solve twelve other benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient solution methods. Finally, ADCQGA is successfully applied to solving the target allocation problem.
Stochastic optimal control of state constrained systems
van den Broek, Bart; Wiegerinck, Wim; Kappen, Bert
2011-03-01
In this article we consider the problem of stochastic optimal control in continuous-time and state-action space of systems with state constraints. These systems typically appear in the area of robotics, where hard obstacles constrain the state space of the robot. A common approach is to solve the problem locally using a linear-quadratic Gaussian (LQG) method. We take a different approach and apply path integral control as introduced by Kappen (Kappen, H.J. (2005a), 'Path Integrals and Symmetry Breaking for Optimal Control Theory', Journal of Statistical Mechanics: Theory and Experiment, 2005, P11011; Kappen, H.J. (2005b), 'Linear Theory for Control of Nonlinear Stochastic Systems', Physical Review Letters, 95, 200201). We use hybrid Monte Carlo sampling to infer the control. We introduce an adaptive time discretisation scheme for the simulation of the controlled dynamics. We demonstrate our approach on two examples, a simple particle in a halfspace and a more complex two-joint manipulator, and we show that in a high noise regime our approach outperforms the iterative LQG method.
A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization
Directory of Open Access Journals (Sweden)
Zhijun Luo
2014-01-01
Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
Regularized Primal-Dual Subgradient Method for Distributed Constrained Optimization.
Yuan, Deming; Ho, Daniel W C; Xu, Shengyuan
2016-09-01
In this paper, we study the distributed constrained optimization problem where the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to the existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for our proposed method. We establish the convergence of the method by showing that it achieves an O(K^(-1/4)) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
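A stripped-down consensus subgradient loop conveys the basic mechanism. This sketch is unconstrained and omits the paper's regularized primal-dual and single-projection machinery; the mixing matrix and local costs are hypothetical:

```python
import numpy as np

def distributed_subgradient(grads, W, steps, x0):
    """Each node averages with its neighbours via the doubly stochastic
    matrix W, then descends along its local (sub)gradient with a
    diminishing step size."""
    x = np.full(len(grads), x0)                # one scalar estimate per node
    for k in range(1, steps + 1):
        x = W @ x                              # consensus averaging
        alpha = 1.0 / np.sqrt(k)               # diminishing step
        x = x - alpha * np.array([g(xi) for g, xi in zip(grads, x)])
    return x.mean()

# three nodes minimize sum_i (x - c_i)^2; the optimum is mean(c) = 1
grads = [lambda x, c=c: 2.0 * (x - c) for c in (0.0, 1.0, 2.0)]
W = np.full((3, 3), 1.0 / 3.0)                 # fully connected averaging
xstar = distributed_subgradient(grads, W, 2000, 5.0)
print(round(xstar, 3))  # 1.0, the consensus optimum
```

Each node only ever sees its own cost, yet the network converges to the minimizer of the sum.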
A model for optimal constrained adaptive testing
van der Linden, Willem J.; Reese, Lynda M.
2001-01-01
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
Total energy control system autopilot design with constrained parameter optimization
Ly, Uy-Loi; Voth, Christopher
1990-01-01
A description is given of the application of a multivariable control design method (SANDY) based on constrained parameter optimization to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the direct synthesis of a multiloop AFCS inner-loop feedback control system based on total energy control system (TECS) principles. The design procedure offers a structured approach for the determination of a set of stabilizing controller design gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The approach can be extended to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by proper formulation of the design objectives and constraints. Satisfactory designs are usually obtained in a few iterations. Performance characteristics of the optimized TECS design have been improved, particularly in the areas of closed-loop damping and control activity in the presence of turbulence.
Robust Constrained Blackbox Optimization with Surrogates
2015-05-21
Energetic Materials Optimization via Constrained Search
2015-06-01
Nguyen, Q. H.; Choi, S. B.
2012-01-01
This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid-type, and T-shaped type are considered. The optimization problem is to find the optimal values of the significant geometric dimensions of the MRB that produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel-Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained using the proposed optimization procedure, and the optimal selection of MRB types for different constrained volumes is discussed.
Optimal Control Strategies for Constrained Relative Orbits
National Research Council Canada - National Science Library
Irvin, Jr., David J.
2007-01-01
.... This research finds optimal trajectories, produced with discrete thrusts, that minimize fuel spent per unit time and stay within the user-defined volume, thus providing a practical hover capability...
Security constrained optimal power flow by modern optimization tools
African Journals Online (AJOL)
... of each contingency using flower pollination algorithms (FPA) as a new trend. Case studies based on IEEE 30 bus system show that the discussed techniques are advantageous and can guarantee operational reliability and economy. Keywords: Optimal power flow, security, genetic algorithm, flower pollination algorithm ...
Fast optimization of statistical potentials for structurally constrained phylogenetic models
Directory of Open Access Journals (Sweden)
Rodrigue Nicolas
2009-09-01
Full Text Available Abstract Background Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled with fast gradient descent algorithms. We find that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure. Conclusion Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.
Constrained Optimization of MIMO Training Sequences
Directory of Open Access Journals (Sweden)
Magnus Sandell
2007-01-01
Full Text Available Multiple-input multiple-output (MIMO systems have shown a huge potential for increased spectral efficiency and throughput. With an increasing number of transmitting antennas comes the burden of providing training for channel estimation for coherent detection. In some special cases, training sequences that are optimal in the mean-squared error (MSE sense have been designed. However, in many practical systems it is not feasible to analytically find optimal solutions and numerical techniques must be used. In this paper, two systems (unique word (UW single carrier and OFDM with nulled subcarriers are considered and a method of designing near-optimal training sequences using nonlinear optimization techniques is proposed. In particular, interior-point (IP algorithms such as the barrier method are discussed. Although the two systems seem unrelated, the cost function, which is the MSE of the channel estimate, is shown to be effectively the same for each scenario. Also, additional constraints, such as peak-to-average power ratio (PAPR, are considered and shown to be easily included in the optimization process. Numerical examples illustrate the effectiveness of the designed training sequences, both in terms of MSE and bit-error rate (BER.
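The shared cost function, the MSE of the least-squares channel estimate, depends on the training matrix only through its Gram matrix. A toy sketch with hypothetical matrices shows why orthogonal equal-power training is the usual benchmark:

```python
import numpy as np

def channel_mse(S, noise_var=1.0):
    """MSE of least-squares channel estimation with training matrix S.

    For h_hat = (S^H S)^{-1} S^H y, the estimator covariance is
    noise_var * (S^H S)^{-1}, so MSE = noise_var * trace((S^H S)^{-1}).
    """
    G = S.conj().T @ S
    return noise_var * np.trace(np.linalg.inv(G)).real

# orthogonal vs. correlated training, same power per column
S_orth = np.array([[1.0, 1.0], [1.0, -1.0]])
S_corr = np.array([[1.4, 1.4], [0.2, -0.2]])   # hypothetical correlated design
print(channel_mse(S_orth), "<", channel_mse(S_corr))  # orthogonal wins
```

The numerical designs in the paper minimize this trace subject to extra constraints (such as PAPR) that make closed-form orthogonal solutions unavailable.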
Effective Teaching of Economics: A Constrained Optimization Problem?
Hultberg, Patrik T.; Calonge, David Santandreu
2017-01-01
One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…
A Collective Neurodynamic Approach to Constrained Global Optimization.
Yan, Zheng; Fan, Jianchao; Wang, Jun
2017-05-01
Global optimization is a long-lasting research topic in the field of optimization, posing many challenging theoretic and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. At first, a one-layer recurrent neural network (RNN) is presented for searching the Karush-Kuhn-Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information of each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated by using benchmark optimization problems.
An optimization algorithm inspired by musical composition in constrained optimization problems
Directory of Open Access Journals (Sweden)
Roman Anselmo Mora-Gutiérrez
2013-08-01
Full Text Available Many real-world problems can be expressed as an instance of the constrained nonlinear optimization problem (CNOP. This problem has a set of constraints that specifies the feasible solution space. In recent years several algorithms have been proposed and developed for tackling CNOP. In this paper, we present a cultural algorithm for constrained optimization, which is an adaptation of the "Musical Composition Method" or MCM, which was proposed in [33] by Mora et al. We evaluated and analyzed the performance of MCM on five benchmark test cases of the CNOP. Numerical results were compared to an evolutionary algorithm based on homomorphous mapping [23], an Artificial Immune System [9], and an anti-culture population algorithm [39]. The experimental results demonstrate that MCM significantly improves on the global performance of the other tested metaheuristics on some of the benchmark functions.
Jones, Keith
2010-01-01
The Regularized Fast Hartley Transform provides the reader with the tools necessary to both understand the proposed new formulation and to implement simple design variations that offer clear implementational advantages, both practical and theoretical, over more conventional complex-data solutions to the problem. The highly-parallel formulation described is shown to lead to scalable and device-independent solutions to the latency-constrained version of the problem which are able to optimize the use of the available silicon resources, and thus to maximize the achievable computational density, th
A New Interpolation Approach for Linearly Constrained Convex Optimization
Espinoza, Francisco
2012-08-01
In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
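For context, the classical Shepard formula that the thesis generalizes is plain inverse-distance weighting. This is the textbook form, not the thesis's generalization:

```python
import numpy as np

def shepard(x, points, values, p=2):
    """Shepard's inverse-distance-weighted interpolation.

    Weights fall off as distance^-p, and the surface interpolates the
    data exactly at the sample points."""
    d = np.linalg.norm(points - x, axis=1)
    if np.any(d == 0):                # exact interpolation at a sample point
        return values[np.argmin(d)]
    w = d ** (-p)
    return w @ values / w.sum()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])
print(shepard(np.array([1.0, 0.0]), pts, vals))  # 1.0, hits the node exactly
```

The interpolation property at the sample points is what the thesis exploits at the boundary of the feasible region.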
Neuroevolutionary Constrained Optimization for Content Creation
DEFF Research Database (Denmark)
Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian
2011-01-01
This paper presents a constraint-based procedural content generation (PCG) framework used for the creation of novel and high-performing content. Specifically, we examine the efficiency of the framework for the creation of spaceship design (hull shape and spaceship attributes such as weapon...... and survival tasks and are also visually appealing....
A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization.
Qin, Sitian; Feng, Jiqiang; Song, Jiahui; Wen, Xingnan; Xu, Chen
2016-12-22
In this paper, based on CR calculus and the penalty method, a one-layer recurrent neural network is proposed for solving constrained complex-variable convex optimization. It is proved that for any initial point from a given domain, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution of the constrained complex-variable convex optimization problem. In contrast to existing neural networks for complex-variable convex optimization, the proposed neural network has a lower model complexity and better convergence. Some numerical examples and an application are presented to substantiate the effectiveness of the proposed neural network.
Improved Differential Evolution with Shrinking Space Technique for Constrained Optimization
Fu, Chunming; Xu, Yadong; Jiang, Chao; Han, Xu; Huang, Zhiliang
2017-05-01
Most current evolutionary algorithms for constrained optimization suffer from low computational efficiency. In order to improve efficiency, an improved differential evolution with a shrinking space technique and an adaptive trade-off model, named ATMDE, is proposed to solve constrained optimization problems. The proposed ATMDE algorithm employs an improved differential evolution as the search optimizer to generate new offspring individuals for the evolutionary population. For the constraints, the adaptive trade-off model, one of the most important constraint-handling techniques, is employed to select better individuals to retain in the next population, which can effectively handle multiple constraints. The shrinking space technique is then designed to shrink the search region according to feedback information in order to improve computational efficiency without losing accuracy. The improved DE algorithm introduces three different mutant strategies to generate different offspring for the evolutionary population. Moreover, a new mutant strategy called "DE/rand/best/1" is constructed to generate new individuals according to the feasibility proportion of the current population. Finally, the effectiveness of the proposed method is verified on a suite of benchmark functions and practical engineering problems. This research presents a constrained evolutionary algorithm with high efficiency and accuracy for constrained optimization problems.
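The baseline DE/rand/1 mutant strategy that such variants build on can be sketched as follows. This is the standard textbook operator; the abstract's "DE/rand/best/1" additionally mixes in the population's best member depending on the feasibility proportion, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, F=0.5):
    """Classic DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct random indices different from i."""
    n = len(pop)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants

pop = rng.normal(size=(10, 3))       # 10 individuals, 3 decision variables
print(de_rand_1(pop).shape)          # (10, 3): one mutant per individual
```

Crossover and constraint-aware selection (here, the adaptive trade-off model) would then decide which mutants enter the next population.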
Robust integrated autopilot/autothrottle design using constrained parameter optimization
Ly, Uy-Loi; Voth, Christopher; Sanjay, Swamy
1990-01-01
A multivariable control design method based on constrained parameter optimization was applied to the design of a multiloop aircraft flight control system. Specifically, the design method is applied to the following: (1) direct synthesis of a multivariable 'inner-loop' feedback control system based on total energy control principles; (2) synthesis of speed/altitude-hold designs as 'outer-loop' feedback/feedforward control systems around the above inner loop; and (3) direct synthesis of a combined 'inner-loop' and 'outer-loop' multivariable control system. The design procedure offers a direct and structured approach for the determination of a set of controller gains that meet design specifications in closed-loop stability, command tracking performance, disturbance rejection, and limits on control activities. The presented approach may be applied to a broader class of multiloop flight control systems. Direct tradeoffs between many real design goals are rendered systematic by this method following careful problem formulation of the design objectives and constraints. Performance characteristics of the optimization design were improved over the current autopilot design on the B737-100 Transport Research Vehicle (TSRV) at the landing approach and cruise flight conditions; particularly in the areas of closed-loop damping, command responses, and control activity in the presence of turbulence.
Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel
Directory of Open Access Journals (Sweden)
Zhiwen Hu
2015-01-01
Full Text Available The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define a transformation function for each sensor that maps its local decision to a transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method significantly improves the detection performance of the fusion system at low signal-to-noise ratio (SNR). We also show that the proposed method maintains robust detection performance over a broad SNR region.
A Collective Neurodynamic Approach to Distributed Constrained Optimization.
Liu, Qingshan; Yang, Shaofu; Wang, Jun
2017-08-01
This paper presents a collective neurodynamic approach with multiple interconnected recurrent neural networks (RNNs) for distributed constrained optimization. The objective function of the distributed optimization problems to be solved is a sum of local convex objective functions, which may be nonsmooth. Subject to its local constraints, each local objective function is minimized individually by using an RNN, with consensus among others. In contrast to existing continuous-time distributed optimization methods, the proposed collective neurodynamic approach is capable of solving more general distributed optimization problems. Simulation results on three numerical examples are discussed to substantiate the effectiveness and characteristics of the proposed approach. In addition, an application to the optimal placement problem is delineated to demonstrate the viability of the approach.
The expanded Lagrangian system for constrained optimization problems
Poore, A. B.; Al-Hassan, Q.
1988-01-01
Smooth penalty functions can be combined with numerical continuation/bifurcation techniques to produce a class of robust and fast algorithms for constrained optimization problems. The key to the development of these algorithms is the Expanded Lagrangian System which is derived and analyzed in this work. This parameterized system of nonlinear equations contains the penalty path as a solution, provides a smooth homotopy into the first-order necessary conditions, and yields a global optimization technique. Furthermore, the inevitable ill-conditioning present in a sequential optimization algorithm is removed for three penalty methods: the quadratic penalty function for equality constraints, and the logarithmic barrier function (an interior method) and the quadratic loss function (an exterior method) for inequality constraints. Although these techniques apply to optimization in general and to linear and nonlinear programming, calculus of variations, optimal control and parameter identification in particular, the development is primarily within the context of nonlinear programming.
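The penalty path mentioned above can be made concrete on a toy problem: minimize (x − 2)² subject to x = 1 (equality) or x ≤ 1 (inequality). The penalized minimizers below have closed forms (our own illustrative example, not the paper's formulation). Both approach the constrained optimum x* = 1 as μ → 0, while the quadratic penalty's curvature 2 + 1/μ blows up, which is exactly the ill-conditioning the Expanded Lagrangian System is designed to remove:

```python
import math

def quad_penalty_argmin(mu):
    """Unique minimizer of (x - 2)^2 + (1/(2*mu)) * (x - 1)^2 (equality x = 1).
    Setting the derivative 2*(x - 2) + (x - 1)/mu to zero gives this x."""
    return (4.0 * mu + 1.0) / (2.0 * mu + 1.0)

def log_barrier_argmin(mu):
    """Minimizer of (x - 2)^2 - mu * log(1 - x) on x < 1 (inequality x <= 1).
    The stationarity condition 2*(x - 2) + mu/(1 - x) = 0 is quadratic in x."""
    return (3.0 - math.sqrt(1.0 + 2.0 * mu)) / 2.0

for mu in (1.0, 0.1, 0.01, 0.001):
    # penalty path: exterior iterates approach x* from x > 1, barrier from x < 1
    print(mu, quad_penalty_argmin(mu), log_barrier_argmin(mu))
```

The exterior (penalty) iterates violate the constraint and the interior (barrier) iterates satisfy it strictly, converging to the same point from opposite sides.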
Directory of Open Access Journals (Sweden)
Enrico Sciubba
2011-06-01
Full Text Available In this paper, the entropy generation minimization (EGM) method is applied to an industrial heat transfer problem: the forced convective cooling of a LED-based spotlight. The design specification calls for eighteen diodes arranged on a circular copper plate of 35 mm diameter. Every diode dissipates 3 W and the maximum allowed temperature of the plate is 80 °C. The cooling relies on the forced convection driven by a jet of air impinging on the plate. An initial complex geometry of plate fins is presented and analyzed with a commercial CFD code that computes the entropy generation rate. A pseudo-optimization process is carried out via a successive series of design modifications based on a careful analysis of the entropy generation maps. One of the advantages of the EGM method is that the rationale behind each step of the design process can be justified on a physical basis. It is found that the best performance is attained when the fins are periodically spaced in the radial direction.
Zhang, Yaxiong; Nie, Xianling
2017-06-08
The constrained background bilinearization (CBBL) method was applied for multivariate calibration analysis of a grey analytical system in high performance liquid chromatography (HPLC). By including the concentrations and the retention times of the analytes as variables simultaneously, the standard CBBL was modified for multivariate calibration of HPLC systems with poor retention precision. The CBBL was optimized globally by a genetic algorithm (GA); that is, both the concentrations and the retention times of the analytes were optimized globally and simultaneously by the GA. The modified CBBL was applied to calibration analysis for both simulated and experimental HPLC systems with poor retention precision. The experimental data were collected from an HPLC separation system for phenolic compounds. The modified CBBL was verified to overcome an inherent limitation of the standard CBBL, which may yield poor calibration results when the retention precision of the chromatography system is poor. Moreover, the modified CBBL gives not only the concentrations but also the retention times of the analytes, i.e., more useful information about the analytes. Subsequently, nearly ideal calibration results were obtained. Compared with the calibration results of classical rank annihilation factor analysis (RAFA) and the residual bilinearization (RBL) method, the results given by the modified CBBL were also improved significantly for the HPLC systems studied in this work.
Security constrained optimal power flow by modern optimization tools
African Journals Online (AJOL)
the individual of constant control factors structure for tackling the OPF issue with the smooth fuel cost of generator. ... 2.3 Dependent Variables: The uncontrolled variables of an optimal power flow which are free, within limits as the magnitude and ..... International Journal on Electrical Power and Energy Systems, Vol. 33, No.
Constrained optimal duct shapes for conjugate laminar forced convection
Energy Technology Data Exchange (ETDEWEB)
Fisher, T.S.; Torrance, K.E. [Cornell Univ., Sibley School of Mechanical and Aerospace Engineering, Ithaca, NY (United States)
2000-01-01
The complex variable boundary element method (CVBEM) is used to analyse conjugate heat transfer in solids with cooling passages of general, convex cross section. The method is well-suited to duct cross sections with high curvature and high aspect ratios because the whole-domain boundary integrals are path independent and analytic. The effects of channel boundary curvature on overall heat transfer are quantified for the first time. Shape-constrained optimal solutions involving fixed pressure drop and fixed pump work are presented. Increased channel boundary curvature is shown to decrease the optimal distance between parallel channels by increasing fin efficiency. (Author)
Stress-constrained topology optimization for compliant mechanism design
DEFF Research Database (Denmark)
de Leon, Daniel M.; Alexandersen, Joe; Jun, Jun S.
2015-01-01
This article presents an application of stress-constrained topology optimization to compliant mechanism design. An output displacement maximization formulation is used, together with the SIMP approach and a projection method to ensure convergence to nearly discrete designs. The maximum stress is approximated using a normalized version of the commonly-used p-norm of the effective von Mises stresses. The usual problems associated with topology optimization for compliant mechanism design, one-node and/or intermediate-density hinges, are alleviated by the stress constraint. However, it is also shown...
Steepest-Ascent Constrained Simultaneous Perturbation for Multiobjective Optimization
DEFF Research Database (Denmark)
McClary, Dan; Syrotiuk, Violet; Kulahci, Murat
2011-01-01
The simultaneous optimization of multiple responses in a dynamic system is challenging. When a response has a known gradient, it is often easily improved along the path of steepest ascent. On the contrary, a stochastic approximation technique may be used when the gradient is unknown or costly... SP(SA)² leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)² to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP...
Convex Relaxations of Chance Constrained AC Optimal Power Flow
DEFF Research Database (Denmark)
Venzke, Andreas; Halilbasic, Lejla; Markovic, Uros
2017-01-01
High penetration of renewable energy sources and the increasing share of stochastic loads require the explicit representation of uncertainty in tools such as the optimal power flow (OPF). Current approaches follow either a linearized approach or an iterative approximation of non-linearities. This paper proposes a semidefinite relaxation of a chance constrained AC-OPF which is able to provide guarantees for global optimality. Using a piecewise affine policy, we can ensure tractability, accurately model large power deviations, and determine suitable corrective control policies for active power... near-global optimality guarantees.
de Melo, Vinícius Veloso
2017-01-01
Several constrained optimization problems have been adequately solved over the years thanks to advances in the metaheuristics area. In this paper, we evaluate a novel self-adaptive and auto-constructive metaheuristic called Drone Squadron Optimization (DSO) in solving constrained engineering design problems. This paper evaluates DSO with death penalty on three widely tested engineering design problems. Results show that the proposed approach is competitive with some very popular metaheuristics.
Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization
Directory of Open Access Journals (Sweden)
Weishang Gao
2013-01-01
Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and improper penalties in EAs with penalty functions can cause the global optimum near or on the constraint boundary to be missed. Determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiagents guiding exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents. The scale and the density of agents change dynamically according to the emergence of potentially optimal areas, which plays an important role in flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and the forbidden region. This bidirectional evolution with multiagents not only effectively avoids the problem of determining the penalty coefficient but also converges quickly to the global optimum near or on the constraint boundary. By examining the rapidity and accuracy of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.
Flexible waveform-constrained optimization design method for cognitive radar
Zhang, Xiaowen; Wang, Kaizhi; Liu, Xingzhao
2017-07-01
The problem of waveform optimization design for cognitive radar (CR) in the presence of extended target with unknown target impulse response (TIR) is investigated. On the premise of ensuring the TIR estimation precision, a flexible waveform-constrained optimization design method taking both target detection and range resolution into account is proposed. In this method, both the estimate of TIR and transmitted waveform can be updated according to the environment information fed back by the receiver. Moreover, rather than optimizing waveforms for a single design criterion, the framework can synthesize waveforms that provide a trade-off between competing design criteria. The trade-off is determined by the parameter settings, which can be adjusted according to the requirement of radar performance in each cycle of CR. Simulation results demonstrate that CR with the proposed waveform performs better than a traditional radar system with a fixed waveform and offers more flexibility and practicability.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-12-01
PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Adaptive Multi-Agent Systems for Constrained Optimization
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of) the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing, that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queen's problem and K-SAT validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
Depletion mapping and constrained optimization to support managing groundwater extraction
Fienen, Michael N.; Bradbury, Kenneth R.; Kniffin, Maribeth; Barlow, Paul M.
2018-01-01
Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion potential mapping—showing the model-calculated potential impacts that wells have on stream baseflow—can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. Variables such as the maximum amount of reduction allowed in each well and various groupings of wells using, for example, K-means clustering considering spatial proximity and depletion potential are considered. These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States—home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices.
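The K-means grouping of wells mentioned above can be sketched with plain Lloyd's algorithm; the well coordinates and depletion-potential values below are invented for illustration and do not come from the Little Plover River model:

```python
def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm; each point is an (x_km, y_km, depletion) tuple,
    so clusters reflect both spatial proximity and depletion potential."""
    centers = [list(c) for c in centers]
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:   # empty clusters keep their previous center
                centers[j] = [sum(dim) / len(cl) for dim in zip(*cl)]
    return centers, clusters

# hypothetical wells: two spatial groups with high/low depletion potential
wells = [(0.0, 0.0, 0.90), (0.1, 0.2, 0.80), (0.2, 0.1, 0.85),
         (5.0, 5.0, 0.10), (5.2, 4.9, 0.15), (4.9, 5.1, 0.20)]
centers, clusters = kmeans(wells, [wells[0], wells[3]])
```

Pumping reductions can then be assigned per cluster rather than per well, one way of spreading responsibility among similar wells as described in the abstract.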
Asynchronous parallel generating set search for linearly-constrained optimization.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara G.; Griffin, Joshua; Lewis, Robert Michael
2007-04-01
We describe an asynchronous parallel derivative-free algorithm for linearly-constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases which allow the iterations to continue (and new trial points to be generated and evaluated) as soon as any other trial point completes. This enables better utilization of parallel resources and a reduction in overall runtime, especially for problems where the objective function takes minutes or hours to compute. For linearly-constrained GSS, the convergence theory requires that the set of search directions conform to the nearby boundary. The complexity of developing the asynchronous algorithm for the linearly-constrained case has to do with maintaining a suitable set of search directions as the search progresses and is the focus of this research. We describe our implementation in detail, including how to avoid function evaluations by caching function values and using approximate look-ups. We test our implementation on every CUTEr test problem with general linear constraints and up to 1000 variables. Without tuning to individual problems, our implementation was able to solve 95% of the test problems with 10 or fewer variables, 75% of the problems with 11-100 variables, and nearly half of the problems with 100-1000 variables. To the best of our knowledge, these are the best results that have ever been achieved with a derivative-free method. Our asynchronous parallel implementation is freely available as part of the APPSPACK software.
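The caching of function values described in the abstract is easy to demonstrate on a serial compass-search variant of GSS (a simplified sketch without the asynchronous machinery or linear constraints; problem and tolerances are our own):

```python
def gss_minimize(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Serial generating set search over the coordinate directions +/- e_i,
    with a cache so repeated trial points never trigger re-evaluation."""
    cache = {}
    def ev(x):
        key = tuple(x)
        if key not in cache:
            cache[key] = f(x)
        return cache[key]
    x, n = list(x0), len(x0)
    fx = ev(x)
    dirs = []
    for i in range(n):
        for s in (1.0, -1.0):
            d = [0.0] * n
            d[i] = s
            dirs.append(d)
    while step > tol and len(cache) < max_evals:
        for d in dirs:
            trial = [xi + step * di for xi, di in zip(x, d)]
            ft = ev(trial)
            if ft < fx:           # accept the first improving trial point
                x, fx = trial, ft
                break
        else:                     # no direction improved: contract the step
            step *= 0.5
    return x, fx, len(cache)

x, fx, evals = gss_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                            [0.0, 0.0])
```

On this smooth quadratic the search walks to (1, −2) and then contracts the step to the tolerance; the cache keeps the total number of true evaluations small even though the same neighbors are probed repeatedly.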
Active flutter control using discrete optimal constrained dynamic compensators
Broussard, J. R.; Halyo, N.
1983-01-01
A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A recently developed convergent algorithm is employed to determine constrained control law parameters that minimize an infinite-time discrete quadratic performance index. Low-order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Results from investigations into sample rate variation, prefilter pole variation, and effects of varying flight conditions are discussed. The study indicates that a digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.
Directory of Open Access Journals (Sweden)
Minggang Dong; Ning Wang; Xiaohui Cheng; Chuanxian Jiang
2014-01-01
Full Text Available Motivated by recent advancements in differential evolution and constraint handling methods, this paper presents a novel modified oracle penalty function-based composite differential evolution (MOCoDE) for constrained optimization problems (COPs). More specifically, the original oracle penalty function approach is modified so as to satisfy the optimization criterion of COPs; then the modified oracle penalty function is incorporated in composite DE. Furthermore, in order to solve more complex COPs with discrete, integer, or binary variables, a discrete variable handling technique is introduced into MOCoDE to solve complex COPs with mixed variables. The method is assessed on eleven constrained optimization benchmark functions and seven well-studied real-life engineering problems. Experimental results demonstrate that MOCoDE achieves competitive performance with respect to some other state-of-the-art approaches in constrained optimization evolutionary algorithms. Moreover, the strengths of the proposed method include its few parameters and its ease of implementation, rendering it applicable to real-life problems. Therefore, MOCoDE can be an efficient alternative for solving constrained optimization problems.
Constrained Multi-Level Algorithm for Trajectory Optimization
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and gradually atmospheric effects are introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve with direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and also different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in...
Distributed Stochastic Approximation for Constrained and Unconstrained Optimization
Bianchi, Pascal
2011-01-01
In this paper, we analyze the convergence of a distributed Robbins-Monro algorithm for both constrained and unconstrained optimization in multi-agent systems. The algorithm searches local minima of a (nonconvex) objective function which is supposed to coincide with a sum of local utility functions of the agents. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. It is proved that i) an agreement is achieved between agents on the value of the estimate, ii) the algorithm converges to the set of Kuhn-Tucker points of the optimization problem. The proof relies on recent results about differential inclusions. In the context of unconstrained optimization, intelligible sufficient conditions are provided in order to ensure the stability of the algorithm. In the latter case, we also provide a central limit theorem which governs the asymptotic fluctuations of the estimate. We illustrate our results in the...
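A stripped-down (fully synchronous, deterministic-gradient) sketch of the two-step scheme above, with quadratic local utilities whose sum is minimized at the mean of the local targets; the targets and step size are invented for illustration:

```python
def distributed_gd(targets, steps=500, lr=0.05):
    """Each agent i holds an estimate x_i and a local utility
    f_i(x) = (x - targets[i])^2.
    Step 1: local gradient descent on f_i.
    Step 2: gossip; here full averaging, the simplest consensus matrix."""
    n = len(targets)
    x = [0.0] * n
    for _ in range(steps):
        x = [xi - lr * 2.0 * (xi - c) for xi, c in zip(x, targets)]  # local step
        avg = sum(x) / n                                             # gossip step
        x = [avg] * n
    return x

x = distributed_gd([0.0, 3.0, 6.0])
```

All agents agree on x ≈ 3, the minimizer of Σᵢ (x − cᵢ)²; with a sparse but connected, doubly stochastic gossip matrix the same limit is reached, only more slowly.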
Block-triangular preconditioners for PDE-constrained optimization
Rees, Tyrone
2010-11-26
In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show that when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix, the main drawback of the Bramble-Pasciak method, the appropriate scaling of the preconditioners, is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.
A constraint consensus memetic algorithm for solving constrained optimization problems
Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.
2014-11-01
Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.
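For linear inequality constraints a·x ≤ b, a constraint consensus step can be sketched as follows (a simplified, Chinneck-style version with a toy system of our own; the article couples such moves with genetic and memetic algorithms):

```python
def constraint_consensus(x, constraints, tol=1e-6, max_iter=100):
    """Move an infeasible point toward feasibility: for each violated
    constraint a.x <= b, compute the feasibility vector that projects x
    onto its boundary, then step by the average of those vectors."""
    x = list(x)
    for _ in range(max_iter):
        fvs = []
        for a, b in constraints:
            g = sum(ai * xi for ai, xi in zip(a, x)) - b   # violation if > 0
            if g > tol:
                nrm2 = sum(ai * ai for ai in a)
                fvs.append([-g * ai / nrm2 for ai in a])
        if not fvs:
            return x, True          # feasible within tol
        consensus = [sum(v[k] for v in fvs) / len(fvs) for k in range(len(x))]
        x = [xi + ck for xi, ck in zip(x, consensus)]
    return x, False

x, ok = constraint_consensus([3.0, 3.0],
                             [((1.0, 1.0), 1.0),    # x0 + x1 <= 1
                              ((1.0, 0.0), 0.5)])   # x0      <= 0.5
```

Applied inside an EA, such moves pull infeasible individuals toward the feasible region during the search itself, rather than only biasing the selection step.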
Neural network for constrained nonsmooth optimization using Tikhonov regularization.
Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun
2015-03-01
This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. First, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality-feasible region in finite time and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. Finally, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
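The effect of Tikhonov regularization on a nonsmooth convex objective can be seen on a one-dimensional toy example (ours, not the paper's network model): f(x) = |x − 1| + |x + 1| is minimized by every x in [−1, 1], while f(x) + ε·x² is strongly convex with the unique minimizer 0, the minimum-norm optimal solution, for any ε > 0:

```python
def f(x):
    # nonsmooth convex; every x in [-1, 1] attains the minimum value 2
    return abs(x - 1.0) + abs(x + 1.0)

def tikhonov_argmin(eps, xs):
    """Grid minimizer of the strongly convex regularization f(x) + eps*x^2."""
    return min(xs, key=lambda x: f(x) + eps * x * x)

xs = [i / 1000.0 for i in range(-3000, 3001)]
sol = tikhonov_argmin(0.1, xs)
```

The regularized minimizer is unique for every ε and stays at the minimum-norm solution of the original problem as ε shrinks, which is the approximation property the abstract refers to.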
A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.
Yan, Zheng; Wang, Jun; Li, Guocheng
2014-07-01
This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by utilizing a group of recurrent neural networks in the framework of particle swarm optimization by emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving the solution quality of each recurrent neural network using the information of the locally best known solution and the globally best known solution, the group can obtain the global optimal solution to a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated using many multimodal benchmark functions. Copyright © 2014 Elsevier Ltd. All rights reserved.
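The brainstorming paradigm above, precise local search plus exchange of the best known solution, can be caricatured in a few lines; plain gradient descent stands in for the recurrent neural networks, on a two-basin objective f(x) = (x² − 1)² + 0.3x whose global minimum lies near x ≈ −1.04 (a toy setup of our own, not the paper's model):

```python
import random

def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def local_descent(x, lr=0.01, steps=200):
    """Stand-in for one network's precise constrained local search."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def collective_search(starts, rounds=5, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best = min((local_descent(x) for x in starts), key=f)
    for _ in range(rounds):
        # every searcher restarts near the globally best known solution
        cand = min((local_descent(best + rng.gauss(0.0, sigma))
                    for _ in starts), key=f)
        if f(cand) < f(best):
            best = cand
    return best

best = collective_search([2.0, 1.5, -2.0])
```

Searchers started in the shallow basin near x = +1 are pulled to the deep basin once any member of the group finds it, the essential mechanism the abstract describes.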
Lee, Cheng-Ling; Lee, Ray-Kuang; Kao, Yee-Mou
2006-11-13
We present the synthesis of multi-channel fiber Bragg grating (MCFBG) filters for dense wavelength-division-multiplexing (DWDM) application by using a simple optimization approach based on a Lagrange multiplier optimization (LMO) method. We demonstrate for the first time that the LMO method can be used to constrain various parameters of the designed MCFBG filters for practical application demands and fabrication requirements. The designed filters have a number of merits, i.e., flat-top and low dispersion spectral response as well as single stage. Above all, the maximum amplitude of the index modulation profiles of the designed MCFBGs can be substantially reduced under the applied constrained condition. The simulation results demonstrate that the LMO algorithm can provide a potential alternative for complex fiber grating filter design problems.
Optimal Stabilization of A Quadrotor UAV by a Constrained Fuzzy Control and PSO
Directory of Open Access Journals (Sweden)
Boubertakh Hamid
2017-01-01
Full Text Available This work aims to design an optimal fuzzy PD (FPD) control for the attitude and altitude stabilization of a quadrotor. The control design is done by means of particle swarm optimization (PSO) under the constraints of controller interpretability and actuator saturation. Concretely, a decentralized control structure is adopted where four FPD controllers are used to stabilize the quadrotor angles (roll, pitch and yaw) and height. A PSO-based algorithm is used to simultaneously tune the four constrained controllers with respect to a cost function quantifying overall system performance. Simulation results are presented to show the efficiency of the proposed approach.
Constrained Burn Optimization for the International Space Station
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints, do not employ gradient-based search techniques, and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
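The flavor of simultaneous, gradient-based targeting can be shown on a toy problem: minimize total Δv (measured here as ‖v‖²) subject to a single linear targeting constraint a·v = h, using the analytic constraint gradient a inside a quadratic-penalty gradient descent (invented numbers, no relation to actual ISS dynamics):

```python
def min_dv(a, h, lr=0.003, mu=50.0, steps=5000):
    """Gradient descent on ||v||^2 + mu*(a.v - h)^2; the constraint
    gradient a is available analytically, as in the abstract above."""
    v = [0.0] * len(a)
    for _ in range(steps):
        c = sum(ai * vi for ai, vi in zip(a, v)) - h     # constraint residual
        v = [vi - lr * (2.0 * vi + 2.0 * mu * c * ai) for ai, vi in zip(a, v)]
    return v

v = min_dv([1.0, 2.0], 5.0)   # exact constrained minimizer is (1, 2)
```

The penalized minimizer is h·a/(‖a‖² + 1/μ), within about 0.4% of the true solution for μ = 50; an augmented Lagrangian removes even that bias.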
Directory of Open Access Journals (Sweden)
Samiran Karmakar
2014-07-01
Full Text Available An alternative optimization technique via multiobjective programming for constrained optimization problems with interval-valued objectives has been proposed. Reduction of interval objective functions to noninterval (crisp) ones is the main ingredient of the proposed technique. At first, the significance of interval-valued objective functions, along with the meaning of interval-valued solutions of the proposed problem, is explained graphically. Generally, the proposed problems have infinitely many compromise solutions; the objective is to obtain one of these solutions with higher accuracy and lower computational effort. An adequate number of numerical examples have been solved in support of this technique.
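A minimal sketch of one common way to make an interval objective crisp, as the abstract describes: treat the interval's center and width as two objectives and scalarize them with a weight. The interval function below is hypothetical, and a coarse grid search stands in for a proper solver.

```python
def interval_objective(x):
    # hypothetical interval-valued cost [f_lower(x), f_upper(x)]
    lo = (x - 1.0) ** 2
    hi = lo + 0.5 * abs(x)          # uncertainty width grows with |x|
    return lo, hi

def crisp(x, w=0.5):
    lo, hi = interval_objective(x)
    center, width = (lo + hi) / 2.0, hi - lo
    return w * center + (1.0 - w) * width   # center/width scalarization

# coarse grid search over [-2, 2] stands in for a real optimizer
best = min((i / 1000.0 for i in range(-2000, 2001)), key=crisp)
```

With w = 0.5 the crisp objective is 0.5(x-1)^2 + 0.375|x| for x >= 0, whose minimizer is x = 0.625; sweeping w traces out different compromise solutions.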
Chance-constrained optimization of demand response to price signals
DEFF Research Database (Denmark)
Dorini, Gianluca Fabio; Pinson, Pierre; Madsen, Henrik
2013-01-01
Household-based demand response is expected to play an increasing role in supporting the large-scale integration of renewable energy generation in existing power systems and electricity markets. While the direct control of the consumption level of households is envisaged as a possibility, a credible alternative is that of indirect control based on price signals sent to these end-consumers. A methodology is described here for estimating in advance the potential response of flexible end-consumers to price variations, subsequently embedded in an optimal price-signal generator. In contrast to some real-time pricing proposals in the literature, here prices are estimated and broadcast once a day for the following one, for households to optimally schedule their consumption. The price response is modeled using stochastic finite impulse response (FIR) models. Parameters are estimated...
Efficient relaxations for joint chance constrained AC optimal power flow
Energy Technology Data Exchange (ETDEWEB)
Baker, Kyri; Toomey, Bridget
2017-07-01
Evolving power systems with increasing levels of stochasticity create a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
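A small numeric illustration of the plain Boole (union) bound that the paper tightens: to satisfy a joint chance constraint over m constraints with total risk eps, give each constraint a budget of eps/m. The Gaussian "constraint margins" below are illustrative; the point is that the resulting joint satisfaction rate is at least 1 - eps, and typically conservatively above it.

```python
import random
import statistics

random.seed(1)
eps, m = 0.10, 4                   # joint risk budget, number of constraints
# Union bound: P(any constraint violated) <= m * (eps / m) = eps,
# so enforce each constraint at level 1 - eps/m.
z = statistics.NormalDist().inv_cdf(1.0 - eps / m)

trials = 20000
joint_ok = sum(
    all(random.gauss(0.0, 1.0) <= z for _ in range(m))   # all m margins hold
    for _ in range(trials)
)
rate = joint_ok / trials           # empirical joint satisfaction probability
```

For independent standard-normal margins the true joint rate is (1 - eps/m)^m, about 0.904 here, comfortably above the 0.90 target; that gap is the conservatism a tighter bound can reclaim.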
A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Université catholique de Louvain]
2017-07-25
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithm.
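A toy scalar version of the prediction-correction idea on an unconstrained time-varying problem, to show the two-step structure: first shift the iterate by the predicted motion of the optimum, then take one gradient (correction) step on the current cost. The drifting optimum a(t) and step size are illustrative assumptions, not the paper's algorithm.

```python
import math

def a(t):
    # hypothetical drifting optimum of f_t(x) = (x - a(t))^2
    return math.sin(0.1 * t)

x, alpha = 1.0, 0.3
prev_a = a(0)
errs = []
for t in range(1, 200):
    drift = a(t) - prev_a              # observed motion of the optimum
    prev_a = a(t)
    x = x + drift                      # prediction step
    x = x - alpha * 2.0 * (x - a(t))   # correction: one gradient step on f_t
    errs.append(abs(x - a(t)))
# with the drift compensated, the tracking error contracts by (1 - 2*alpha)
# each step and settles near zero
```

Dropping the prediction step leaves a persistent tracking lag proportional to the drift speed; this gap is what the paper's tracking-error bounds quantify.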
Robust Linear Neural Network for Constrained Quadratic Optimization
Directory of Open Access Journals (Sweden)
Zixin Liu
2017-01-01
Full Text Available Based on the features of the projection operator under box constraints, and by using convex analysis methods, this paper proposes three robust linear systems to solve a class of quadratic optimization problems. Utilizing the linear matrix inequality (LMI) technique, eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and LaSalle's invariance principle, some stability criteria for the related models are also established. Compared with previous criteria derived in the literature cited herein, the stability criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application example in a compressed sensing problem are given to illustrate the validity of the criteria established in this paper.
Thermodynamics constrains allometric scaling of optimal development time in insects.
Directory of Open Access Journals (Sweden)
Dillon, Michael E; Frazier, Melanie R
2013-01-01
Full Text Available Development time is a critical life-history trait that has profound effects on organism fitness and on population growth rates. For ectotherms, development time is strongly influenced by temperature and is predicted to scale with body mass to the quarter power based on 1) the ontogenetic growth model of the metabolic theory of ecology, which describes a bioenergetic balance between tissue maintenance and growth given the scaling relationship between metabolism and body size, and 2) numerous studies, primarily of vertebrate endotherms, that largely support this prediction. However, few studies have investigated the allometry of development time among invertebrates, including insects. Abundant data on development of diverse insects provide an ideal opportunity to better understand the scaling of development time in this ecologically and economically important group. Insects develop more quickly at warmer temperatures until reaching a minimum development time at some optimal temperature, after which development slows. We evaluated the allometry of insect development time by compiling estimates of minimum development time and optimal developmental temperature for 361 insect species from 16 orders with body mass varying over nearly 6 orders of magnitude. Allometric scaling exponents varied with the statistical approach: standardized major axis regression supported the predicted quarter-power scaling relationship, but ordinary and phylogenetic generalized least squares did not. Regardless of the statistical approach, body size alone explained less than 28% of the variation in development time. Models that also included optimal temperature explained over 50% of the variation in development time. Warm-adapted insects developed more quickly, regardless of body size, supporting the "hotter is better" hypothesis that posits that ectotherms have a limited ability to evolutionarily compensate for the depressing effects of low temperatures on rates of biological processes.
Energy Technology Data Exchange (ETDEWEB)
Yang, Chao; Jiang, Wen; Chen, Dong-Hua; Adiga, Umesh; Ng, Esmond G.; Chiu, Wah
2008-07-28
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; this area is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data, with unique performance, complexity, and quality-of-service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers both the "system perspective" and the "user perspective" is proposed to determine near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet delay QoS while achieving higher system throughput in stringently resource-constrained WVSNs.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (that is, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
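A discrete-time sketch of the feasible-direction idea behind this abstract: descend on f while projecting the gradient onto the tangent space of the equality constraint. The Moore-Penrose pseudoinverse keeps the projector well defined even when the constraint gradients are linearly dependent, the singular case the paper addresses (the paper's specific projection matrix is different; this is just the standard pinv-based projector on a toy problem).

```python
import numpy as np

def f_grad(x):
    return 2.0 * x                 # gradient of f(x) = ||x||^2

# Two linearly *dependent* rows encoding the same constraint x0 + x1 = 1:
# J J^T is singular, so an inverse-based projector would fail here.
J = np.array([[1.0, 1.0],
              [2.0, 2.0]])

x = np.array([2.0, -1.0])          # feasible start: x0 + x1 = 1
for _ in range(200):
    P = np.eye(2) - np.linalg.pinv(J) @ J   # projector onto null(J)
    x = x - 0.1 * (P @ f_grad(x))
# iterates stay on the constraint line and converge to the
# constrained minimizer (0.5, 0.5)
```

Because every step lies in null(J), feasibility is preserved exactly, mirroring the paper's invariant constraint manifold.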
Fuzzy Constrained Predictive Optimal Control of High Speed Train with Actuator Dynamics
Directory of Open Access Journals (Sweden)
Xi Wang
2016-01-01
Full Text Available We investigate the problem of fuzzy constrained predictive optimal control of a high speed train considering the effect of actuator dynamics. The dynamics of the high speed train are modeled as a cascade of cars connected by flexible couplers, and the formulation is mathematically transformed into a Takagi-Sugeno (T-S) fuzzy model. The goal of this study is to design a state feedback control law at each decision step to enhance the safety, comfort, and energy efficiency of the high speed train subject to safety constraints on the control input. Based on Lyapunov stability theory, the problem of optimizing an upper bound on the cruise control cost function subject to input constraints is reduced to a convex optimization problem involving linear matrix inequalities (LMIs). Furthermore, we analyze the influence of second-order actuator dynamics on the fuzzy constrained predictive controller, which risks deteriorating the overall system performance. Employing the backstepping method, an actuator compensator is proposed to compensate for the influence of the actuator dynamics. The experimental results show that with the proposed approach the high speed train can track the desired speed, the relative coupler displacement between neighbouring cars is stable at the equilibrium state, and the influence of actuator dynamics is reduced, which demonstrates the validity and effectiveness of the proposed approaches.
Wang, Kun-Yu; Chang, Tsung-Hui; Ma, Wing-Kin; Chi, Chong-Yung
2011-01-01
In this paper we consider a probabilistic signal-to-interference-and-noise ratio (SINR) constrained problem for transmit beamforming design in the presence of imperfect channel state information (CSI), under a multiuser multiple-input single-output (MISO) downlink scenario. In particular, we deal with outage-based quality-of-service constraints, where the probability of each user's SINR not satisfying a service requirement must not exceed a given outage probability specification. The study of solution approaches to the probabilistic SINR constrained problem is important because CSI errors are often present in practical systems and they may cause substantial SINR outages if not handled properly. However, a major technical challenge is how to process the probabilistic SINR constraints. To tackle this, we propose a novel relaxation-restriction (RAR) approach, which consists of two key ingredients: semidefinite relaxation (SDR), and analytic tools for conservatively approximating probabilistic constraints.
Exact Solution of a Constrained Optimization Problem in Thermoelectric Cooling
National Research Council Canada - National Science Library
Wang, Hongyun; Zhou, Hong
2008-01-01
We consider an optimization problem in thermoelectric cooling. The maximum achievable cooling temperature in thermoelectric cooling is, among other things, affected by the Seebeck coefficient profile of the inhomogeneous materials...
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
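A toy illustration of the risk-allocation idea: split a total failure-probability budget across stages, where each stage's failure probability is a Gaussian tail beyond its safety margin, and pick the split minimizing total margin (a stand-in for cost). The two-stage setup, the sigma = 2 "noisier second stage," and the grid search are all illustrative assumptions, not the paper's EDL model.

```python
import statistics

nd = statistics.NormalDist()
delta = 0.01                       # total failure-probability budget

best = None
for k in range(1, 100):
    r1 = delta * k / 100.0         # risk allocated to stage 1
    r2 = delta - r1                # remainder allocated to stage 2
    # safety margin each stage needs to keep its failure probability
    # within budget; stage 2 is assumed twice as noisy (sigma = 2),
    # so its margin is twice as expensive
    cost = nd.inv_cdf(1.0 - r1) + 2.0 * nd.inv_cdf(1.0 - r2)
    if best is None or cost < best[0]:
        best = (cost, r1, r2)
```

At the optimum the noisier (costlier) stage receives the larger share of the risk budget, since relaxing its margin buys the most: exactly the non-uniform allocation that a naive even split misses.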
A constrained optimization framework for compliant underactuated grasping
Directory of Open Access Journals (Sweden)
M. Ciocarlie
2011-02-01
Full Text Available This study focuses on the design and analysis of underactuated robotic hands that use tendons and compliant joints to enable passive mechanical adaptation during grasping tasks. We use a quasistatic equilibrium formulation to predict the stability of a given grasp. This method is then used as the inner loop of an optimization algorithm that can find a set of actuation mechanism parameters that optimize the stability measure for an entire set of grasps. We discuss two possible approaches to design optimization using this framework, one using exhaustive search over the parameter space, and the other using a simplified gripper construction to cast the problem to a form that is directly solvable using well-established optimization methods. Computations are performed in 3-D, allow arbitrary geometry of the grasped objects and take into account frictional constraints.
This paper was presented at the IFToMM/ASME International Workshop on Underactuated Grasping (UG2010, 19 August 2010, Montréal, Canada.
Improved Sensitivity Relations in State Constrained Optimal Control
Energy Technology Data Exchange (ETDEWEB)
Bettiol, Piernicola, E-mail: piernicola.bettiol@univ-brest.fr [Université de Bretagne Occidentale, Laboratoire de Mathematiques (France); Frankowska, Hélène, E-mail: frankowska@math.jussieu.fr [Université Pierre et Marie Curie (Paris 6), CNRS and Institut de Mathématiques de Jussieu (France); Vinter, Richard B., E-mail: r.vinter@imperial.ac.uk [Imperial College London, Department of Electrical and Electronic Engineering (United Kingdom)
2015-04-15
Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both 'full' and 'partial' sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof.
Optimal Constrained Stationary Intervention in Gene Regulatory Networks
Directory of Open Access Journals (Sweden)
Golnaz Vahedi
2008-05-01
Full Text Available A key objective of gene network modeling is to develop intervention strategies to alter regulatory dynamics in such a way as to reduce the likelihood of undesirable phenotypes. Optimal stationary intervention policies have been developed for gene regulation in the framework of probabilistic Boolean networks in a number of settings. To mitigate the possibility of detrimental side effects, for instance, in the treatment of cancer, it may be desirable to limit the expected number of treatments beneath some bound. This paper formulates a general constraint approach for optimal therapeutic intervention by suitably adapting the reward function and then applies this formulation to bound the expected number of treatments. A mutated mammalian cell cycle is considered as a case study.
Numerical Analysis of Constrained, Time-Optimal Satellite Reorientation
Directory of Open Access Journals (Sweden)
Robert G. Melton
2012-01-01
Full Text Available Previous work on time-optimal satellite slewing maneuvers, with one satellite axis (the sensor axis) required to obey multiple path constraints (exclusion from keep-out cones centered on high-intensity astronomical sources), reveals complex motions with no part of the trajectory touching the constraint boundaries (boundary points) or lying along a finite arc of the constraint boundary (boundary arcs). This paper examines four cases in which the sensor axis is either forced to follow a boundary arc, or has initial and final directions that lie on the constraint boundary. Numerical solutions, generated via a Legendre pseudospectral method, show that the forced boundary arcs are suboptimal. Precession created by the control torques, moving the sensor axis away from the constraint boundary, results in faster slewing maneuvers. A two-stage process is proposed for generating optimal solutions in less time, an important consideration for eventual onboard implementation.
Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption
Mathavaraj, S.; Pandiyan, R.; Padhi, R.
2017-12-01
A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective here is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (and is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), which holds the lander attitude for 35 s for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity, and attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
Mesh Adaptive Direct Search Methods for Constrained Nonsmooth Optimization
2012-02-24
The project extended its collaboration circle to mechanical engineering researchers and initiated a new collaboration with A.D. Pelton. Published: A.E. Gheribi, C. Audet, S. Le Digabel, E. Blisle, C.W. Bale and A.D. Pelton, "Calculating optimal conditions for alloy and process design"; A.E. Gheribi, C. Robelin, S. Le Digabel, C. Audet and A.D. Pelton, "Calculating All Local Minima on Liquidus Surfaces Using the FactSage Software and Databases".
Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space
2015-05-01
12 Appendix. Listings 17 List of Symbols , Abbreviations, and Acronyms 31 Distribution List 32 iii List of Figures Fig. 1 Optimization framework: Each...X may be replaced by -H, -F, -Cl, or -Br for a total of 216 possible molecules. ..............................................4 Fig. 2 Flowchart of...Stopping criteria? d = n? Stop d = 1, λ = 0 yes no d = 1 yes no d = d+ 1 Fig. 2 Flowchart of algorithm • Algorithm 1: Complete a full sweep of all
Preconditioning for partial differential equation constrained optimization with control constraints
Stoll, Martin
2011-10-18
Optimal control problems with partial differential equations play an important role in many applications. The inclusion of bound constraints for the control poses a significant additional challenge for optimization methods. In this paper, we propose preconditioners for the saddle point problems that arise when a primal-dual active set method is used. We also show for this method that the same saddle point system can be derived when the method is considered as a semismooth Newton method. In addition, the projected gradient method can be employed to solve optimization problems with simple bounds, and we discuss the efficient solution of the linear systems in question. In the case when an acceleration technique is employed for the projected gradient method, this again yields a semismooth Newton method that is equivalent to the primal-dual active set method. We also consider the Moreau-Yosida regularization method for control constraints and efficient preconditioners for this technique. Numerical results illustrate the competitiveness of these approaches. © 2011 John Wiley & Sons, Ltd.
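A minimal numerical sketch of the simple-bound setting this abstract mentions for the projected gradient method: minimize a convex quadratic subject to box bounds on the control, where the projection onto the box is just a componentwise clip. The matrix, vector, and bounds are illustrative, not a PDE-derived system.

```python
import numpy as np

# min 0.5 x'Ax - b'x  subject to  -0.5 <= x_i <= 0.5
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
lo, hi = -0.5, 0.5

x = np.zeros(2)
for _ in range(500):
    x = np.clip(x - 0.2 * (A @ x - b), lo, hi)   # gradient step, then project
# both bounds are active at the solution (0.5, -0.5): the KKT sign
# conditions hold there, since grad = (-0.25, 1.75)
```

In the PDE-constrained setting the analogue of A is a large saddle-point system, which is where the paper's preconditioners and the active-set / semismooth Newton machinery come in.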
A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.
1980-01-01
The report compares several algorithms, including a generalized reduced gradient method with rounding and neighborhood search, and an integer-gradient, steepest-descent-with-penalty approach.
The International Solar Polar Mission - A problem in constrained optimization
Sweetser, T. H., III; Parmenter, M. E.; Pojman, J. L.
1981-01-01
The International Solar Polar Mission is sponsored jointly by NASA and the European Space Agency to study the sun and the solar environment from a new vantage point. Trajectories far out of the ecliptic plane are achieved by a gravity assist from Jupiter which sends the spacecraft back over the poles of the sun. The process for optimizing these trajectories is described. From the point of view of trajectory design, performance is measured by the time spent at high heliographic latitudes, but many trajectory constraints must be met to ensure spacecraft integrity and good scientific return. The design problem is tractable by closely approximating integrated trajectories with specially calibrated conics. Then the optimum trajectory is found primarily by graphical methods, which were easy to develop and use and are highly adaptable to changes in the plan of the mission.
A hybrid of genetic algorithm and Fletcher-Reeves for bound constrained optimization problems
Directory of Open Access Journals (Sweden)
Asoke Kumar Bhunia
2015-04-01
Full Text Available In this paper a hybrid algorithm for solving bound constrained optimization problems having continuously differentiable objective functions, using the Fletcher-Reeves method and an advanced genetic algorithm (GA), has been proposed. In this approach, a GA with advanced operators has been applied to compute the step length in the feasible direction in each iteration of the Fletcher-Reeves method. This idea has then been extended from a single-point approximation to a set of multi-point approximations, to avoid convergence of the existing method to a local optimum, and a new method, called the population-based Fletcher-Reeves method, has been proposed to find the global optimum or a point nearer to it. Finally, to study the performance of the proposed method, several multi-dimensional standard test functions having continuous partial derivatives have been solved. The results have been compared with those of a recently developed hybrid algorithm with respect to different comparative factors.
An Adaptive Primal-Dual Subgradient Algorithm for Online Distributed Constrained Optimization.
Yuan, Deming; Ho, Daniel W C; Jiang, Guo-Ping
2017-10-05
In this paper, we consider the problem of solving distributed constrained optimization over a multiagent network that consists of multiple interacting nodes in an online setting, where the objective functions of the nodes are time-varying and the constraint set is characterized by an inequality. By introducing a regularized convex-concave function, we present a consensus-based adaptive primal-dual subgradient algorithm that removes the need for knowing the total number of iterations T in advance. We show that the proposed algorithm attains an O(T^(1/2+c)) (where c ∈ (0,1/2)) regret bound and an O(T^(1-c/2)) bound on the violation of constraints; in addition, we show an improvement to an O(T^c) regret bound when the objective functions are strongly convex. The proposed algorithm allows a novel tradeoff between the regret and the violation of constraints. Finally, a numerical example is provided to illustrate the effectiveness of the algorithm.
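A single-agent toy version of the primal-dual subgradient loop underlying this abstract: a primal gradient step on the Lagrangian of the current cost, then a projected dual ascent step on the inequality constraint. The drifting cost c_t, the constraint x <= 0.5, and the fixed step size are illustrative; the paper's algorithm adds consensus across nodes, adaptivity, and regularization.

```python
import math

# online problem: min (x - c_t)^2  subject to  g(x) = x - 0.5 <= 0
x, lam, alpha = 0.0, 0.0, 0.05
for t in range(2000):
    c_t = 1.0 + 0.1 * math.sin(0.01 * t)     # slowly time-varying objective
    x = x - alpha * (2.0 * (x - c_t) + lam)  # primal step; grad of g is 1
    lam = max(0.0, lam + alpha * (x - 0.5))  # dual subgradient ascent, lam >= 0
# since c_t stays above 0.5 the constraint binds: x hovers near 0.5 and
# lam tracks the multiplier 2*(c_t - 0.5), roughly 1
```

The running sums of (x - c_t)^2 against the best fixed feasible point, and of max(0, x - 0.5), are exactly the regret and constraint-violation quantities the paper bounds.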
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems, and constrained problems without altering the base algorithm. JuPOETs is open source.
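A minimal sketch of the Pareto-dominance test at the heart of techniques like the one this abstract describes (the points below are made-up objective vectors, not JuPOETs output): a candidate belongs to the ensemble's tradeoff surface only if no other candidate is at least as good on every objective and strictly better on one.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# made-up (objective1, objective2) values for five candidate parameter sets
pts = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
front = pareto_front(pts)   # (3,3) and (4,4) are dominated by (2,2)
```

A simulated-annealing-plus-Pareto scheme repeatedly proposes perturbed parameter sets and accepts them based on dominance rank, accumulating a front like this rather than a single best fit.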
Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; Chen, Guo; He, Xing
2016-02-01
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and the uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and used to solve the hierarchical and distributed model predictive control four-tank benchmark.
Alonso Mora, J.; Baker, Stuart; Rus, Daniela
2017-01-01
We present a constrained optimization method for multi-robot formation control in dynamic environments, where the robots adjust the parameters of the formation, such as size and three-dimensional orientation, to avoid collisions with static and moving obstacles, and to make progress towards their
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Farouq, S.; Neytcheva, M.
2017-01-01
Roč. 74, č. 1 (2017), s. 19-37 ISSN 1017-1398 Institutional support: RVO:68145535 Keywords : PDE-constrained optimization problems * finite elements * iterative solution methods * preconditioning Subject RIV: BA - General Mathematics Impact factor: 1.241, year: 2016 https://link.springer.com/article/10.1007%2Fs11075-016-0136-5
Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods
DEFF Research Database (Denmark)
Capolei, Andrea; Jørgensen, John Bagterp
2012-01-01
In this paper, we describe a novel numerical algorithm for solution of constrained optimal control problems of the Bolza type for stiff and/or unstable systems. The numerical algorithm combines explicit singly diagonally implicit Runge-Kutta (ESDIRK) integration methods with a multiple shooting...
Stability Constrained Efficiency Optimization for Droop Controlled DC-DC Conversion System
DEFF Research Database (Denmark)
Meng, Lexuan; Dragicevic, Tomislav; Guerrero, Josep M.
2013-01-01
implementing tertiary regulation. Moreover, system dynamic is affected when shifting VRs. Therefore, the stability is considered in optimization by constraining the eigenvalues arising from dynamic state space model of the system. Genetic algorithm is used in searching for global efficiency optimum while...
Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization
Reyes, Juan Carlos De los
2013-11-01
We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
Directory of Open Access Journals (Sweden)
R. Venkata Rao
2016-01-01
Full Text Available The teaching-learning-based optimization (TLBO) algorithm has found a large number of applications in different fields of engineering and science since its introduction in 2011. The major applications are found in electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics, chemistry, biotechnology and economics. This paper presents a review of applications of the TLBO algorithm and a tutorial for solving unconstrained and constrained optimization problems. The tutorial is expected to be useful to beginners.
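One TLBO iteration consists of a teacher phase (move towards the best solution, away from the population mean) and a learner phase (pairwise learning). A minimal bound-constrained sketch, with parameter choices and names of our own (not taken from the paper's tutorial):

```python
import random

def tlbo_step(pop, f, lb, ub):
    """One TLBO iteration (teacher phase + learner phase), minimization.
    pop: list of parameter vectors; f: objective; lb/ub: bound vectors."""
    n, d = len(pop), len(pop[0])
    clip = lambda x: [min(max(v, lb[j]), ub[j]) for j, v in enumerate(x)]
    # Teacher phase: move each learner towards the teacher (current best),
    # shifted by the scaled population mean.
    teacher = min(pop, key=f)
    mean = [sum(x[j] for x in pop) / n for j in range(d)]
    for i in range(n):
        tf = random.choice((1, 2))  # teaching factor
        cand = clip([pop[i][j] + random.random() * (teacher[j] - tf * mean[j])
                     for j in range(d)])
        if f(cand) < f(pop[i]):    # greedy selection
            pop[i] = cand
    # Learner phase: interact with a random peer, moving towards the better one.
    for i in range(n):
        k = random.choice([j for j in range(n) if j != i])
        sign = 1 if f(pop[i]) < f(pop[k]) else -1
        cand = clip([pop[i][j] + sign * random.random() * (pop[i][j] - pop[k][j])
                     for j in range(d)])
        if f(cand) < f(pop[i]):
            pop[i] = cand
    return pop
```

Because candidates are only accepted on improvement, the best objective value in the population is non-increasing across iterations.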
A variant constrained genetic algorithm for solving conditional nonlinear optimal perturbations
Zheng, Qin; Sha, Jianxin; Shu, Hang; Lu, Xiaoqing
2014-01-01
A variant constrained genetic algorithm (VCGA) for effective tracking of conditional nonlinear optimal perturbations (CNOPs) is presented. Compared with traditional constraint handling methods, the treatment of the constraint condition in VCGA is relatively easy to implement. Moreover, it does not require adjustments to indefinite parameters. Using a hybrid crossover operator and the newly developed multi-ply mutation operator, VCGA improves the performance of GAs. To demonstrate the capability of VCGA to catch CNOPs in non-smooth cases, a partial differential equation, which has "on-off" switches in its forcing term, is employed as the nonlinear model. To search global CNOPs of the nonlinear model, numerical experiments using VCGA, the traditional gradient descent algorithm based on the adjoint method (ADJ), and a GA using tournament selection operation and the niching technique (GA-DEB) were performed. The results with various initial reference states showed that, in smooth cases, all three optimization methods are able to catch global CNOPs. Nevertheless, in non-smooth situations, a large proportion of CNOPs captured by the ADJ are local. Compared with ADJ, the performance of GA-DEB shows considerable improvement, but it is still far below that of VCGA. Further, the impacts of population sizes on both VCGA and GA-DEB were investigated. The results were used to estimate the computation time of VCGA and GA-DEB in obtaining CNOPs. The computational costs for VCGA, GA-DEB and ADJ to catch CNOPs of the nonlinear model are also compared.
Colorimetric characterization of LCD based on constrained least squares
LI, Tong; Xie, Kai; Wang, Qiaojie; Yao, Luyang
2017-01-01
In order to improve the accuracy of colorimetric characterization of liquid crystal displays, a tone matrix model for display characterization in color management is established by using constrained least squares for quadratic polynomial fitting, to find the relationship between the RGB color space and the CIEXYZ color space. 51 sets of training samples were collected to solve for the parameters, and the accuracy of the color space mapping model was verified with 100 groups of random verification samples. The experimental results showed that, with the constrained least squares method, the accuracy of color mapping was high: the maximum color difference of this model is 3.8895 and the average color difference is 1.6689, which proves that the method has a good optimization effect on the colorimetric characterization of liquid crystal displays.
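The quadratic-polynomial fitting step can be illustrated with a plain (unconstrained) least-squares version; the paper's constrained formulation adds conditions that this sketch omits, and all names here are our own:

```python
import numpy as np

def quad_features(rgb):
    """Quadratic polynomial expansion of an RGB triple (10 terms)."""
    r, g, b = rgb
    return np.array([1.0, r, g, b, r * r, g * g, b * b, r * g, r * b, g * b])

def fit_tone_matrix(rgb_samples, xyz_samples):
    """Least-squares fit of a quadratic RGB->XYZ mapping. Illustrative
    stand-in for the paper's constrained least squares: the constraints
    are omitted, so this is ordinary lstsq on the expanded features."""
    A = np.array([quad_features(c) for c in rgb_samples])            # n x 10
    M, *_ = np.linalg.lstsq(A, np.array(xyz_samples), rcond=None)    # 10 x 3
    return M

def rgb_to_xyz(rgb, M):
    """Map one RGB triple to XYZ through the fitted matrix."""
    return quad_features(rgb) @ M
```

With 51 training samples and 10 polynomial terms, the system is overdetermined, which is the setting described in the abstract.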
Barbarosou, Maria P; Maratos, Nicholas G
2008-10-01
In this paper, a recurrent neural network for both convex and nonconvex equality-constrained optimization problems is proposed, which makes use of a cost gradient projection onto the tangent space of the constraints. The proposed neural network constructs a generically nonfeasible trajectory, satisfying the constraints only as t --> infinity. Local convergence results are given that do not assume convexity of the optimization problem to be solved. Global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold both for the convex case and the nonconvex case. Numerical results indicate that the proposed method is efficient and accurate.
Zhang, Songchuan; Xia, Youshen; Wang, Jun
2015-12-01
In this paper, we present a complex-valued projection neural network for solving constrained convex optimization problems of real functions with complex variables, as an extension of real-valued projection neural networks. Theoretically, by developing results on complex-valued optimization techniques, we prove that the complex-valued projection neural network is globally stable and convergent to the optimal solution. Obtained results are completely established in the complex domain and thus significantly generalize existing results of the real-valued projection neural networks. Numerical simulations are presented to confirm the obtained results and effectiveness of the proposed complex-valued projection neural network.
SmartFix: Indoor Locating Optimization Algorithm for Energy-Constrained Wearable Devices
Directory of Open Access Journals (Sweden)
Xiaoliang Wang
2017-01-01
Full Text Available Indoor localization technology based on Wi-Fi has long been a hot research topic in the past decade. Despite numerous solutions, new challenges have arisen along with the trend of smart home and wearable computing. For example, power efficiency needs to be significantly improved for resource-constrained wearable devices, such as smart watch and wristband. For a Wi-Fi-based locating system, most of the energy consumption can be attributed to real-time radio scan; however, simply reducing radio data collection will cause a serious loss of locating accuracy because of unstable Wi-Fi signals. In this paper, we present SmartFix, an optimization algorithm for indoor locating based on Wi-Fi RSS. SmartFix utilizes user motion features, extracts characteristic value from history trajectory, and corrects deviation caused by unstable Wi-Fi signals. We implemented a prototype of SmartFix both on Moto 360 2nd-generation Smartwatch and on HTC One Smartphone. We conducted experiments both in a large open area and in an office hall. Experiment results demonstrate that average locating error is less than 2 meters for more than 80% cases, and energy consumption is only 30% of Wi-Fi fingerprinting method under the same experiment circumstances.
Optimization of constrained layer damping for strain energy minimization of vibrating pads
Directory of Open Access Journals (Sweden)
Supachai Lakkam
2012-04-01
Full Text Available An optimization study for brake squeal aims to minimize the strain energy of vibrating pads with constrained layer damping. To achieve this, the finite element method and experiments were used, and the assumed-coupling mode method was used to solve the problem. The integrated global strain energy of the pad over a frequency range covering the mode of interest was calculated. Parametric studies were then performed to identify the dominant parameters of the vibration response of the damped pad. Moreover, the proposed methodology was employed to search for the optimal position/geometry of the constrained layer damping patch. Optimal solutions are given and discussed for different cases where the strain energy of the pad over a frequency range covers the first bending mode, with the inclusion of a restriction on minimum damping material utilization. As a result, the integrated strain energy is used to identify and optimize the position and geometry of the damping shim. The optimization of constrained layer damping for strain energy minimization of vibrating pads depends on the position and shape of the damping patch. These data can guide the specification of the position of the constrained layer damping patch under pressure conditions.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e. loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
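Under a normal approximation, a chance constraint of this kind reduces to a deterministic margin condition: the mean ratio must exceed the threshold by z standard deviations. The sketch below is purely illustrative and is not the paper's CreditMetrics-based convex counterpart; all names are assumptions:

```python
from statistics import NormalDist

def chance_constraint_margin(mu, sigma, level=0.95):
    """Deterministic counterpart of P(ratio >= threshold) >= level under a
    normal approximation: the requirement becomes
        mu - z_level * sigma >= threshold,
    where z_level is the standard normal quantile at `level`."""
    z = NormalDist().inv_cdf(level)
    return mu - z * sigma

def meets_capital_requirement(mu_ratio, sigma_ratio, threshold, level=0.95):
    """True if the bank meets the capital threshold with probability >= level."""
    return chance_constraint_margin(mu_ratio, sigma_ratio, level) >= threshold
```

For example, a mean ratio of 12% with a 1% standard deviation clears a 10% threshold at the 95% level, while an 11% mean does not.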
A comparative study on optimization methods for the constrained nonlinear programming problems
Directory of Open Access Journals (Sweden)
Yeniay Ozgur
2005-01-01
Full Text Available Constrained nonlinear programming problems often arise in many engineering applications. The most well-known optimization methods for solving these problems are sequential quadratic programming methods and generalized reduced gradient methods. This study compares the performance of these methods with genetic algorithms, which have gained popularity in recent years due to advantages in speed and robustness. We present a comparative study performed on fifteen test problems selected from the literature.
A first-order multigrid method for bound-constrained convex optimization
Czech Academy of Sciences Publication Activity Database
Kočvara, Michal; Mohammed, S.
2016-01-01
Roč. 31, č. 3 (2016), s. 622-644 ISSN 1055-6788 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others:European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords : bound-constrained optimization * multigrid methods * linear complementarity problems Subject RIV: BA - General Mathematics Impact factor: 1.023, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0460326.pdf
A Bi-Projection Neural Network for Solving Constrained Quadratic Optimization Problems.
Xia, Youshen; Wang, Jun
2016-02-01
In this paper, a bi-projection neural network for solving a class of constrained quadratic optimization problems is proposed. It is proved that the proposed neural network is globally stable in the sense of Lyapunov, and the output trajectory of the proposed neural network will converge globally to an optimal solution. Compared with existing projection neural networks (PNNs), the proposed neural network has a very small model size owing to its bi-projection structure. Furthermore, an application to data fusion shows that the proposed neural network is very effective. Numerical results demonstrate that the proposed neural network is much faster than the existing PNNs.
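The general idea of a projection neural network for a constrained quadratic program can be sketched with a single projection and explicit Euler integration of the network dynamics; the paper's bi-projection structure is different, and this box-constrained sketch (with names and step sizes of our own) is only the textbook single-projection variant:

```python
def solve_box_qp(Q, c, lo, hi, alpha=0.05, steps=5000):
    """Projection neural network for min 0.5 x'Qx + c'x  s.t. lo <= x <= hi.
    Network dynamics: dx/dt = P(x - alpha * (Qx + c)) - x, where P clips to
    the box; integrated here with explicit Euler steps."""
    n = len(c)
    x = [0.0] * n
    proj = lambda v: [min(max(v[i], lo[i]), hi[i]) for i in range(n)]
    for _ in range(steps):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        target = proj([x[i] - alpha * grad[i] for i in range(n)])
        x = [x[i] + 0.2 * (target[i] - x[i]) for i in range(n)]  # Euler step
    return x
```

At an equilibrium, x equals its own projected gradient step, which is exactly the optimality condition of the box-constrained QP.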
Roselyn, J. Preetha; Devaraj, D.; Dash, Subhransu Sekhar
2013-11-01
Voltage stability is an important issue in the planning and operation of deregulated power systems. The voltage stability problem is one of the most challenging for system operators in deregulated power systems because of the intense use of transmission line capabilities and poor regulation in the market environment. This article addresses the congestion management problem, avoiding offline transmission capacity limits related to voltage stability, by considering the Voltage Security Constrained Optimal Power Flow (VSCOPF) problem in a deregulated environment. This article presents the application of the Multi Objective Differential Evolution (MODE) algorithm to solve the VSCOPF problem in new competitive power systems. The maximum of the L-index of the load buses is taken as the indicator of voltage stability and is incorporated into the Optimal Power Flow (OPF) problem. The proposed method, in a hybrid power market, also gives solutions to voltage stability problems by considering the generation rescheduling cost and the load shedding cost, which relieves the congestion problem in the deregulated environment. The buses for load shedding are selected based on the minimum eigenvalue of the Jacobian with respect to the load shed. In the proposed approach, the real power settings of generators in the base case and contingency cases, generator bus voltage magnitudes, and the real and reactive power demands of load buses selected using sensitivity analysis are taken as the control variables and are represented as a combination of floating point numbers and integers. The DE/randSF/1/bin strategy scheme of differential evolution with self-tuned parameters, which employs binomial crossover and difference-vector-based mutation, is used for the VSCOPF problem. A fuzzy based mechanism is employed to get the best compromise solution from the pareto front to aid the decision maker. The proposed VSCOPF planning model is implemented on the IEEE 30-bus system, the IEEE 57-bus practical system and the IEEE 118-bus system. The pareto optimal
On the Optimal Identification of Tag Sets in Time-Constrained RFID Configurations
Directory of Open Access Journals (Sweden)
Juan Manuel Pérez-Mañogil
2011-03-01
Full Text Available In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time of the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. Besides, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
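A toy version of such a Markov-chain computation can make the idea concrete. Here a single per-slot identification probability stands in for the protocol-specific transition structure used in the paper, so this is only a sketch under that simplifying assumption:

```python
def identification_time_cdf(n_tags, p_slot, max_slots):
    """CDF of the identification time for a simplified discrete-time Markov
    chain: in each slot one pending tag is identified with probability
    p_slot. State = number of identified tags; state n_tags is absorbing."""
    dist = [0.0] * (n_tags + 1)   # dist[k] = P(k tags identified so far)
    dist[0] = 1.0
    cdf = []
    for _ in range(max_slots):
        nxt = [0.0] * (n_tags + 1)
        for k in range(n_tags):
            nxt[k] += dist[k] * (1.0 - p_slot)
            nxt[k + 1] += dist[k] * p_slot
        nxt[n_tags] += dist[n_tags]       # absorbing state
        dist = nxt
        cdf.append(dist[n_tags])          # P(all identified within t slots)
    return cdf

def loss_probability(n_tags, p_slot, sojourn_slots):
    """Probability that some tag leaves unidentified within the sojourn time."""
    return 1.0 - identification_time_cdf(n_tags, p_slot, sojourn_slots)[-1]
```

Evaluating the CDF at the sojourn time gives exactly the tag-loss criterion described in the abstract.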
Evaluating potentialities and constrains of Problem Based Learning curriculum
DEFF Research Database (Denmark)
Guerra, Aida
2013-01-01
This paper presents a research design to evaluate Problem Based Learning (PBL) curriculum potentialities and constrains for future changes. PBL literature lacks examples of how to evaluate and analyse established PBL learning environments to address new challenges posed. The research design encloses three methodological approaches to investigate three interrelated research questions. Phase one, a literature review, aims to develop a theoretical and analytical framework. The second phase aims to investigate examples of practices that combine PBL and Education for Sustainable Development (ESD)…
Hadroproduction experiments to constrain accelerator-based neutrino fluxes
Zambelli, Laura
2017-09-01
The precise knowledge of (anti-)neutrino fluxes is one of the largest limitations in accelerator-based neutrino experiments. The main limitations arise from the poorly known production properties of neutrino parents in hadron-nucleus interactions. Strategies used by neutrino experiments to constrain their fluxes using external hadroproduction data are described and illustrated with the example of the tight collaboration between the T2K and NA61/SHINE experiments. This enabled a reduction of the T2K neutrino flux uncertainty from ∼25% (without external constraints) down to ∼10%. On-going developments to further constrain the T2K (anti-)neutrino flux are discussed and recent results from NA61/SHINE are reviewed. As the next-generation long baseline experiments aim for a neutrino flux uncertainty at the level of a few percent, the future data-taking plans of NA61/SHINE are discussed.
Dynamic optimization and its relation to classical and quantum constrained systems
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
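For reference, the Hamilton-Jacobi-Bellman equation mentioned above has the standard dynamic-programming form, written here in generic textbook notation for a value function S(x, t) with running payoff L and dynamics ẋ = f (this is the standard form, not the paper's specific derivation):

```latex
\frac{\partial S}{\partial t}(x,t)
  + \max_{u}\left[ L(x,u,t) + \frac{\partial S}{\partial x}(x,t)\, f(x,u,t) \right] = 0,
\qquad S(x,T) = S_T(x).
```

Substituting Ψ = e^{iS} into a Schrödinger-type equation and keeping the leading terms yields a non-linear equation of this type for S, which is the correspondence the abstract describes.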
Directory of Open Access Journals (Sweden)
Zhanpeng Fang
2015-01-01
Full Text Available A topology optimization method is proposed to minimize the resonant response of plates with constrained layer damping (CLD) treatment under specified broadband harmonic excitations. The topology optimization problem is formulated and the square of the displacement resonant response in the frequency domain at a specified point is taken as the objective function. Two sensitivity analysis methods are investigated and discussed. The derivative of the modal damping ratio is not considered in the conventional sensitivity analysis method. An improved sensitivity analysis method considering the derivative of the modal damping ratio is developed to improve the computational accuracy of the sensitivity. The evolutionary structural optimization (ESO) method is used to search for the optimal layout of CLD material on plates. Numerical examples and experimental results show that the optimal layout of CLD treatment on the plate obtained from the proposed topology optimization, using either the conventional or the improved sensitivity analysis, can reduce the displacement resonant response. However, the optimization using the improved sensitivity analysis produces a higher modal damping ratio and a smaller displacement resonant response than that using the conventional sensitivity analysis.
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
Energy Technology Data Exchange (ETDEWEB)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
2017-09-01
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem
Directory of Open Access Journals (Sweden)
Sengupta Raghu Nandan
2017-02-01
Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio's CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as uncertain and highly volatile financial markets.
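The CVaR term in such an objective can be evaluated on loss samples via the Rockafellar-Uryasev representation, whose convexity is what allows it to be embedded in an SOCP. A minimal evaluation sketch (the solver itself is not reproduced; names are ours):

```python
def cvar(losses, beta=0.95):
    """Sample CVaR at level beta via the Rockafellar-Uryasev formula:
        CVaR_beta = min_a  a + E[(L - a)^+] / (1 - beta),
    where the minimizing a is the VaR. For a finite sample, the minimum
    is attained at one of the sample points, so we just scan them."""
    xs = sorted(losses)
    n = len(xs)
    best = float("inf")
    for a in xs:
        val = a + sum(max(l - a, 0.0) for l in xs) / ((1.0 - beta) * n)
        best = min(best, val)
    return best
```

For losses 1..10, CVaR at level 0.5 is the mean of the worst half (8.0), matching the usual tail-expectation interpretation.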
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, literature review shows that there are very few applications of nature-inspired metaheuristics to portfolio optimization problem. This is especially true for swarm intelligence algorithms which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristics to cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. Firefly algorithm is one of the latest, very successful swarm intelligence algorithm; however, it exhibits some deficiencies when applied to constrained problems. To overcome lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while introduction of entropy diversity constraint further improved results.
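The base firefly algorithm that the paper modifies can be sketched as follows. This is the standard FA for box-constrained minimization (attraction decaying with squared distance plus a cooled random walk); the paper's early-iteration exploration modification and the CCMV constraint handling are not included, and all parameter values are our illustrative choices:

```python
import math
import random

def firefly_minimize(f, lb, ub, n=15, iters=100,
                     beta0=1.0, gamma=0.01, alpha=0.2):
    """Basic firefly algorithm: each firefly moves towards every brighter
    (lower-cost) one with distance-decaying attractiveness beta0*exp(-gamma*r^2),
    plus a small random walk whose scale alpha is cooled each iteration."""
    d = len(lb)
    clip = lambda x: [min(max(v, lb[j]), ub[j]) for j, v in enumerate(x)]
    pop = [[random.uniform(lb[j], ub[j]) for j in range(d)] for _ in range(n)]
    for _ in range(iters):
        cost = [f(x) for x in pop]
        for i in range(n):
            for k in range(n):
                if cost[k] < cost[i]:     # k is brighter: move i towards k
                    r2 = sum((pop[i][j] - pop[k][j]) ** 2 for j in range(d))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = clip([pop[i][j]
                                   + beta * (pop[k][j] - pop[i][j])
                                   + alpha * (random.random() - 0.5)
                                   for j in range(d)])
                    cost[i] = f(pop[i])
        alpha *= 0.97                     # cool the random walk
    return min(pop, key=f)
```

The brightest firefly stays put during a sweep, so the swarm collapses around the best region while the shrinking random walk refines it.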
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
Energy Technology Data Exchange (ETDEWEB)
Sen, Satyabrata [ORNL
2014-01-01
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the estimates of the target scattering coefficients. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
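The PAPR quantity being constrained can be computed directly from the OFDM subcarrier coefficients: synthesize the time-domain signal via an (oversampled) inverse DFT and compare peak to mean power. A small self-contained sketch (the naive O(n^2) DFT is used for clarity, and the function name is ours):

```python
import cmath
import math

def papr_db(symbols, oversample=4):
    """Peak-to-average power ratio (in dB) of one OFDM symbol: inverse-DFT
    the subcarrier coefficients (zero-padded for oversampling) and take
    max power over mean power of the time-domain samples."""
    n = len(symbols) * oversample
    padded = list(symbols) + [0j] * (n - len(symbols))
    time = [sum(padded[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]
    powers = [abs(x) ** 2 for x in time]
    return 10.0 * math.log10(max(powers) / (sum(powers) / n))
```

A single active subcarrier gives a constant envelope (0 dB PAPR), while N equal coefficients produce an impulse and hence a PAPR of 10·log10(N) dB, which is why PAPR constraints matter for multi-carrier radar waveforms.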
Integrated radar-photometry sensor based on constrained optical flow
Fablet, Youenn; Agam, Gady; Cohen, Paul
2000-06-01
Robotic teleoperation is a major research area with numerous applications. Efficient teleoperation, however, greatly depends on the provided sensory information. In this paper, an integrated radar-photometry sensor is presented. The developed sensor relies on the strengths of its two main modalities: robust radar-based range data, and high resolution dynamic photometric imaging. While radar data has low resolution and depth from motion in photometric images is susceptible to poor visibility conditions, the integrated sensor compensates for the flaws of the individual components. The integration of the two modalities is achieved by using the radar-based range data to constrain the optical flow estimation, and fusing the resulting depth maps. The optical flow computation is constrained by a model flow field based upon the radar data, by using a rigidity constraint, and by incorporating edge information into the optical flow estimation. The data fusion is based upon a confidence estimation of the image-based depth computation. Results with simulated data demonstrate the good potential of the approach.
Jędrzejowicz, Piotr; Kacprzyk, Janusz
2013-01-01
This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of the agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.
Constrained time-optimal control of double-integrator system and its application in MPC
Fehér, Marek; Straka, Ondřej; Šmídl, Václav
2017-01-01
The paper deals with the design of a time-optimal controller for systems subject to both state and control constraints. The focus is laid on a double-integrator system, for which the time-to-go function is calculated. The function is then used as part of a model predictive control criterion, where it represents the long-horizon part. The designed model predictive control algorithm is then applied to a constrained control problem of a permanent magnet synchronous motor model, whose behavior can be approximated by a double integrator. Achievement of the control goals is illustrated in a numerical example.
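For a double integrator with bounded control, the time-to-go used as the long-horizon term has a classical closed form derived from the bang-bang switching curve. A sketch for a target at the origin (the function name is illustrative; the paper's exact formulation may differ):

```python
import math

def time_to_go(x, v, a_max):
    """Minimum time to drive a double integrator (position x, velocity v)
    to the origin with |u| <= a_max, via the classic bang-bang solution."""
    # The switching curve x = -v|v|/(2 a_max) splits the phase plane.
    if x + v * abs(v) / (2.0 * a_max) > 0.0:
        # Decelerate with u = -a_max first, then switch to u = +a_max.
        return v / a_max + 2.0 * math.sqrt(v * v / (2.0 * a_max**2) + x / a_max)
    else:
        # Mirror case: u = +a_max first, then u = -a_max.
        return -v / a_max + 2.0 * math.sqrt(v * v / (2.0 * a_max**2) - x / a_max)
```

For example, starting at rest one unit from the origin with unit acceleration bound, the optimal maneuver takes 2 time units (accelerate half-way, brake half-way).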
GA based CNC turning center exploitation process parameters optimization
Directory of Open Access Journals (Sweden)
Z. Car
2009-01-01
Full Text Available This paper presents machining parameters (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. Optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
Directory of Open Access Journals (Sweden)
Jui-Yu Wu
2012-01-01
Full Text Available This work presents a hybrid real-coded genetic algorithm with a particle swarm optimization (RGA-PSO) algorithm and a hybrid artificial immune algorithm with a PSO (AIA-PSO) algorithm for solving 13 constrained global optimization (CGO) problems, including six nonlinear programming and seven generalized polynomial programming optimization problems. External RGA and AIA approaches are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm. CGO problems are then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using 13 CGO problems. Moreover, numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms are compared with those obtained using published individual GA and AIA approaches. Experimental results indicate that the proposed RGA-PSO and AIA-PSO algorithms converge to a global optimum solution to a CGO problem. Furthermore, the optimum parameter settings of the internal PSO algorithm can be obtained using the external RGA and AIA approaches. Also, the proposed RGA-PSO and AIA-PSO algorithms outperform some published individual GA and AIA approaches. Therefore, the proposed RGA-PSO and AIA-PSO algorithms are highly promising stochastic global optimization methods for solving CGO problems.
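The internal PSO with penalty-based constraint handling can be sketched as follows. The constriction coefficient, cognitive/social parameters, and penalty value below are typical textbook defaults, not the tuned values obtained by the external RGA/AIA approaches:

```python
import random

def pso_penalty(obj, cons, bounds, penalty=1e3, chi=0.729, c1=1.49, c2=1.49,
                n_particles=30, iters=200, seed=0):
    """Minimal constriction-type PSO minimizing obj(x) subject to
    inequality constraints cons(x) <= 0, handled by a static penalty term."""
    rng = random.Random(seed)
    dim = len(bounds)
    def fitness(x):
        viol = sum(max(0.0, g) for g in cons(x))    # total constraint violation
        return obj(x) + penalty * viol
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                           # personal bests
    pf = [fitness(x) for x in X]
    g_idx = min(range(n_particles), key=lambda i: pf[i])
    gbest, gf = P[g_idx][:], pf[g_idx]              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = chi * (V[i][d]
                                 + c1 * rng.random() * (P[i][d] - X[i][d])
                                 + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(X[i])
            if f < pf[i]:
                P[i], pf[i] = X[i][:], f
                if f < gf:
                    gbest, gf = X[i][:], f
    return gbest, gf
```

On the toy problem min (x-2)^2 subject to x <= 1, the swarm converges to the constrained optimum x = 1.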
Multi-Constrained Optimal Control of 3D Robotic Arm Manipulators
Trivailo, Pavel M.; Fujii, Hironori A.; Kojima, Hirohisa; Watanabe, Takeo
This paper presents a generic method for optimal motion planning for three-dimensional 3-DOF multi-link robotic manipulators. We consider the operation of manipulator systems involving constrained payload transportation/capture/release, subject to the minimization of a user-defined objective function, enabling, for example, minimization of the transfer time and/or actuation efforts. It should be stressed that the task is solved in the presence of arbitrary multiple additional constraints. The solutions of the associated nonlinear differential equations of motion are obtained numerically using the direct transcription method. The direct method seeks to transform the continuous optimal control problem into a discrete mathematical programming problem, which in turn is solved using a nonlinear programming algorithm. By discretizing the state and control variables at a series of nodes, the integration of the dynamical equations of motion is not required. The Chebyshev pseudospectral method, due to its high accuracy and fast computation times, was chosen as the direct optimization method employed to solve the problem. To illustrate the capabilities of the methodology, maneuvering of RRR 3D robot manipulators was considered in detail. Their optimal operations were simulated for manipulators bound to move their effectors along a specified 2D plane and 3D spherical and cylindrical surfaces (imitating, for example, welding, tooling or scanning robots).
Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems
Directory of Open Access Journals (Sweden)
Man Hong
2013-01-01
Full Text Available To precisely track the reactor temperature in the entire working condition, the constrained Hammerstein-Wiener model describing nonlinear chemical processes such as the continuous stirred tank reactor (CSTR) is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law regarding the linear subsystem is derived by state observation. The size of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by the multistep predictive approach. Actual control variables are computed while considering the constraints of the optimal control problem in a finite horizon through the receding horizon. The simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.
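The Kalman filter's predict/update cycle at the core of such a scheme can be sketched generically. The matrix names follow the usual linear state-space convention; this is a generic sketch, not the paper's CSTR model:

```python
import numpy as np

def kalman_step(x, P, A, C, Q, R, y):
    """One predict/update cycle of a linear Kalman filter:
    state estimate x with covariance P, model x+ = Ax + w, y = Cx + v,
    process noise covariance Q, measurement noise covariance R."""
    # Predict the state and covariance forward one step.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement y.
    S = C @ P_pred @ C.T + R                  # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Each measurement shrinks the state covariance, which is what lets the filter estimate unmeasured quantities (such as the reaction heat in the abstract) for the predictive controller.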
A subgradient approach for constrained binary optimization via quantum adiabatic evolution
Karimi, Sahar; Ronagh, Pooya
2017-08-01
An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor to succeed more often in solving these problems compared to the usual penalty-term approaches.
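The projected subgradient iteration on the Lagrangian dual can be sketched on a toy instance, with the inner binary minimization done by enumeration instead of a quantum annealer (function names and the diminishing step rule are illustrative):

```python
import itertools

def lagrangian_dual_subgradient(f, g, n, steps=100, step0=1.0):
    """Projected subgradient ascent on the Lagrangian dual of
    min f(x) s.t. g(x) <= 0, x in {0,1}^n.
    The inner minimization is brute-force, so only small n is practical."""
    lam = 0.0
    for k in range(1, steps + 1):
        # Inner problem: minimize the Lagrangian over all binary vectors.
        x = min(itertools.product((0, 1), repeat=n),
                key=lambda z: f(z) + lam * g(z))
        # g(x*) is a subgradient of the (concave) dual function at lam.
        lam = max(0.0, lam + (step0 / k) * g(x))   # diminishing step, project onto lam >= 0
    return lam, x
```

On min -3x0 - 2x1 subject to x0 + x1 <= 1, the iteration settles on the primal optimum (1, 0) with a multiplier in the optimal dual interval [2, 3].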
Directory of Open Access Journals (Sweden)
Yakai Xu
2017-01-01
Full Text Available Dynamic stiffness and damping of the headstock, which is a critical component of a precision horizontal machining center, are two main factors that influence machining accuracy and surface finish quality. The Constrained Layer Damping (CLD) structure is proved to be effective in raising the damping capacity of thin plate and shell structures. In this paper, one kind of high damping material is utilized on the headstock to improve damping capacity. The dynamic characteristic of the hybrid headstock is investigated analytically and experimentally. The results demonstrate that the resonant response amplitudes of the headstock with damping material can decrease significantly compared to the original cast structure. To obtain the optimal configuration of damping material, a topology optimization method based on Evolutionary Structural Optimization (ESO) is implemented. The Modal Strain Energy (MSE) method is employed to analyze the damping and to derive the sensitivity of the modal loss factor. The optimization results indicate that the added weight of damping material decreases by 50%; meanwhile the first two orders of modal loss factor decrease by less than 23.5% compared to the original structure.
Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco
2017-10-01
Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom are considered as part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are uncountable. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one of the possible applications of OSD, in particular how the extrusion bell shape, for pasta production, can be designed by applying a multivariate constrained shape optimization.
Directory of Open Access Journals (Sweden)
Mohammad-Reza Namazi-Rad
Full Text Available To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator.
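Under a simple normal (rather than truncated) transit-time model, the punctuality constraint pins down the largest admissible mean transit time; a sketch via bisection on the standard normal quantile (the function names and the plain-normal simplification are assumptions, not the paper's model):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def latest_mean_transit(deadline, sigma, punctuality, lo=-10.0, hi=10.0):
    """Largest mean transit time mu such that P(T <= deadline) >= punctuality
    for T ~ Normal(mu, sigma^2): find z with Phi(z) = punctuality by bisection,
    then mu = deadline - z * sigma."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < punctuality:
            lo = mid
        else:
            hi = mid
    return deadline - 0.5 * (lo + hi) * sigma
```

Raising the required punctuality level forces a smaller mean transit time, which is exactly the trade-off against speed-up cost that the framework evaluates.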
Ma, Jun; Chen, Si-Lu; Kamaldin, Nazir; Teo, Chek Sing; Tay, Arthur; Mamun, Abdullah Al; Tan, Kok Kiong
2017-11-01
The biaxial gantry is widely used in many industrial processes that require high-precision Cartesian motion. The conventional rigid-link version suffers from breaking down of joints if any de-synchronization between the two carriages occurs. To prevent this potential risk, a flexure-linked biaxial gantry is designed to allow a small rotation angle of the cross-arm. Nevertheless, the chattering of control signals and an inappropriate design of the flexure joint will possibly induce resonant modes of the end-effector. Thus, in this work, the design requirements in terms of tracking accuracy, biaxial synchronization, and resonant mode suppression are achieved by integrated optimization of the stiffness of the flexures and the PID controller parameters for a class of point-to-point reference trajectories with the same dynamics but different steps. From here, an H2 optimization problem with defined constraints is formulated, and an efficient iterative solver is proposed by hybridizing direct computation of the constrained projection gradient and a line search for the optimal step. Comparative experimental results obtained on the testbed are presented to verify the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
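A projected-gradient iteration with a line search, of the kind hybridized in this solver, can be sketched on a box-constrained quadratic, used here as a simple stand-in for the actual constrained H2 objective:

```python
import numpy as np

def projected_gradient(Q, b, lo, hi, iters=500):
    """Minimize 0.5 x'Qx - b'x subject to box bounds lo <= x <= hi:
    gradient step with an exact line search for the quadratic,
    followed by projection back onto the box."""
    x = np.clip(np.zeros_like(b), lo, hi)       # feasible starting point
    for _ in range(iters):
        g = Q @ x - b                           # gradient of the quadratic
        denom = g @ Q @ g
        alpha = (g @ g) / denom if denom > 0 else 1e-2   # exact step length
        x = np.clip(x - alpha * g, lo, hi)      # project onto the constraints
    return x
```

For a separable quadratic the box-constrained minimizer is simply the clipped unconstrained minimizer, which the iteration recovers.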
Optimal Load and Stiffness for Displacement-Constrained Vibration Energy Harvesters
Halvorsen, Einar
2016-01-01
The power electronic interface to a vibration energy harvester not only provides ac-dc conversion, but can also set the electrical damping to maximize output power under displacement-constrained operation. This is commonly exploited for linear two-port harvesters by synchronous switching to realize a Coulomb-damped resonant generator, but has not been fully explored when the harvester is asynchronously switched to emulate a resistive load. In order to understand the potential of such an approach, the optimal values of load resistance and other control parameters need to be known. In this paper we determine analytically the optimal load and stiffness of a harmonically driven two-port harvester with displacement constraints. For weak-coupling devices, we do not find any benefit of load and stiffness adjustment beyond maintaining a saturated power level. For strong coupling we find that the power can be optimized to agree with the velocity damped generator beyond the first critical force for displacement-constra...
Pseudo-time methods for constrained optimization problems governed by PDE
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step, since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
Analysis of Constrained Optimization Variants of the Map-Seeking Circuit Algorithm
Energy Technology Data Exchange (ETDEWEB)
S.R. Harker; C.R. Vogel; T. Gedeon
2005-09-05
The map-seeking circuit algorithm (MSC) was developed by Arathorn to efficiently solve the combinatorial problem of correspondence maximization, which arises in applications like computer vision, motion estimation, image matching, and automatic speech recognition [D. W. Arathorn, Map-Seeking Circuits in Visual Cognition: A Computational Mechanism for Biological and Machine Vision, Stanford University Press, 2002]. Given an input image, a template image, and a discrete set of transformations, the goal is to find a composition of transformations which gives the best fit between the transformed input and the template. We imbed the associated combinatorial search problem within a continuous framework by using superposition, and we analyze a resulting constrained optimization problem. We present several numerical schemes to compute local solutions, and we compare their performance on a pair of test problems: an image matching problem and the challenging problem of automatically solving a Rubik's cube.
Tran, Giang; Shi, Yonggang
2013-01-01
Diffusion imaging data from the Human Connectome Project (HCP) provides a great opportunity to map whole-brain white matter connectivity at unprecedented resolution in vivo. In this paper we develop a novel method for accurately reconstructing the fiber orientation distribution from cutting-edge diffusion data by solving the spherical deconvolution problem as a constrained convex optimization problem. With a set of adaptively selected constraints, our method allows the use of high-order spherical harmonics to reliably resolve crossing fibers with small separation angles. In our experiments, we demonstrate on simulated data that our algorithm outperforms a popular spherical deconvolution method in resolving fiber crossings. We also successfully applied our method to the multi-shell and diffusion spectrum imaging (DSI) data from HCP to demonstrate its ability to use state-of-the-art diffusion data to study complicated fiber structures.
Diminishing returns and tradeoffs constrain the laboratory optimization of an enzyme.
Tokuriki, Nobuhiko; Jackson, Colin J; Afriat-Jurnou, Livnat; Wyganowski, Kirsten T; Tang, Renmei; Tawfik, Dan S
2012-01-01
Optimization processes, such as evolution, are constrained by diminishing returns (the closer the optimum, the smaller the benefit per mutation) and by tradeoffs (improvement of one property at the cost of others). However, the magnitude and molecular basis of these parameters, and their effect on evolutionary transitions, remain unknown. Here we pursue a complete functional transition of an enzyme with a >10(9)-fold change in the enzyme's selectivity using laboratory evolution. We observed strong diminishing returns, with the initial mutations conferring >25-fold higher improvements than later ones, and asymmetric tradeoffs whereby the gain/loss ratio of the new/old activity decreased 400-fold from the beginning of the trajectory to its end. We describe the molecular basis for these phenomena and suggest they have an important role in shaping natural proteins. These findings also suggest that the catalytic efficiency and specificity of many natural enzymes may be far from their optimum.
A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality
Directory of Open Access Journals (Sweden)
Ma W-K
2006-01-01
Full Text Available The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
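The standard linearization behind such least-squares TOA positioning subtracts one anchor's range equation from the others, which cancels the quadratic term in the unknown position. A plain unweighted sketch (the CWLS estimator in the paper additionally weights and constrains this system):

```python
import numpy as np

def toa_position(anchors, dists):
    """Linearized least-squares position fix from TOA range estimates.
    ||p - a_i||^2 = d_i^2; subtracting the first anchor's equation gives the
    linear system 2 (a_i - a_0)' p = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    rhs = (d0**2 - dists[1:]**2
           + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos
```

With noise-free ranges from three non-collinear anchors, the linear system recovers the 2D position exactly.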
Isocyanide-based multicomponent reactions towards cyclic constrained peptidomimetics
Directory of Open Access Journals (Sweden)
Gijs Koopmanschap
2014-03-01
Full Text Available In the recent past, the design and synthesis of peptide mimics (peptidomimetics) has received much attention. This is because they have in many cases shown enhanced pharmacological properties over their natural peptide analogues. In particular, the incorporation of cyclic constructs into peptides is of high interest, as they reduce the flexibility of the peptide, often enhancing affinity for a certain receptor. Moreover, these cyclic mimics force the molecule into a well-defined secondary structure. Constrained structural and conformational features are often found in biologically active peptides. For the synthesis of cyclic constrained peptidomimetics, usually a sequence of multiple reactions has been applied, which makes it difficult to easily introduce the structural diversity necessary for fine-tuning the biological activity. A promising approach to tackle this problem is the use of multicomponent reactions (MCRs), because they can introduce both structural diversity and molecular complexity in only one step. Among the MCRs, the isocyanide-based multicomponent reactions (IMCRs) are most relevant for the synthesis of peptidomimetics because they provide peptide-like products. However, these IMCRs usually give linear products, and in order to obtain cyclic constrained peptidomimetics, the acyclic products have to be cyclized via additional cyclization strategies. This is possible via incorporation of bifunctional substrates into the initial IMCR. Examples of such bifunctional groups are N-protected amino acids, convertible isocyanides or MCR components that bear an additional alkene, alkyne or azide moiety and can be cyclized via either a deprotection–cyclization strategy, a ring-closing metathesis, a 1,3-dipolar cycloaddition or even via a sequence of multiple multicomponent reactions. The sequential IMCR-cyclization reactions can afford small cyclic peptide mimics (ranging from four- to seven-membered rings), medium-sized cyclic constructs or peptidic macrocycles.
Isocyanide-based multicomponent reactions towards cyclic constrained peptidomimetics.
Koopmanschap, Gijs; Ruijter, Eelco; Orru, Romano Va
2014-01-01
In the recent past, the design and synthesis of peptide mimics (peptidomimetics) has received much attention. This is because they have in many cases shown enhanced pharmacological properties over their natural peptide analogues. In particular, the incorporation of cyclic constructs into peptides is of high interest, as they reduce the flexibility of the peptide, often enhancing affinity for a certain receptor. Moreover, these cyclic mimics force the molecule into a well-defined secondary structure. Constrained structural and conformational features are often found in biologically active peptides. For the synthesis of cyclic constrained peptidomimetics, usually a sequence of multiple reactions has been applied, which makes it difficult to easily introduce the structural diversity necessary for fine-tuning the biological activity. A promising approach to tackle this problem is the use of multicomponent reactions (MCRs), because they can introduce both structural diversity and molecular complexity in only one step. Among the MCRs, the isocyanide-based multicomponent reactions (IMCRs) are most relevant for the synthesis of peptidomimetics because they provide peptide-like products. However, these IMCRs usually give linear products, and in order to obtain cyclic constrained peptidomimetics, the acyclic products have to be cyclized via additional cyclization strategies. This is possible via incorporation of bifunctional substrates into the initial IMCR. Examples of such bifunctional groups are N-protected amino acids, convertible isocyanides or MCR components that bear an additional alkene, alkyne or azide moiety and can be cyclized via either a deprotection-cyclization strategy, a ring-closing metathesis, a 1,3-dipolar cycloaddition or even via a sequence of multiple multicomponent reactions. The sequential IMCR-cyclization reactions can afford small cyclic peptide mimics (ranging from four- to seven-membered rings), medium-sized cyclic constructs or peptidic macrocycles (>12
DEFF Research Database (Denmark)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten Hartvig
2016-01-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. ... In the traditional tuning approaches, the properties of different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested ... to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate ...
Directory of Open Access Journals (Sweden)
Longfei He
2014-01-01
Full Text Available We focus on the joint production planning of complex supply chains facing stochastic demands and constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to research how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand link between an arbitrary pair of nodes in scenarios without or with carbon emission constraints. We design an optimization algorithm to obtain the joint optimal production quantities combination for maximizing overall profit under the regulatory policies, respectively. Furthermore, numerical studies featuring exponentially distributed demand compare systemwide performances in various scenarios. We build the “carbon emission elasticity of profit (CEEP)” index as a metric to evaluate the impact of regulatory policies on both chainwide emissions and profit. Our results manifest that by installing the mandatory emission cap properly within the network, one can balance effective emission reduction with an acceptable profit loss. The outcome that the CEEP index under a carbon emission tax is elastic implies that the scale of profit loss is greater than that of emission reduction, which shows that this policy is less effective than a mandatory cap, at least from an industry standpoint.
Smoothing neural network for constrained non-Lipschitz optimization with applications.
Bian, Wei; Chen, Xiaojun
2012-03-01
In this paper, a smoothing neural network (SNN) is proposed for a class of constrained non-Lipschitz optimization problems, where the objective function is the sum of a nonsmooth, nonconvex function and a non-Lipschitz function, and the feasible set is a closed convex subset of R^n. Using smoothing approximation techniques, the proposed neural network is modeled by a differential equation, which can be implemented easily. Under the level-bounded condition on the objective function in the feasible set, we prove the global existence and uniform boundedness of the solutions of the SNN with any initial point in the feasible set. The uniqueness of the solution of the SNN is provided under the Lipschitz property of the smoothing functions. We show that any accumulation point of the solutions of the SNN is a stationary point of the optimization problem. Numerical results including image restoration, blind source separation, variable selection, and condition number minimization are presented to illustrate the theoretical results and show the efficiency of the SNN. Comparisons with some existing algorithms show the advantages of the SNN.
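The smoothing idea can be illustrated on a one-dimensional nonsmooth objective: replace |x| by the differentiable sqrt(x^2 + mu^2) and descend while shrinking mu. This sketch uses plain gradient descent, not the paper's differential-equation network, and all names and schedule constants are illustrative:

```python
import math

def smoothed_abs(x, mu):
    """Smooth approximation of |x|: differentiable everywhere, error at most mu."""
    return math.sqrt(x * x + mu * mu)

def minimize_smoothed(grad_smooth, x0=2.0, mu=1.0, lr=0.1, outer=8, inner=500):
    """Gradient descent on a smoothed objective while shrinking mu,
    mimicking how a smoothing method approaches a nonsmooth minimizer."""
    x = x0
    for _ in range(outer):
        for _ in range(inner):
            x -= lr * grad_smooth(x, mu)   # descend on the mu-smoothed objective
        mu *= 0.1                          # tighten the approximation
    return x
```

For f(x) = |x| + (x - 1)^2, whose nonsmooth minimizer is x = 1/2, descending on the smoothed gradient x/sqrt(x^2 + mu^2) + 2(x - 1) converges to that point as mu shrinks.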
Tongue Images Classification Based on Constrained High Dispersal Network
Directory of Open Access Journals (Sweden)
Dan Meng
2017-01-01
Full Text Available Computer-aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by the deep convolutional neural network (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in the TCM study.
Morgenthaler, George; Khatib, Nader; Kim, Byoungsoo
with information to improve their crop's vigor has been a major topic of interest. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, the efficiency of farming must increase to meet future food requirements and to make farming a sustainable occupation for the farmer. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The goal is to increase farm revenue by increasing crop yield and decreasing applications of costly chemical and water treatments. In addition, this methodology will decrease the environmental costs of farming, i.e., reduce air, soil, and water pollution. Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now available. Commercial satellite systems can image (multi-spectrally) the Earth with a resolution of approximately 2.5 m. Variable precision dispensing systems using GPS are available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been formulated. Personal computers and internet access are in place in most farm homes and can provide a mechanism to periodically disseminate, e.g., bi-weekly, advice on what quantities of water and chemicals are needed in individual regions of the field. What is missing is a model that fuses the disparate sources of information on the current states of the crop and soil, and the remaining resource levels available, with the decisions farmers are required to make. This must be a product that is easy for the farmer to understand and to implement. A "Constrained Optimization Feed-back Control Model" to fill this void will be presented. The objective function of the model will be used to maximize the farmer's profit by increasing yields while decreasing environmental costs and decreasing
Moreenthaler, George W.; Khatib, Nader; Kim, Byoungsoo
2003-08-01
For two decades now, the use of Remote Sensing/Precision Agriculture to improve farm yields while reducing the use of polluting chemicals and the limited water supply has been a major goal. With world population growing exponentially, arable land being consumed by urbanization, and an unfavorable farm economy, farm efficiency must increase to meet future food requirements and to make farming a sustainable, profitable occupation. "Precision Agriculture" refers to a farming methodology that applies nutrients and moisture only where and when they are needed in the field. The real goal is to increase farm profitability by identifying the additional treatments of chemicals and water that increase revenues more than they increase costs and do not exceed pollution standards (constrained optimization). Even though the economic and environmental benefits appear to be great, Remote Sensing/Precision Agriculture has not grown as rapidly as early advocates envisioned. Technology for a successful Remote Sensing/Precision Agriculture system is now in place, but other needed factors have been missing. Commercial satellite systems can now image the Earth (multi-spectrally) with a resolution as fine as 2.5 m. Precision variable dispensing systems using GPS are now available and affordable. Crop models that predict yield as a function of soil, chemical, and irrigation parameter levels have been developed. Personal computers and internet access are now in place in most farm homes and can provide a mechanism for periodically disseminating advice on what quantities of water and chemicals are needed in specific regions of each field. Several processes have been selected that fuse the disparate sources of information on the current and historic states of the crop and soil, and the remaining resource levels available, with the critical decisions that farmers are required to make. These are done in a way that is easy for the farmer to understand and profitable to implement. A "Constrained
Automatic analog IC sizing and optimization constrained with PVT corners and layout effects
Lourenço, Nuno; Horta, Nuno
2017-01-01
This book introduces readers to a variety of tools for automatic analog integrated circuit (IC) sizing and optimization. The authors provide a historical perspective on the early methods proposed to tackle automatic analog circuit sizing, with emphasis on the methodologies to size and optimize the circuit, and on the methodologies to estimate the circuit’s performance. The discussion also includes robust circuit design and optimization and the most recent advances in layout-aware analog sizing approaches. The authors describe a methodology for an automatic flow for analog IC design, including details of the inputs and interfaces, multi-objective optimization techniques, and the enhancements made in the base implementation by using machine learning techniques. The Gradient model is discussed in detail, along with the methods to include layout effects in the circuit sizing. The concepts and algorithms of all the modules are thoroughly described, enabling readers to reproduce the methodologies, improve the qual...
Hortos, William S.
2003-07-01
Mobile ad hoc networking (MANET) supports self-organizing, mobile infrastructures and enables an autonomous network of mobile nodes that can operate without a wired backbone. Ad hoc networks are characterized by multihop, wireless connectivity via packet radios and by the need for efficient dynamic protocols. All routers are mobile and can establish connectivity with other nodes only when they are within transmission range. Importantly, ad hoc wireless nodes are resource-constrained, having limited processing, memory, and battery capacity. Delivery of high quality-of-service (QoS), real-time multimedia services from Internet-based applications over a MANET is a challenge not yet achieved by proposed Internet Engineering Task Force (IETF) ad hoc network protocols in terms of standard performance metrics such as end-to-end throughput, packet error rate, and delay. In the distributed operations of route discovery and maintenance, strong interaction occurs across MANET protocol layers, in particular the physical, media access control (MAC), network, and application layers. The QoS requirements are specified for the service classes by the application layer. The cross-layer design must also satisfy the battery-limited energy constraints, by minimizing the distributed power consumption at the nodes and of selected routes. Interactions across the layers are modeled in terms of the set of concatenated design parameters including associated energy costs. Functional dependencies of the QoS metrics are described in terms of the concatenated control parameters. New cross-layer designs are sought that optimize layer interdependencies to achieve the "best" QoS available in an energy-constrained, time-varying network. The protocol design, based on a reactive MANET protocol, adapts the provisioned QoS to dynamic network conditions and residual energy capacities. The cross-layer optimization is based on stochastic dynamic programming conditions derived from time-dependent models of
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P.; Montgomery, Claire A.; McKelvey, Kevin; Kendall, Katherine; Graves, Tabitha A.; Bernstein, Richard; Schwartz, Michael K.
2017-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. We believe that our approach will facilitate corridor conservation by reducing acquisition costs
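The trade-off the abstract describes, optimizing for each species separately versus jointly under one budget, can be illustrated with a deliberately tiny stand-in: candidate parcels treated as independent items with per-species benefits, selected exactly under a budget. The parcel costs and benefits below are invented for illustration; the authors' actual formulation optimizes connectivity over a spatial graph, which this sketch does not attempt.

```python
# Hypothetical parcels: (cost, benefit_species_A, benefit_species_B).
parcels = [(4, 5, 1), (3, 1, 6), (5, 4, 4), (2, 3, 2), (6, 7, 2)]
budget = 10

def best_selection(weight_a, weight_b):
    """Exhaustive search over parcel subsets (fine for tiny instances):
    maximize the weighted two-species benefit under the budget."""
    n, best = len(parcels), (0.0, ())
    for mask in range(1 << n):
        chosen = [i for i in range(n) if mask >> i & 1]
        cost = sum(parcels[i][0] for i in chosen)
        if cost > budget:
            continue
        val = sum(weight_a * parcels[i][1] + weight_b * parcels[i][2]
                  for i in chosen)
        best = max(best, (val, tuple(chosen)))
    return best

# Optimizing for each species alone versus jointly under the same budget:
print("species A only:", best_selection(1, 0))
print("species B only:", best_selection(0, 1))
print("joint         :", best_selection(0.5, 0.5))
```

Even at this scale, the single-species optima pick different parcel sets, while the joint weighting finds one set that serves both, mirroring the complementarity the paper reports.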
Risk Based Optimal Fatigue Testing
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.
1992-01-01
Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component, a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed, together with an optimal value of the maxi…
A Dynamic Economic Dispatch Model Incorporating Wind Power Based on Chance Constrained Programming
Directory of Open Access Journals (Sweden)
Wushan Cheng
2014-12-01
Full Text Available In order to maintain the stability and security of the power system, the uncertainty and intermittency of wind power must be taken into account in economic dispatch (ED) problems. In this paper, a dynamic economic dispatch (DED) model based on chance constrained programming is presented and an improved particle swarm optimization (PSO) approach is proposed to solve the problem. Wind power is regarded as a random variable and is included in the chance constraint. New formulations of the up and down spinning reserve constraints are presented in terms of expectations. The improved PSO algorithm combines a feasible-region adjustment strategy and a hill-climbing search operation with the basic PSO. Simulations are performed on three distinct test systems with different generators. Results show that both the proposed DED model and the improved PSO approach are effective.
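A minimal sketch of the core idea, embedding a Monte Carlo estimate of the chance constraint in a penalized basic PSO, might look as follows. The two-generator system, cost coefficients, wind distribution, and penalty weight are all invented for illustration and are not the paper's test systems; the feasible-region adjustment and hill-climbing operations are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-generator system (not from the paper): quadratic costs,
# wind power modeled as a clipped-normal random variable.
a = np.array([0.01, 0.015])   # quadratic cost coefficients
b = np.array([2.0, 1.8])      # linear cost coefficients
pmin, pmax = 10.0, 100.0      # generator limits
demand = 150.0
wind_samples = np.clip(rng.normal(30.0, 10.0, 2000), 0.0, 60.0)
alpha = 0.95                  # required probability of meeting demand

def cost(p):
    return np.sum(a * p**2 + b * p)

def penalty(p):
    # Monte Carlo estimate of P(thermal + wind >= demand); penalize shortfall.
    prob = np.mean(p.sum() + wind_samples >= demand)
    return 1e4 * max(0.0, alpha - prob)

def fitness(p):
    return cost(p) + penalty(p)

# Basic PSO over the two generator outputs.
n_particles, n_iter = 30, 200
x = rng.uniform(pmin, pmax, (n_particles, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, pmin, pmax)
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

prob = np.mean(gbest.sum() + wind_samples >= demand)
print(f"dispatch = {gbest.round(1)}, P(meet demand) = {prob:.3f}")
```

The large penalty weight pushes the swarm to dispatch enough thermal power that the empirical probability of covering demand reaches the target level, at which point the quadratic cost takes over.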
Formulation of image fusion as a constrained least squares optimization problem.
Dwork, Nicholas; Lasry, Eric M; Pauly, John M; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem.
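To make the pixel-level convex formulation concrete, here is a hypothetical variant with a closed-form solution: per pixel, minimize the squared mismatch between the fused image's intensity and the monochrome image, plus a proximity term to the upsampled color image. This is an illustrative simplification, not the paper's exact objective.

```python
import numpy as np

def fuse(color_lr, mono_hr, lam=1.0):
    """Fuse an (H, W, 3) upsampled color image with an (H, W) high-res
    monochrome image by minimizing, per pixel,
        (mean(x) - mono)^2 + lam * ||x - color||^2,
    a strictly convex problem whose minimizer has a closed form."""
    s_c = color_lr.mean(axis=2)                      # intensity of color input
    s = (3 * lam * s_c + mono_hr) / (3 * lam + 1)    # optimal fused intensity
    shift = (s - mono_hr) / (3 * lam)                # per-channel correction
    return color_lr - shift[..., None]

# Tiny synthetic example (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
color = rng.random((4, 4, 3))
mono = rng.random((4, 4))
fused = fuse(color, mono)
# The fused intensity is a convex combination of the two inputs' intensities.
print(np.allclose(fused.mean(axis=2), (3 * color.mean(axis=2) + mono) / 4))
```

Because every pixel is independent, the computation is embarrassingly parallel, which is the property the abstract highlights.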
Directory of Open Access Journals (Sweden)
Xuemei Sun
2015-01-01
Full Text Available Degree-constrained minimum spanning tree (DCMST) construction refers to building a spanning tree of minimum weight in a complete graph with edge weights such that the degree of each node in the spanning tree is no more than d (d ≥ 2). The paper proposes an improved multicolony ant algorithm for the degree-constrained minimum spanning tree search, which enables independent search for optimal solutions within each colony and information exchange between colonies based on information entropy. A local optimization algorithm is introduced to improve the constructed spanning tree. Meanwhile, strategies from the dynamic ant, random-perturbation ant colony, and max-min ant system algorithms are adopted to optimize the proposed algorithm. Finally, multiple groups of experimental data show the superiority of the improved algorithm in solving degree-constrained minimum spanning tree problems.
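The degree constraint itself is easy to demonstrate with a Prim-style greedy heuristic that refuses edges violating the bound; this is a baseline sketch, not the paper's multicolony ant algorithm.

```python
import numpy as np

def dcmst_greedy(weights, d):
    """Greedy Prim-style heuristic for the degree-constrained MST:
    grow a tree, always adding the cheapest edge whose in-tree endpoint
    still has degree < d. For d >= 2 the tree can always be extended."""
    n = len(weights)
    in_tree = {0}
    degree = [0] * n
    edges, total = [], 0.0
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            if degree[u] >= d:          # u is saturated, skip it
                continue
            for v in range(n):
                if v in in_tree:
                    continue
                w = weights[u][v]
                if best is None or w < best[0]:
                    best = (w, u, v)
        w, u, v = best
        in_tree.add(v)
        degree[u] += 1
        degree[v] += 1
        edges.append((u, v))
        total += w
    return edges, total

# Complete graph on 5 hypothetical nodes (random symmetric weights).
rng = np.random.default_rng(2)
W = rng.random((5, 5))
W = (W + W.T) / 2
edges, total = dcmst_greedy(W, d=2)
print(edges, round(total, 3))
```

With d = 2 the result is a Hamiltonian path; the ant-colony machinery in the paper exists precisely because such greedy trees are often far from the constrained optimum.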
A Bi-Level Optimization Model for Grouping Constrained Storage Location Assignment Problems.
Xie, Jing; Mei, Yi; Ernst, Andreas T; Li, Xiaodong; Song, Andy
2016-12-23
In this paper, a novel bi-level grouping optimization (BIGO) model is proposed for solving the storage location assignment problem with grouping constraint (SLAP-GC). A major challenge in this problem is the grouping constraint which restricts the number of groups each product can have and the locations of items in the same group. In SLAP-GC, the problem consists of two subproblems, one is how to group the items, and the other one is how to assign the groups to locations. It is an arduous task to solve the two subproblems simultaneously. To overcome this difficulty, we propose a BIGO. BIGO optimizes item grouping in the upper level, and uses the lower-level optimization to evaluate each item grouping. Sophisticated fitness evaluation and search operators are designed for both upper and lower level optimization so that the feasibility of solutions can be guaranteed, and the search can focus on promising areas in the search space. Based on the BIGO model, a multistart random search method and a tabu algorithm are proposed. The experimental results on the real-world dataset validate the efficacy of the BIGO model and the advantage of the tabu method over the random search method.
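The bi-level decomposition can be sketched in a few lines: the upper level proposes an item grouping, the lower level deterministically assigns groups to locations and returns a cost. The instance below (item pick frequencies, linear location costs, a busiest-group-first assignment rule) is invented for illustration and is much simpler than SLAP-GC.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical instance: 8 items with picking frequencies; locations have
# travel costs 1, 2, ..., 8; items of a group occupy consecutive slots.
freq = rng.integers(1, 10, 8)
loc_cost = np.arange(1, 9)

def lower_level(groups):
    """Lower-level optimization stand-in: assign groups to consecutive
    location blocks, busiest group first; return total expected travel
    cost, which serves as the grouping's fitness."""
    order = sorted(groups, key=lambda g: -freq[g].sum())
    cost, slot = 0.0, 0
    for g in order:
        for item in g:
            cost += freq[item] * loc_cost[slot]
            slot += 1
    return cost

def random_grouping(n_items, n_groups):
    labels = rng.integers(0, n_groups, n_items)
    return [np.where(labels == k)[0] for k in range(n_groups)]

# Upper level: multistart random search over groupings, as in the paper's
# simplest baseline (the tabu variant would instead mutate the incumbent).
best = min(lower_level(random_grouping(8, 3)) for _ in range(200))
print("best cost:", best)
```

The key structural point survives the simplification: the upper level never touches locations directly, so every grouping it proposes is evaluated through a feasible lower-level assignment.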
Energy Technology Data Exchange (ETDEWEB)
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software (PEST) to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as a basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences of 79% for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days, and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
Energy-Constrained Quality Optimization for Secure Image Transmission in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Wei Wang
2007-01-01
Full Text Available Resource allocation for multimedia selective encryption and energy-efficient transmission has not been fully investigated in the literature for wireless sensor networks (WSNs). In this article, we propose a new cross-layer approach to optimize selectively encrypted image transmission quality in WSNs under a strict energy constraint. A new selective image encryption approach favorable for unequal error protection (UEP) is proposed, which reduces encryption overhead considerably by controlling the structure of image bitstreams. Also, a novel cross-layer UEP scheme based on cipher-plaintext diversity is studied. In this UEP scheme, resources are unequally and optimally allocated across the encrypted bitstream structure, including data position information and magnitude value information. Simulation studies demonstrate that the proposed approach can simultaneously achieve improved image quality and assured energy efficiency with secure transmissions over WSNs.
Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David
1997-01-01
An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-04-28
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency, and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme.
Directory of Open Access Journals (Sweden)
Arnaut Dierck
2015-01-01
Full Text Available Designing textile antennas for real-life applications requires a design strategy that is able to produce antennas that are optimized over a wide bandwidth for often conflicting characteristics, such as impedance matching, axial ratio, efficiency, and gain, and, moreover, that is able to account for the variations that apply for the characteristics of the unconventional materials used in smart textile systems. In this paper, such a strategy, incorporating a multiobjective constrained Pareto optimization, is presented and applied to the design of a Galileo E6-band antenna with optimal return loss and wide-band axial ratio characteristics. Subsequently, different prototypes of the optimized antenna are fabricated and measured to validate the proposed design strategy.
Isotretinoin Oil-Based Capsule Formulation Optimization
Tsai, Pi-Ju; Huang, Chi-Te; Lee, Chen-Chou; Li, Chi-Lin; Huang, Yaw-Bin; Tsai, Yi-Hung; Wu, Pao-Chu
2013-01-01
The purpose of this study was to develop and optimize an isotretinoin oil-based capsule with a specific dissolution pattern. A three-factor constrained mixture design was used to prepare the systemic model formulations. The independent factors were the components of the oil-based capsule: beeswax (X1), hydrogenated coconut oil (X2), and soybean oil (X3). The drug release percentages at 10, 30, 60, and 90 min were selected as responses. The effect of the formulation factors on the responses was inspected using response surface methodology (RSM). Multiple-response optimization was performed to search for the appropriate formulation with the specific release pattern. It was found that the interaction effects of the formulation factors (X1X2, X1X3, and X2X3) showed more potential influence than the main factors (X1, X2, and X3). An optimal predicted formulation with Y10min, Y30min, Y60min, and Y90min release values of 12.3%, 36.7%, 73.6%, and 92.7% at X1, X2, and X3 levels of 5.75, 15.37, and 78.88, respectively, was developed. The new formulation was prepared and subjected to the dissolution test. The similarity factor f2 was 54.8, indicating that the dissolution pattern of the new optimized formulation was equivalent to the predicted profile. PMID:24068886
Vicario, Francesco; Albanese, Antonio; Karamolegkos, Nikolaos; Wang, Dong; Seiver, Adam; Chbat, Nicolas W
2016-04-01
This paper presents a method for breath-by-breath noninvasive estimation of respiratory resistance and elastance in mechanically ventilated patients. For passive patients, well-established approaches exist. However, when patients are breathing spontaneously, taking into account the diaphragmatic effort in the estimation process is still an open challenge. Mechanical ventilators require maneuvers to obtain reliable estimates for respiratory mechanics parameters. Such maneuvers interfere with the desired ventilation pattern to be delivered to the patient. Alternatively, invasive procedures are needed. The method presented in this paper is a noninvasive way requiring only measurements of airway pressure and flow that are routinely available for ventilated patients. It is based on a first-order single-compartment model of the respiratory system, from which a cost function is constructed as the sum of squared errors between model-based airway pressure predictions and actual measurements. Physiological considerations are translated into mathematical constraints that restrict the space of feasible solutions and make the resulting optimization problem strictly convex. Existing quadratic programming techniques are used to efficiently find the minimizing solution, which yields an estimate of the respiratory system resistance and elastance. The method is illustrated via numerical examples and experimental data from animal tests. Results show that taking into account the patient effort consistently improves the estimation of respiratory mechanics. The method is suitable for real-time patient monitoring, providing clinicians with noninvasive measurements that could be used for diagnosis and therapy optimization.
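A toy version of the estimation step, assuming the first-order single-compartment equation Paw = R·Q + E·V + P0, can be written with ordinary least squares. The breath waveform and parameter values below are synthetic; for clean passive data the unconstrained minimizer already satisfies the physiological sign constraints, so the constrained quadratic program of the paper reduces to plain least squares in this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic passive breath (hypothetical values): R in cmH2O*s/L,
# E in cmH2O/L, decelerating inspiratory flow over one second.
R_true, E_true, P0 = 10.0, 25.0, 5.0
t = np.linspace(0, 1, 100)
flow = 0.6 - 0.3 * t                      # L/s, varies so columns stay independent
volume = np.cumsum(flow) * (t[1] - t[0])  # L, numerical integral of flow
paw = R_true * flow + E_true * volume + P0 + rng.normal(0, 0.1, t.size)

# Least-squares fit of Paw = R*Q + E*V + P0. A QP solver would enforce
# R > 0, E > 0 when spontaneous effort makes the fit ill-conditioned.
A = np.column_stack([flow, volume, np.ones_like(t)])
(R_hat, E_hat, P0_hat), *_ = np.linalg.lstsq(A, paw, rcond=None)
print(f"R = {R_hat:.1f}, E = {E_hat:.1f}, P0 = {P0_hat:.1f}")
```

The paper's contribution lies exactly where this sketch stops: adding the diaphragmatic-effort term and the physiological inequality constraints that make the problem a strictly convex QP rather than an unconstrained fit.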
Zhang, Chenglong; Guo, Ping
2017-10-01
The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level, and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
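The LFP core of the model can be illustrated with the classical Charnes-Cooper transformation, which turns a ratio objective over a polyhedron into an ordinary linear program. The two-variable problem below is invented for illustration and is unrelated to the irrigation model; `scipy.optimize.linprog` is used as the LP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LFP (not the paper's model):
#   maximize (2*x1 + 3*x2) / (x1 + x2 + 1)
#   s.t.     x1 + 2*x2 <= 4,  x1, x2 >= 0
# Charnes-Cooper substitution y = t*x with t = 1/(x1 + x2 + 1) yields the LP:
#   maximize 2*y1 + 3*y2
#   s.t.     y1 + 2*y2 - 4*t <= 0,  y1 + y2 + t = 1,  y, t >= 0
res = linprog(c=[-2, -3, 0],               # linprog minimizes, so negate
              A_ub=[[1, 2, -4]], b_ub=[0],
              A_eq=[[1, 1, 1]], b_eq=[1],
              bounds=[(0, None)] * 3)
y1, y2, t = res.x
x = np.array([y1, y2]) / t                 # recover the original variables
print("x =", x.round(3), "ratio =", round(-res.fun, 3))
```

The optimal ratio here is 2 at x = (0, 2); in the paper this LP sits inside an outer layer that additionally handles the fuzzy credibility constraints.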
DEFF Research Database (Denmark)
Liu, Zhaoxi; Wu, Qiuwei; Oren, Shmuel S.
2017-01-01
This paper presents a distribution locational marginal pricing (DLMP) method through chance constrained mixed-integer programming designed to alleviate the possible congestion in the future distribution network with high penetration of electric vehicles (EVs). In order to represent the stochastic...
Constrained Optimal Stochastic Control of Non-Linear Wave Energy Point Absorbers
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Chen, Jian-Bing; Kramer, Morten
2014-01-01
to extract energy. Constraints are enforced on the control force to prevent large structural stresses in the floater at specific hot spots with the risk of inducing fatigue damage, or because the demanded control force cannot be supplied by the actuator system due to saturation. Further, constraints…
Directory of Open Access Journals (Sweden)
Xing Liu
2014-12-01
Full Text Available Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Different from a traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, the memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which decreases both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only the memory cost but also the energy cost is optimized in LiveOS, and this is achieved by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes.
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2014-12-23
Directory of Open Access Journals (Sweden)
Chocat Rudy
2015-01-01
Full Text Available The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to efficiently handle the constraints in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified. The constraint handling method reduces the semi-principal axes of the probable research ellipsoid in the directions violating the constraints. The proposed approach is compared to existing approaches on three analytic optimization problems to highlight its efficiency and robustness. The proposed method is used to design a two-stage solid-propulsion launch vehicle.
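A stripped-down evolution strategy, with a quadratic penalty standing in for the paper's ellipsoid-shrinking mechanism, shows the general shape of constrained (μ, λ) search. The problem, penalty weight, and step-size schedule below are all illustrative choices, not the launch-vehicle case.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy constrained problem: minimize ||x||^2 subject to x[0] >= 1;
# the constrained optimum is x = (1, 0) with f = 1.
def objective(x):
    return float(x @ x)

def violation(x):
    return max(0.0, 1.0 - x[0])

mu, lam, sigma = 5, 20, 0.3
mean = np.array([2.0, 2.0])

for _ in range(150):
    pop = mean + sigma * rng.standard_normal((lam, 2))
    # Penalized fitness: a simple stand-in for shrinking the sampling
    # ellipsoid along constraint-violating directions.
    fit = np.array([objective(x) + 1e3 * violation(x) ** 2 for x in pop])
    elite = pop[np.argsort(fit)[:mu]]       # (mu, lambda) truncation selection
    mean = elite.mean(axis=0)
    sigma *= 0.97                           # simple deterministic step decay

print("x* ~", mean.round(2), "f =", round(objective(mean), 3))
```

The CMA-ES adaptation in the paper replaces both crude elements here, the fixed penalty and the scalar step decay, with covariance-matrix updates that respond to constraint violations directly.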
DEFF Research Database (Denmark)
Tamas-Selicean, Domitian; Pop, Paul
2011-01-01
In this paper we are interested in implementing mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Tasks … slots on each processor and (iv) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real…
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
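The string-averaging idea can be shown on the smallest non-smooth convex problem, minimizing Σ|x − b_i|, whose minimizer is the median of the b_i: split the index set into strings, run an incremental subgradient pass along each, and average the string end-points. Step sizes and string count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Non-smooth convex toy problem: minimize sum_i |x - b_i|.
b = rng.normal(0.0, 1.0, 30)
strings = np.array_split(np.arange(30), 3)     # 3 strings of 10 indices

def subgrad(x, i):
    """A subgradient of |x - b_i| at x."""
    return np.sign(x - b[i])

x = 5.0
for k in range(1, 500):
    step = 1.0 / k                             # diminishing step size
    ends = []
    for s in strings:                          # strings could run in parallel
        y = x
        for i in s:                            # incremental subgradient pass
            y -= step * subgrad(y, i)
        ends.append(y)
    x = np.mean(ends)                          # string averaging

print("x =", round(float(x), 3), "median =", round(float(np.median(b)), 3))
```

In the tomographic setting of the paper, each index i is one measurement term of the reconstruction objective, and processing strings in parallel is what makes the scheme attractive at scale.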
A unified aggregation and relaxation approach for stress-constrained topology optimization
Verbart, A.; Langelaar, M.; van Keulen, A.
2017-01-01
In this paper, we propose a unified aggregation and relaxation approach for topology optimization with stress constraints. Following this approach, we first reformulate the original optimization problem with a design-dependent set of constraints into an equivalent optimization problem with a fixed
Optimization of a constrained linear monochromator design for neutral atom beams.
Kaltenbacher, Thomas
2016-04-01
A focused ground-state, neutral atom beam, exploiting its de Broglie wavelength by means of atom optics, is used for neutral atom microscopy imaging. Employing Fresnel zone plates as a lens for these beams is a well-established microscopy technique. To date, even for favorable beam source conditions, a minimal focus spot size of slightly below 1 μm has been reached. This limitation is essentially set by the intrinsic spectral purity of the beam in combination with the chromatic aberration of the diffraction-based zone plate. Therefore, it is important to enhance the monochromaticity of the beam, enabling a higher spatial resolution, preferably below 100 nm. We propose to increase the monochromaticity of a neutral atom beam by means of a so-called linear monochromator set-up - a Fresnel zone plate in combination with a pinhole aperture - in order to gain more than one order of magnitude in spatial resolution. This configuration is known in X-ray microscopy and has proven to be useful, but has not been applied to neutral atom beams. The main result of this work is a set of optimal, model-based design parameters for this linear monochromator set-up followed by a second zone plate for focusing. The optimization simultaneously minimizes the focal spot size and maximizes the centre-line intensity at the detector position. The results presented in this work are for, but not limited to, a neutral helium atom beam.
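The chromatic-aberration argument can be made concrete with a short back-of-the-envelope calculation (all numbers illustrative, not the paper's design values): the de Broglie wavelength of a helium atom, a thin-zone-plate focal length f = D·Δr/λ, and the focal shift implied by a given beam speed spread Δv/v.

```python
h = 6.62607015e-34          # Planck constant, J*s
m_he = 6.6464731e-27        # helium-4 mass, kg

def de_broglie(v):
    """De Broglie wavelength (m) of a helium atom at speed v (m/s)."""
    return h / (m_he * v)

# Typical supersonic helium beam speed (illustrative value).
v = 1770.0
lam = de_broglie(v)

# Zone plate focal length f = D * dr / lam (D: diameter, dr: outermost
# zone width). Since lam ~ 1/v, the relative wavelength spread equals the
# relative speed spread, and the chromatic focal shift is f * dv/v.
D, dr = 0.5e-3, 50e-9       # hypothetical zone-plate geometry
f = D * dr / lam
for spread in (0.05, 0.01):   # speed ratio before/after a monochromator
    print(f"dv/v = {spread:.0%}: chromatic focal shift ~ {spread * f * 1e3:.1f} mm")
print(f"lambda = {lam * 1e9:.3f} nm, f = {f * 1e3:.1f} mm")
```

Cutting the speed spread by a factor of five shrinks the chromatic focal shift by the same factor, which is the lever the proposed zone-plate-plus-pinhole monochromator pulls.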
Directory of Open Access Journals (Sweden)
Nebojsa Bacanin
2014-01-01
portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
Directory of Open Access Journals (Sweden)
Zhenggang Du
2015-03-01
Full Text Available To improve models for accurate projections, data assimilation, an emerging statistical approach to combining models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predicting future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information contained in flux- and biometric-based data to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed to constrain model parameters by a probabilistic inversion, using either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost ineffective for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was evident in constraining most parameters. The poor constraint by NEE or biometric data alone was probably attributable to either the lack of long-term C dynamics data or measurement errors. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
Zhang, Xiaomin; Ren, Kan; Wan, Minjie; Gu, Guohua; Chen, Qian
2017-12-01
Infrared search and track technology for small targets plays an important role in infrared warning and guidance. In view of the tracking randomness and uncertainty caused by background clutter and noise interference, a robust tracking method for infrared small targets based on sample-constrained particle filtering and sparse representation is proposed in this paper. Firstly, to distinguish the normal region from the interference region in target sub-blocks, we introduce a binary support vector and combine it with the target sparse representation model, after which a particle filtering observation model based on sparse reconstruction error differences between sample targets is developed. Secondly, we utilize saliency extraction to obtain the high-frequency area in the infrared image and use it as a priori knowledge in the transition probability model to limit the particle filtering sampling process. Lastly, the tracking result is obtained via target state estimation and Bayesian posterior probability calculation. Theoretical analyses and experimental results show that our method can enhance the state estimation ability of stochastic particles, improve the sparse representation adaptability for infrared small targets, and improve the tracking accuracy for infrared small moving targets.
Optimization of constrained multiple-objective reliability problems using evolutionary algorithms
Energy Technology Data Exchange (ETDEWEB)
Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es
2006-09-15
This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.
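The core of the non-dominated sorting used by NSGA-II is the Pareto dominance test between objective vectors. A minimal sketch (illustrative names, minimization convention; not the full crowding-distance machinery of NSGA-II):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For a redundancy problem the two objectives could be (cost, unreliability); the front returned is the "complete picture" of trade-offs the abstract refers to.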
Directory of Open Access Journals (Sweden)
Seyed Hossein Nikokalam-Mozafar
2014-12-01
Full Text Available This paper presents a stochastic bi-objective model for a single-allocation hub covering problem (HCP) with variable capacity and uncertain parameters. As a strategic decision, locating hubs can influence the performance of hub-and-spoke networks. The presented model simultaneously optimizes two objectives: minimizing the total transportation cost and minimizing the maximum transportation time from an origin to a destination. Then, due to the NP-hardness of the multi-objective chance-constrained HCP, the presented model is solved by a well-known meta-heuristic, namely multi-objective invasive weed optimization. Additionally, the associated results are compared with those of a well-known multi-objective evolutionary algorithm, namely the non-dominated sorting genetic algorithm. Furthermore, the computational results of the foregoing algorithms are reported in terms of four well-known metrics, namely quality, spacing, diversification, and mean ideal distance. Finally, conclusions are reported.
Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring
Directory of Open Access Journals (Sweden)
Naixue Xiong
2017-01-01
Full Text Available Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
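The ADMM subproblem associated with an L1 term (such as the kernel-intensity penalty above) has a closed-form solution: the soft-thresholding (shrinkage) operator. A minimal element-wise sketch of that one building block, not the paper's full deblurring solver:

```python
def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: shrink each entry toward zero
    by tau, zeroing anything with magnitude below tau. This is the
    z-update in a typical ADMM splitting of an L1-regularized problem."""
    return [max(abs(x) - tau, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]
```

In an ADMM loop this step alternates with a (quadratic) data-fidelity update and a dual-variable update; the L1 part is what promotes the spatial sparsity of the kernel.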
A multi-agent technique for contingency constrained optimal power flows
Energy Technology Data Exchange (ETDEWEB)
Talukdar, S.; Ramesh, V.C. (Carnegie Mellon Univ., Pittsburgh, PA (United States). Engineering Design Research Center)
1994-05-01
This paper does three things. First, it proposes that each critical contingency in a power system be represented by a "correction time" (the time required to eliminate the violations produced by the contingency), rather than by a set of hard constraints. Second, it adds these correction times to an optimal power flow and decomposes the resulting problem into a number of smaller optimization problems. Third, it proposes a multi-agent technique for solving the smaller problems in parallel. The agents encapsulate traditional optimization algorithms as well as a new algorithm, called the voyager, that generates starting points for the traditional algorithms. All the agents communicate asynchronously, meaning that they can work in parallel without ever interrupting or delaying one another. The resulting scheme has potential for handling power system contingencies and other difficult global optimization problems.
Direct Speed Control of PMSM Drive Using SDRE and Convex Constrained Optimization
Czech Academy of Sciences Publication Activity Database
Šmídl, V.; Janouš, Š.; Adam, Lukáš; Peroutka, Z.
2018-01-01
Roč. 65, č. 1 (2018), s. 532-542 ISSN 1932-4529 Grant - others: GA MŠk(CZ) LO1607 Institutional support: RVO:67985556 Keywords: Velocity control * Optimization * Stators * Voltage control * Predictive control * Optimal control * Rotors Subject RIV: BD - Theory of Information Impact factor: 10.710, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0481225.pdf
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-01-01
Integration of production planning and scheduling is a class of problems commonly found in manufacturing industry. This class of problems associated with precedence constraint has been previously modeled and optimized by the authors, in which, it requires a multidimensional optimization at the same time: what to make, how many to make, where to make and the order to make. It is a combinatorial, NP-hard problem, for which no polynomial time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, GA with new features in chromosome encoding, crossover, mutation, selection as well as algorithm structure is developed herein. With the proposed structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which performance of the proposed GA is compared with those of three commercial optimization solvers.
Population Set based Optimization Method
Manekar, Y.; Verma, H. K.
2013-09-01
In this paper a population set-based optimization method is proposed for solving some benchmark functions and also for solving an optimal power flow problem, namely the combined economic and emission dispatch (CEED) problem with multiple objective functions. The algorithm takes into consideration all the equality and inequality constraints. The improvement in system performance is based on the reduction in the cost of power generation and in active power loss. The proposed algorithm has been compared with other methods such as GA and PSO reported in the literature. The results are impressive and encouraging. The study results show that the proposed method yields better solutions for CEED problems.
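A common way for a population-based method to handle the CEED constraints is a penalized weighted-sum fitness. The sketch below is illustrative only (the function names, the weighting, and the penalty scheme are assumptions, not the paper's exact formulation):

```python
def ceed_fitness(p, demand, cost_fns, emis_fns, pmin, pmax, w=0.5, rho=1e3):
    """Penalized scalarization for combined economic-emission dispatch.

    p: generator outputs (MW); w blends fuel cost vs. emission;
    rho penalizes power-balance (equality) and limit (inequality) violations.
    """
    cost = sum(f(x) for f, x in zip(cost_fns, p))       # total fuel cost
    emis = sum(g(x) for g, x in zip(emis_fns, p))       # total emission
    balance = abs(sum(p) - demand)                      # equality violation
    bounds = sum(max(lo - x, 0.0) + max(x - hi, 0.0)    # limit violations
                 for x, lo, hi in zip(p, pmin, pmax))
    return w * cost + (1 - w) * emis + rho * (balance + bounds)
```

A candidate dispatch that satisfies the power balance and all generator limits is scored purely on the blended cost/emission objective; any violation is driven out by the large `rho` term.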
FIR filter-based online jerk-constrained trajectory generation
BESSET, Pierre; BEAREE, Richard
2017-01-01
International audience; In the context of human-robot manipulation interaction for service or industrial robotics, the robot controller must be able to quickly react to unpredictable events in dynamic environments. In this paper, a FIR filter-based trajectory generation methodology is presented, combining the simplicity of the analytic second-order trajectory generation, i.e. acceleration-limited trajectory, with the flexibility and computational efficiency of FIR filtering, to generate on th...
Optimal Coordinated EV Charging with Reactive Power Support in Constrained Distribution Grids
Energy Technology Data Exchange (ETDEWEB)
Paudyal, Sumit; Ceylan, Oğuzhan; Bhattarai, Bishnu P.; Myers, Kurt S.
2017-07-01
Electric vehicle (EV) charging/discharging can take place in any P-Q quadrant, which means EVs could supply reactive power to the grid while charging the battery. In controlled charging schemes, the distribution system operator (DSO) coordinates the charging of EV fleets to ensure that the grid's operating constraints are not violated. In effect, this means the DSO sets upper bounds on the power limits for EV charging. In this work, we demonstrate that if EVs inject reactive power into the grid while charging, the DSO can issue higher upper bounds on the active power limits for the EVs under the same set of grid constraints. We demonstrate the concept on a 33-node test feeder with 1,500 EVs. Case studies show that in constrained distribution grids with coordinated charging, the average cost of EV charging can be reduced if the charging takes place in the fourth P-Q quadrant rather than at unity power factor.
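On the charger side, the coupling between reactive support and charging power comes from the apparent-power rating: with |P + jQ| ≤ S, fixing a reactive setpoint Q determines the largest feasible active power. A minimal sketch of that rating limit (the paper's binding bounds additionally come from the DSO's network constraints, which this does not model):

```python
import math

def max_active_power(s_rating, q):
    """Largest charging power P (kW) satisfying the charger's
    apparent-power limit sqrt(P^2 + Q^2) <= S (kVA)."""
    if abs(q) > s_rating:
        raise ValueError("reactive setpoint exceeds apparent-power rating")
    return math.sqrt(s_rating ** 2 - q ** 2)
```

Fourth-quadrant operation (P > 0, Q < 0, i.e. charging while injecting reactive power) trades some of this headroom for grid support; the paper's result is that the network-side bound can rise by more than the rating-side bound falls.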
Security-Constrained Unit Commitment Based on a Realizable Energy Delivery Formulation
Directory of Open Access Journals (Sweden)
Hongyu Wu
2012-01-01
Full Text Available Security-constrained unit commitment (SCUC) is an important tool for independent system operators in the day-ahead electric power market. A serious issue arises in that the energy realizability of the staircase generation schedules obtained in traditional SCUC cannot be guaranteed. This paper focuses on addressing this issue, and the basic idea is to formulate the power output of thermal units as a piecewise-linear function. All individual unit constraints and systemwide constraints are then reformulated. The new SCUC formulation is solved within the Lagrangian relaxation (LR) framework, in which a double dynamic programming method is developed to solve individual unit subproblems. Numerical testing is performed for a 6-bus system and an IEEE 118-bus system on the Microsoft Visual C# .NET platform. It is shown that the energy realizability of generation schedules obtained from the new formulation is guaranteed. A comparative case study is conducted between LR and mixed integer linear programming (MILP) in solving the new formulation. Numerical results show that a near-optimal solution can be obtained efficiently by the proposed LR-based method.
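The realizability gap can be seen in how hourly energy is computed from a schedule: a staircase schedule treats each hour's power as constant, while a ramp-limited unit moving between hourly setpoints actually delivers the trapezoidal average. A minimal illustration (setpoints assumed at hour boundaries; not the paper's full formulation):

```python
def staircase_energy(levels):
    """Hourly energy (MWh) if power is held constant at each scheduled level."""
    return list(levels)

def piecewise_linear_energy(setpoints):
    """Hourly energy (MWh) if power ramps linearly between consecutive
    hour-boundary setpoints: the trapezoidal average of each pair."""
    return [(a + b) / 2.0 for a, b in zip(setpoints, setpoints[1:])]
```

For setpoints [100, 200, 200] MW the staircase schedule promises 100 MWh in the first hour, but a linearly ramping unit actually delivers 150 MWh; this mismatch is exactly the unrealizable energy the piecewise-linear formulation eliminates.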
Titan TTCN-3 Based Test Framework for Resource Constrained Systems
Directory of Open Access Journals (Sweden)
Yushev Artem
2016-01-01
Full Text Available Wireless communication systems are becoming more and more a part of our daily life. Especially with the Internet of Things (IoT), overall connectivity increases rapidly, since everyday objects become part of the global network. For this purpose several new wireless protocols have arisen, among which 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) can be seen as one of the most important protocols in this sector. Originally designed on top of the IEEE 802.15.4 standard, it is subject to various adaptations that allow the use of 6LoWPAN over different technologies, e.g. DECT Ultra Low Energy (ULE). Although this high connectivity offers a lot of new possibilities, there are several requirements and pitfalls coming along with such new systems. With an increasing number of connected devices, interoperability between different providers is one of the biggest challenges, which makes it necessary to verify the functionality and stability of the devices and the network. Therefore testing becomes one of the key components that decides on the success or failure of such a system. Although several protocol implementations are commonly available, e.g. for IoT-based systems, there is still a lack of corresponding tools and environments for functional and conformance testing. This article describes the architecture and functioning of the proposed test framework based on Testing and Test Control Notation Version 3 (TTCN-3) for 6LoWPAN over ULE networks.
Rule-based spatial modeling with diffusing, geometrically constrained molecules
Directory of Open Access Journals (Sweden)
Lohel Maiko
2010-06-01
Full Text Available Abstract Background We suggest a new type of modeling approach for the coarse-grained, particle-based spatial simulation of combinatorially complex chemical reaction systems. In our approach molecules possess a location in the reactor as well as an orientation and geometry, while the reactions are carried out according to a list of implicitly specified reaction rules. Because the reaction rules can contain patterns for molecules, a combinatorially complex or even infinitely sized reaction network can be defined. For our implementation (based on LAMMPS), we have chosen an already existing formalism (BioNetGen) for the implicit specification of the reaction network. This compatibility allows existing models to be imported easily, i.e., only additional geometry data files have to be provided. Results Our simulations show that the obtained dynamics can be fundamentally different from those of simulations that use classical reaction-diffusion approaches like partial differential equations or Gillespie-type spatial stochastic simulation. We show, for example, that the combination of combinatorial complexity and geometric effects leads to the emergence of complex self-assemblies and transportation phenomena happening faster than diffusion (using a model of molecular walkers on microtubules). When the mentioned classical simulation approaches are applied, these aspects of modeled systems cannot be observed without very special treatment. Furthermore, we show that the geometric information can even change the organizational structure of the reaction system. That is, a set of chemical species that can in principle form a stationary state in a differential equation formalism is potentially unstable when geometry is considered, and vice versa. Conclusions We conclude that our approach provides a new general framework filling a gap between approaches with no or rigid spatial representation like partial differential equations and specialized coarse-grained spatial
Preconditioners for state-constrained optimal control problems with Moreau-Yosida penalty function
Pearson, John W.
2012-11-21
Optimal control problems with partial differential equations as constraints play an important role in many applications. The inclusion of bound constraints for the state variable poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven to be advantageous compared with other approaches. In this paper, we develop robust preconditioners for the efficient solution of the Newton steps associated with the fast solution of the Moreau-Yosida regularized problem. Numerical results illustrate the efficiency of our approach. © 2012 John Wiley & Sons, Ltd.
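In a standard form of this construction (a generic sketch of the usual Moreau-Yosida penalization, not necessarily the exact formulation of the paper), the pointwise state bound $y \le \psi$ is moved into the objective as a quadratic penalty:

```latex
\min_{y,u}\; J(y,u) \;+\; \frac{\gamma}{2}\,\bigl\|\max\{0,\; y-\psi\}\bigr\|_{L^2(\Omega)}^2
\qquad \text{s.t.} \qquad e(y,u) = 0,
```

where $e(y,u)=0$ denotes the PDE constraint, the $\max$ is taken pointwise, and the penalty parameter $\gamma>0$ is driven large. The Newton steps mentioned in the abstract arise from a semismooth treatment of the nonsmooth $\max$ term, and it is the resulting linear systems that the proposed preconditioners target.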
Constraining Binary Asteroid Mass Distributions Based On Mutual Motion
Davis, Alex B.; Scheeres, Daniel J.
2017-06-01
The mutual gravitational potential and torques of binary asteroid systems result in a complex coupling of attitude and orbital motion based on the mass distribution of each body. For a doubly-synchronous binary system, observations of the mutual motion can be leveraged to identify and measure the unique mass distributions of each body. By implementing arbitrary shape and order computation of the full two-body problem (F2BP) equilibria, we study the influence of asteroid asymmetries on the separation and orientation of a doubly-synchronous system. Additionally, simulations of binary systems perturbed from doubly-synchronous behavior are studied to understand the effects of mass distribution perturbations on precession and nutation rates, such that unique behaviors can be isolated and used to measure asteroid mass distributions. We apply our investigation to the Trojan binary asteroid system 617 Patroclus and Menoetius (1906 VY), which will be the final flyby target of the recently announced LUCY Discovery mission in March 2033. This binary asteroid system is of particular interest due to the results of a recent stellar occultation study (DPS 46, id.506.09) that suggests the system to be doubly-synchronous and consisting of two similarly sized oblate ellipsoids, in addition to suggesting the presence of mass asymmetries resulting from an impact crater on the southern limb of Menoetius.
Optimization of the box-girder of overhead crane with constrained ...
African Journals Online (AJOL)
haroun
An optimal design for a minimum-weight crane girder can reduce the manufacturing and operating costs. The box-girder is modeled ... All bats use echolocation to sense distance, and they also know the difference between food/prey and barriers ...
Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems
DEFF Research Database (Denmark)
Izosimov, Viacheslav; Pop, Paul; Eles, Petru
2005-01-01
In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating tr...
Fast Bundle-Level Type Methods for Unconstrained and Ball-Constrained Convex Optimization
2014-12-01
Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools
Amlie, Thomas T.
2009-01-01
A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
Optimization-Based Layout Design
Directory of Open Access Journals (Sweden)
K. Abdel-Malek
2005-01-01
Full Text Available The layout problem is of importance to ergonomists, vehicle/cockpit packaging engineers, designers of manufacturing assembly lines, designers concerned with the placement of levers, knobs, controls, etc. in the reachable workspace of a human, and also to users of digital human modeling code, where digital prototyping has become a valuable tool. This paper proposes a hybrid optimization method (gradient-based optimization combined with simulated annealing) to obtain the layout design. We implemented the proposed algorithm for a project at Oral-B Laboratories, where a manufacturing cell involves an operator who handles three objects, some with the left hand, others with the right hand.
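The simulated-annealing half of such a hybrid can be sketched as a generic Metropolis-style loop (all parameters here are hypothetical defaults; in a hybrid scheme a gradient-based local search would typically polish the returned point):

```python
import math
import random

def anneal(f, x0, neighbor, t0=1.0, cooling=0.95, iters=200, rng=random):
    """Minimal simulated annealing: always accept improvements, accept
    worse moves with probability exp(-delta/T), and cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest, t = x, fx, t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy            # accept the move
            if fx < fbest:
                best, fbest = x, fx  # track the incumbent
        t *= cooling
    return best, fbest
```

The acceptance of occasional uphill moves is what lets the annealer escape the local minima that a purely gradient-based layout search would get stuck in.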
Simulation-based optimization parametric optimization techniques and reinforcement learning
Gosavi, Abhijit
2003-01-01
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
A self-constrained inversion of magnetic data based on correlation method
Sun, Shida; Chen, Chao
2016-12-01
Geologically-constrained inversion is a powerful method for producing geologically reasonable solutions in geophysical exploration problems. In many cases, however, apart from the observed geophysical data to be inverted, insufficient geological information is available to improve the reliability of the recovered models. To deal with such situations, self-constraints extracted by preprocessing the observed data have been applied to constrain the inversion. In this paper, we present a self-constrained inversion method based on a correlation method. In our approach the correlation results are first obtained by calculating the cross-correlation between theoretical data and horizontal gradients of the observed data. Subsequently, we propose two specific strategies to extract the spatial variation from the correlation results and translate it into spatial weighting functions. Incorporating the spatial weighting functions into the model objective function, we obtain self-constrained solutions with higher reliability. We present two synthetic examples and one field magnetic data example to test the validity. All results demonstrate that the solution from our self-constrained inversion can delineate the geological bodies with clearer boundaries and much more concentrated physical properties.
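The correlation step can be sketched with a normalized cross-correlation against the horizontal gradient of the observed profile. A 1-D illustration with hypothetical helper names (the paper works on gridded magnetic data, and its weighting-function construction is more elaborate than this):

```python
import math

def horizontal_gradient(d, dx=1.0):
    """Central-difference horizontal gradient of a 1-D data profile."""
    return [(d[i + 1] - d[i - 1]) / (2.0 * dx) for i in range(1, len(d) - 1)]

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals, in [-1, 1]."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

Regions where the theoretical response correlates strongly with the observed gradient would receive larger weights in the model objective function, concentrating the recovered property there.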
Stress-constrained truss topology optimization problems that can be solved by linear programming
DEFF Research Database (Denmark)
Stolpe, Mathias; Svanberg, Krister
2004-01-01
We consider the problem of simultaneously selecting the material and determining the area of each bar in a truss structure in such a way that the cost of the structure is minimized subject to stress constraints under a single load condition. We show that such problems can be solved by linear programming to give the global optimum, and that two different materials are always sufficient in an optimal structure.
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Farouq, S.; Neytcheva, M.
2017-01-01
Roč. 310, January 2017 (2017), s. 5-18 ISSN 0377-0427 R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : optimal control * time-harmonic Stokes problem * preconditioning Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716302631?via%3Dihub
2013-08-01
... extends the purely analytical cases to model a CAV in flight about a spherical rotating Earth. Further expansion of the CAV re-entry problem added ... the difference in booster and re-entry profiles is seen in Figure 8; the GPOPS optimal solution lowers the perigee heat flux rate by approximately 20 percent.
Affording and Constraining Local Moral Orders in Teacher-Led Ability-Based Mathematics Groups
Tait-McCutcheon, Sandi; Shuker, Mary Jane; Higgins, Joanna; Loveridge, Judith
2015-01-01
How teachers position themselves and their students can influence the development of afforded or constrained local moral orders in ability-based teacher-led mathematics lessons. Local moral orders are the negotiated discursive practices and interactions of participants in the group. In this article, the developing local moral orders of 12 teachers…
Paravidino, M.; Scheffelaar, R.; Schmitz, R.F.; de Kanter, F.J.J.; Groen, M.B.; Ruijter, E.; Orru, R.V.A.
2007-01-01
Highly functionalized and conformationally constrained depsipeptides based on a dihydropyridin-2-one core are prepared by the combination of a four- and a three-component reaction. The synthesis combines a one-pot Horner-Wadsworth-Emmons/cyclocondensation sequence
Directory of Open Access Journals (Sweden)
Junlong Zhu
2017-01-01
Full Text Available We consider a distributed constrained optimization problem over a time-varying network, where each agent knows only its own cost functions and its constraint set. However, the local constraint set may not be known in advance, or may consist of a huge number of components in some applications. To deal with such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, in which each agent's estimate is projected onto its constraint set using a random projection technique, and information exchange between agents is implemented via an asynchronous broadcast communication protocol. We show that our proposed algorithm converges with probability 1 for a suitably chosen learning rate. For a constant learning rate, we obtain an error bound, defined as the expected distance between the agents' estimates and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
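A single iteration of the underlying projected-subgradient scheme can be sketched as follows, with a simple box set standing in for the (possibly randomly sampled) constraint component; this is illustrative only, and omits the paper's averaging of broadcast neighbor estimates:

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n, a stand-in for the
    projection onto one randomly selected constraint component."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def subgradient_step(x, g, k, lo, hi):
    """One projected-subgradient iteration with diminishing rate 1/sqrt(k):
    move against a subgradient g, then project back onto the feasible box."""
    rate = 1.0 / (k ** 0.5)
    return project_box([v - rate * gi for v, gi in zip(x, g)], lo, hi)
```

With a diminishing step size the iterates of such schemes converge for convex costs, which is the single-agent analogue of the probability-1 convergence claimed above.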
Shao, H.; Huang, Y.; Kolditz, O.
2015-12-01
Multiphase flow problems are numerically difficult to solve, as they often contain nonlinear phase transition phenomena. A conventional technique is to introduce complementarity constraints, under which fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints, based on the persistent primary variables formulation [4], are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2],[3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and the advantages and disadvantages of the different models are also summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34, (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media, 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in
Wing Rib Stress Analysis and Design Optimization Using Constrained Natural Element Method
Amine Bennaceur, Mohamed; Xu, Yuan-ming; Layachi, Hemza
2017-09-01
This paper demonstrates the applicability of a novel meshless method to problems in aeronautical engineering. The constrained natural element method is used to optimize a wing rib presenting several shapes of cut-outs, and based on the resulting findings we select the optimum design. We focus on the description and analysis of the constrained natural element method and its application to simulating mechanical problems. The constrained natural element method is an alternative to the finite element method in which the shape functions are constructed on an extension of the Voronoi diagram, the dual of the Delaunay tessellation, for non-convex domains.
Energy Technology Data Exchange (ETDEWEB)
Tupek, Michael R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-06-30
In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
Lemanski, Natalie J; Fefferman, Nina H
2017-06-01
Honeybees are an excellent model system for examining how trade-offs shape reproductive timing in organisms with seasonal environments. Honeybee colonies reproduce in two ways: by producing swarms comprising a queen and thousands of workers, or by producing males (drones). There is an energetic trade-off between producing workers, which contribute to colony growth, and drones, which contribute only to reproduction. The timing of drone production therefore determines both the drones' likelihood of mating and when colonies reach sufficient size to swarm. Using a linear programming model, we ask when a colony should produce drones and swarms to maximize reproductive success. We find the optimal behavior for each colony is to produce all drones prior to swarming, an impossible solution on a population scale because queens and drones would never co-occur. Reproductive timing is therefore not solely determined by energetic trade-offs but by the game theoretic problem of coordinating the production of reproductives among colonies.
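The linear programming framing can be sketched with a toy two-period model (all coefficients here are invented for illustration and are not the authors' actual parameters): each period's energy budget is split between workers, which enlarge the next period's budget, and drones, which contribute reproductive value directly.

```python
from scipy.optimize import linprog

# decision variables: x = [w1, d1, w2, d2] (workers/drones in periods 1 and 2)
# all coefficients below are invented for illustration
B, drone_cost, growth = 10.0, 2.0, 0.5

# period-1 energy:  w1 + 2*d1          <= B
# period-2 energy:  w2 + 2*d2 - g*w1   <= B   (early workers grow the budget)
A_ub = [[1.0, drone_cost, 0.0, 0.0],
        [-growth, 0.0, 1.0, drone_cost]]
b_ub = [B, B]

# maximize reproductive value: drones count fully, late workers a little
# (linprog minimizes, so the objective is negated)
c = [0.0, -1.0, -0.2, -1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, None)] * 4, method="highs")
w1, d1, w2, d2 = res.x
```

With these particular coefficients the LP puts the entire budget into drones; raising the worker growth coefficient shifts effort toward early workers, which is the trade-off the abstract describes.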
Ellis, John; Pilaftsis, Apostolos
2010-01-01
This note presents an analytic construction of the optimal unit-norm direction hat{x} = x/|x| that maximizes or minimizes the linear objective B . hat{x}, subject to a system of linear constraints of the form [A] . x = 0, where x is an unknown n-dimensional real vector to be determined up to an overall normalization constant, 0 is an m-dimensional null vector, and the n-dimensional real vector B and the m x n-dimensional real matrix [A] (with m = 2) are given. The analytic solution to this problem can be expressed in terms of a combination of double wedge and Hodge-star products of differential forms.
Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation
F. Cong (Fei); C.W. Oosterlee (Cornelis)
2016-01-01
We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,
Fuzzy chance constrained linear programming model for scrap charge optimization in steel production
DEFF Research Database (Denmark)
Rong, Aiying; Lahdelma, Risto
2008-01-01
…the crisp equivalent of the fuzzy constraints should be less relaxed than one purely based on the concept of soft constraints. Based on the application context, we adopt a strengthened version of soft constraints to interpret fuzzy constraints and form a crisp model with consistent and compact constraints...
How optimal stimuli for sensory neurons are constrained by network architecture.
DiMattina, Christopher; Zhang, Kechen
2008-03-01
Identifying the optimal stimuli for a sensory neuron is often a difficult process involving trial and error. By analyzing the relationship between stimuli and responses in feedforward and stable recurrent neural network models, we find that the stimulus yielding the maximum firing rate response always lies on the topological boundary of the collection of all allowable stimuli, provided that individual neurons have increasing input-output relations or gain functions and that the synaptic connections are convergent between layers with nondegenerate weight matrices. This result suggests that in neurophysiological experiments under these conditions, only stimuli on the boundary need to be tested in order to maximize the response, thereby potentially reducing the number of trials needed for finding the most effective stimuli. Even when the gain functions allow firing rate cutoff or saturation, a peak still cannot exist in the stimulus-response relation in the sense that moving away from the optimum stimulus always reduces the response. We further demonstrate that the condition for nondegenerate synaptic connections also implies that proper stimuli can independently perturb the activities of all neurons in the same layer. One example of this type of manipulation is changing the activity of a single neuron in a given processing layer while keeping that of all others constant. Such stimulus perturbations might help experimentally isolate the interactions of selected neurons within a network.
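A toy numerical check of this boundary property (restricted, for ease of verification, to a monotone special case with positive weights; the result in the paper covers general nondegenerate weight matrices):

```python
import numpy as np

# toy feedforward network: 2-D stimulus -> 4 hidden units -> 1 output neuron
# positive weights make the response strictly increasing in each stimulus
# component, a special case in which the boundary property is easy to see
rng = np.random.default_rng(0)
W1 = rng.uniform(0.1, 1.0, size=(4, 2))   # input -> hidden (nondegenerate)
w2 = rng.uniform(0.1, 1.0, size=4)        # hidden -> output

def response(x):
    # increasing gain function (tanh) at every stage
    return np.tanh(w2 @ np.tanh(W1 @ x))

# brute-force search over a grid of allowable stimuli in the box [-1, 1]^2
grid = np.linspace(-1.0, 1.0, 21)
best, best_x = -np.inf, None
for a in grid:
    for b in grid:
        r = response(np.array([a, b]))
        if r > best:
            best, best_x = r, (a, b)
```

Because the response is strictly increasing in each stimulus component here, the grid search necessarily stops at the corner (1, 1) of the stimulus box, i.e. on the boundary of the allowable set.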
Directory of Open Access Journals (Sweden)
Shi Chen-guang
2014-08-01
Full Text Available A novel optimal power allocation algorithm for radar network systems is proposed for Low Probability of Intercept (LPI) technology in modern electronic warfare. The algorithm is based on LPI optimization. First, the Schleher intercept factor for a radar network is derived, and then the Schleher intercept factor is minimized by optimizing the transmission power allocation among netted radars in the network to guarantee target-tracking performance. Furthermore, the Nonlinear Programming Genetic Algorithm (NPGA) is used to solve the resulting nonconvex, nonlinear, and constrained optimization problem. Numerical simulation results show the effectiveness of the proposed algorithm.
Application of Teaching Learning Based Optimization in antenna designing
Directory of Open Access Journals (Sweden)
S. Dwivedi
2015-07-01
Full Text Available Numerous optimization techniques are studied and applied to antenna designs to optimize various performance parameters. The authors used many Multiple Attribute Decision Making (MADM) methods, including the Weighted Sum Method (WSM), Weighted Product Method (WPM), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), Analytic Hierarchy Process (AHP), ELECTRE, etc. Of these MADM methods, TOPSIS and AHP are the most widely used. Both TOPSIS and AHP are logical decision making approaches that deal with the problem of choosing an alternative from a set of alternatives characterized in terms of some attributes. The Analytic Hierarchy Process (AHP) is explained in detail and compared with WSM and WPM. The authors finally used the Teaching-Learning-Based Optimization (TLBO) technique, a novel method for constrained antenna design optimization problems.
Lifecycle-Based Swarm Optimization Method for Numerical Optimization
Directory of Open Access Journals (Sweden)
Hai Shen
2014-01-01
Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; moreover, the species gains a stronger ability to adapt to the environment and achieves better evolution. LSO simulates the biological lifecycle process through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases, to demonstrate optimal performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.
Reliability-based optimization of engineering structures
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2008-01-01
The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization...-assessment and a reliability-based decision problem for offshore wind turbines....
Diwale, Sanket Sanjay; Lymperopoulos, Ioannis; Jones, Colin
2015-01-01
Airborne wind energy systems are built to exploit the stronger and more consistent wind available at high altitudes that conventional wind turbines cannot reach. This however requires a reliable controller design that can keep the airborne system flying for long durations in varying environmental conditions, while respecting all operational constraints. A frequent design for such a system includes a flying airfoil tethered to a ground station. We demonstrate an on-line data based method that ...
Discrete Variables Function Optimization Using Accelerated Biogeography-Based Optimization
Lohokare, M. R.; Pattnaik, S. S.; Devi, S.; Panigrahi, B. K.; Das, S.; Jadhav, D. G.
Biogeography-Based Optimization (BBO) is a bio-inspired, population-based optimization algorithm, formulated mainly to optimize functions of discrete variables. However, the convergence of BBO to the optimum value is slow, as it lacks exploration ability. The proposed Accelerated Biogeography-Based Optimization (ABBO) technique is an improved version of BBO. In this paper, the authors accelerate the original BBO to enhance its exploitation and exploration ability through a modified mutation operator and a clear duplicate operator, which significantly improves the convergence characteristics of the original algorithm. To validate the performance of ABBO, experiments were conducted on unimodal and multimodal benchmark functions of discrete variables. The results show excellent performance when compared with other modified BBOs and other optimization techniques such as the stud genetic algorithm (SGA) and ant colony optimization (ACO). The results are also analyzed using paired t-tests.
Duality based contact shape optimization
DEFF Research Database (Denmark)
Vondrák, Vít; Dostal, Zdenek; Rasmussen, John
2001-01-01
An implementation of a semi-analytic method for the sensitivity analysis in contact shape optimization without friction is described. This method is then applied to the contact shape optimization.
Directory of Open Access Journals (Sweden)
Stanimirović Ivan
2009-01-01
Full Text Available We introduce a heuristic method for the single resource constrained project scheduling problem, based on the dynamic programming solution of the knapsack problem. This method schedules projects with one type of resource in the non-preemptive case: once started, an activity is not interrupted and runs to completion. We compare the implementation of this method with the well-known heuristic scheduling method called Minimum Slack First (also known as the Gray-Kidd algorithm), as well as with Microsoft Project.
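The dynamic-programming knapsack kernel underlying such a heuristic can be sketched as follows (activity values and resource demands are illustrative; at each scheduling step, eligible activities compete for the single renewable resource):

```python
def knapsack(values, weights, capacity):
    # classic 0/1 knapsack solved by dynamic programming:
    # dp[c] = best total value achievable with capacity c
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# at one scheduling step: pick eligible activities (value = priority,
# weight = resource demand) to fill the single renewable resource
print(knapsack([60, 100, 120], [10, 20, 30], 50))   # -> 220
```

A scheduler built on this would re-run the selection at each decision point over the activities whose predecessors have completed, which is the role the knapsack solution plays in the heuristic above.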
Palacios, S.G.
2015-01-01
In health facilities in resource-constrained settings, a lack of access to sustainable and reliable electricity can result in sub-optimal delivery of healthcare services, as they do not have lighting for medical procedures or power to run essential equipment and devices to treat their patients.
Huang, Y L; Huang, G H; Liu, D F; Zhu, H; Sun, W
2012-10-15
Although integrated simulation and optimization approaches under stochastic uncertainty have been applied to eutrophication management problems, few studies have been reported in eutrophication control planning where multiple formats of uncertainties and nonlinearities are addressed in the forms of intervals and probabilistic distributions within an integrated framework. Since the impounding of the Three Gorges Reservoir (TGR), China in 2003, the hydraulic conditions and aquatic environment of the Xiangxi Bay (XXB) have changed significantly. The resulting emergence of eutrophication and algal blooms has deteriorated its water quality, making the XXB an ideal case study area. Thus, a simulation-based inexact chance-constrained nonlinear programming (SICNP) model is developed and applied to eutrophication control planning in the XXB of the TGR under uncertainties. In the SICNP, the wastewater treatment costs for removing total phosphorus (TP) are set as the objective function; effluent discharge standards, stream water quality standards and eutrophication control standards are considered in the constraints; a steady-state simulation model for phosphorus transport and fate is embedded in the environmental standards constraints; and the interval programming and chance-constrained approaches are integrated to provide not only interval decision variables but also the associated risk levels of violating the system constraints. The model results indicate that changes in the violation level (q) result in different strategy distributions at spatial and temporal scales; the optimal value of the cost objective changes from [2.74, 13.41] million RMB to [2.25, 13.08] million RMB as q increases from 0.01 to 0.25; and the required TP treatment efficiency for the Baisha plant is the most stringent, followed by the Xiakou Town and the Zhaojun Town, while the requirement for the Pingyikou cement plant is the least stringent. The model results are useful for making optimal policies on eutrophication
Mapping constrained optimization problems to quantum annealing with application to fault diagnosis
Directory of Open Access Journals (Sweden)
Aidan Roy
2016-07-01
Full Text Available Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured, as hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints, and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of the D-Wave hardware using the local mapping technique is significantly better than global embedding. We validate the approach by applying D-Wave's QA hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, with an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware.
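The chaining idea can be illustrated with a brute-force toy Ising model (the fields and couplings below are invented for illustration, not taken from the paper): spins 0 and 1 are physical copies of one logical variable used in two different constraints, and a sufficiently strong ferromagnetic coupling between them forces agreement in every ground state.

```python
import itertools

def ising_energy(h, J, s):
    # E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, spins s_i in {-1, +1}
    return (sum(h[i] * s[i] for i in h)
            + sum(J[i, j] * s[i] * s[j] for (i, j) in J))

def ground_states(h, J, n):
    # exhaustive search over all 2^n spin assignments (fine for tiny n)
    best, states = float("inf"), []
    for s in itertools.product([-1, 1], repeat=n):
        e = ising_energy(h, J, s)
        if e < best - 1e-12:
            best, states = e, [s]
        elif abs(e - best) < 1e-12:
            states.append(s)
    return best, states

# logical variable x is represented by physical spins 0 and 1 (one per constraint)
h = {0: 0.6, 1: -0.4, 2: -1.0}   # invented local fields from constraint penalties
J = {(0, 1): -2.0,               # ferromagnetic chain: forces s0 == s1
     (1, 2): 0.8}                # invented coupling from the second constraint
energy, states = ground_states(h, J, 3)
for s in states:
    assert s[0] == s[1]          # chained copies agree in every ground state
```

If the chain coupling were weakened below the competing field strengths, the copies could disagree in a ground state and the embedding would no longer represent the logical variable faithfully.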
Detection of arc fault based on frequency constrained independent component analysis
Yang, Kai; Zhang, Rencheng; Xu, Renhao; Chen, Yongzhi; Yang, Jianhong; Chen, Shouhong
2015-02-01
Arc faults are one of the main causes of electrical fires. As a result of the weakness, randomness and cross talk of arc faults, very few methods have been successfully used to protect loads from all arc faults in low-voltage circuits. Therefore, a novel detection method based on frequency constrained independent component analysis is developed. In the derivation of the method, a band-pass filter is introduced as a constraint condition to separate the independent components of mixed signals. Although the fault mixed signals were subject to strong background noise and frequency aliasing, the effective high frequency components of arc faults could be separated by frequency constrained independent component analysis. Based on the separated components, their power spectrums were calculated to classify the normal and arc fault conditions. The validity of the developed method was verified on an arc fault experimental platform. The results show that arc faults of nine typical electrical loads are successfully detected based on frequency constrained independent component analysis.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-04-01
Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework
Directory of Open Access Journals (Sweden)
Kyungsung An
2017-05-01
Full Text Available This research aims to improve the operational efficiency and security of electric power systems at high renewable penetration by exploiting the envisioned controllability or flexibility of electric vehicles (EVs); EVs interact with the grid through grid-to-vehicle (G2V) and vehicle-to-grid (V2G) services to ensure reliable and cost-effective grid operation. This research provides a computational framework for this decision-making process. Charging and discharging strategies of EV aggregators are incorporated into a security-constrained optimal power flow (SCOPF) problem such that overall energy cost is minimized and operation within acceptable reliability criteria is ensured. In particular, this SCOPF problem has been formulated for Jeju Island in South Korea, which aims to lower carbon emissions toward a zero-carbon island by, for example, integrating large-scale renewable energy and EVs. On top of conventional constraints on the generators and line flows, a unique constraint on the system inertia constant, interpreted as the minimum synchronous generation, is considered to ensure grid security at high renewable penetration. The available energy constraint of the participating EVs, associated with the state-of-charge (SOC) of the battery, and the market price-responsive behavior of the EV aggregators are also explored. Case studies for the Jeju electric power system in 2030 under various operational scenarios demonstrate the effectiveness of the proposed method and the improved operational flexibility afforded by controllable EVs.
Daneshian, Jahanbakhsh; Ramezani Dana, Leila; Sadler, Peter
2017-01-01
Benthic foraminifera species commonly outnumber planktic species in the type area of the Lower Miocene Qom Formation, in north central Iran, where it records the Tethyan link between the eastern Mediterranean and Indo-Pacific provinces. Because measured sections preserve very different sequences of first and last occurrences of these species, no single section provides a completely suitable baseline for correlation. To resolve this problem, we combined bioevents from three stratigraphic sections into a single composite sequence by constrained optimization (CONOP). The composite section arranges the first and last appearance events (FAD and LAD) of 242 foraminifera in an optimal order that minimizes the implied diachronism between sections. The composite stratigraphic ranges of the planktic foraminifera support a practical biozonation which reveals substantial local changes of accumulation rate during Aquitanian to Burdigalian times. Traditional biozone boundaries emerge little changed, but an order of magnitude more correlations can be interpolated. The top of the section at Dobaradar is younger than previously thought and younger than the sections at Dochah and Tigheh Reza-Abad. The latter two sections probably extend further back into the Aquitanian than the Dobaradar section, but likely include a hiatus near the base of the Burdigalian. The bounding contacts with the Upper Red and Lower Red Formations are shown to be diachronous.
Slope constrained Topology Optimization
DEFF Research Database (Denmark)
Petersson, J.; Sigmund, Ole
1998-01-01
…pointwise bounds on the density slopes. A finite element discretization procedure is described, and a proof of convergence of finite element solutions to exact solutions is given, as well as numerical examples obtained by a continuation/SLP (sequential linear programming) method. The convergence proof…
Multiobjective Optimization Based Vessel Collision Avoidance Strategy Optimization
Directory of Open Access Journals (Sweden)
Qingyang Xu
2014-01-01
Full Text Available Vessel collision accidents cause great loss of life and property. In order to reduce human error and greatly improve the safety of marine traffic, collision avoidance strategy optimization is proposed. In the paper, the multiobjective optimization algorithm NSGA-II is adopted to search for the optimal collision avoidance strategy, considering both the safety and the economy elements of collision avoidance. Ship domain and Arena are used to evaluate the collision risk in the simulation. Based on the optimization, an optimal rudder angle is recommended to the navigator for collision avoidance. In the simulation example, a crossing encounter situation is simulated, and NSGA-II searches for the optimal collision avoidance operation under the Convention on the International Regulations for Preventing Collisions at Sea (COLREGS). The simulation studies exhibit the validity of the method.
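The ranking step at the heart of NSGA-II sorts candidate strategies into successive Pareto fronts. A minimal sketch (the (collision risk, course deviation) objective pairs are made up; both objectives are minimized):

```python
def dominates(a, b):
    # minimization: a dominates b if no worse in every objective, better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    # peel off successive Pareto fronts, as in NSGA-II's ranking step
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# hypothetical (collision risk, course deviation) pairs, both to be minimized
strategies = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(non_dominated_fronts(strategies))   # -> [[0, 1, 2], [3], [4]]
```

NSGA-II then applies crowding-distance selection within each front; the first front here is the trade-off set from which a rudder angle would be recommended.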
Reliability-Based Optimization in Structural Engineering
DEFF Research Database (Denmark)
Enevoldsen, I.; Sørensen, John Dalsgaard
1994-01-01
In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: reliability-based optimal design of structural systems with component or systems reliability constraints, reliability-based optimal inspection planning, and reliability-based experiment planning. It is explained how these optimization problems can be solved by application of similar techniques. The reliability estimation is limited to first order reliability methods (FORM) for both component and systems reliability evaluation. The solution strategies applying first order non-linear optimization algorithms are described in detail with special attention to sensitivity analysis and stability of the optimization process. Furthermore, several practical aspects are treated, such as the development of the reliability-based optimization model...
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is the power economic dispatch (ED) for each unit. An accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC given fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed for solving ED as linear programming problems obtained by piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
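The piecewise linear idea can be sketched in the spirit of the Power Increment Method (unit data and prices below are invented): each unit's output above its minimum is split into ordered segments with increasing incremental costs, so ED reduces to a linear program, and convexity guarantees the LP fills cheaper segments first.

```python
from scipy.optimize import linprog

# two units; convex fuel cost approximated by 2 linear segments each
# (Pmin, segment widths in MW, incremental costs in $/MWh -- all invented)
units = [
    dict(pmin=10.0, widths=[20.0, 20.0], rates=[12.0, 18.0]),
    dict(pmin=20.0, widths=[30.0, 30.0], rates=[10.0, 16.0]),
]
demand = 100.0

# one LP variable per segment: the power increment taken on that segment
c, bounds = [], []
for u in units:
    c += u["rates"]                       # incremental cost of each segment
    bounds += [(0.0, w) for w in u["widths"]]

# power balance: total increments cover demand above the Pmin baseline
A_eq = [[1.0] * len(c)]
b_eq = [demand - sum(u["pmin"] for u in units)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
p1 = units[0]["pmin"] + res.x[0] + res.x[1]   # dispatched output of unit 1
p2 = units[1]["pmin"] + res.x[2] + res.x[3]   # dispatched output of unit 2
```

With these numbers the LP loads the cheap segments in merit order (30 MW at 10, 20 MW at 12, 20 MW at 16), giving a segment cost of 860 on top of the Pmin baseline.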
A RSSI-based parameter tracking strategy for constrained position localization
Du, Jinze; Diouris, Jean-François; Wang, Yide
2017-12-01
In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where positions are constrained to a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
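A sketch of how LMS can track channel parameters, assuming the common log-distance path-loss form RSSI(d) = A - 10 n log10(d); the step size, noise level, and "true" parameters below are invented, not taken from the paper:

```python
import math
import random

def rssi_model(A, n, d):
    # assumed log-distance path-loss model (d0 = 1 m reference):
    # RSSI(d) = A - 10 n log10(d)
    return A - 10.0 * n * math.log10(d)

def lms_track(samples, mu=0.01, A=-40.0, n=2.0):
    # LMS: one stochastic-gradient step per (distance, RSSI) measurement
    for d, r in samples:
        x = -10.0 * math.log10(d)     # regressor multiplying n
        err = r - (A + n * x)         # prediction error
        A += mu * err                 # step along -grad of err^2 w.r.t. A
        n += mu * err * x             # step along -grad of err^2 w.r.t. n
    return A, n

# synthetic noisy measurements from invented "true" channel parameters
random.seed(0)
true_A, true_n = -45.0, 2.5
samples = [(d, rssi_model(true_A, true_n, d) + random.gauss(0.0, 0.5))
           for d in (random.uniform(1.0, 20.0) for _ in range(3000))]
A_est, n_est = lms_track(samples)
```

As measurements stream in, the estimates drift toward the true (A, n); the tracked parameters would then feed the trilateration step that converts RSSI to distances.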
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Directory of Open Access Journals (Sweden)
Jianzhong Wang
Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
Interactive Reliability-Based Optimal Design
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.
1994-01-01
Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided into four main... be used in interactive optimization....
Mang, A; Toma, A; Schuetz, T A; Becker, S; Buzug, T M
2012-01-01
In the present paper a novel computational framework for modeling tumor induced brain deformation as a biophysical prior for non-rigid image registration is described. More precisely, we aim at providing a generic building block for non-rigid image registration that can be used to resolve inherent irregularities in non-diffeomorphic registration problems that naturally arise in serial and cross-population brain tumor imaging studies due to the presence (or progression) of pathology. The model for the description of brain cancer dynamics on a tissue level is based on an initial boundary value problem (IBVP). The IBVP follows the accepted assumption that the progression of primary brain tumors on a tissue level is governed by proliferation and migration of cancerous cells into surrounding healthy tissue. The model of tumor induced brain deformation is phrased as a parametric, constrained optimization problem. As a basis of comparison, and to demonstrate generalizability, additional soft constraints (penalties) are considered. A back-tracking line search is implemented in conjunction with a limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method in order to handle the numerically delicate log-barrier strategy for confining volume change. Numerical experiments are performed to test the flexible control of the computed deformation patterns in terms of varying model parameters. The results are qualitatively and quantitatively related to patterns in patient individual magnetic resonance imaging data. Numerical experiments demonstrate the flexible control of the computed deformation patterns. This in turn strongly suggests that the model can be adapted to patient individual imaging patterns of brain tumors. Qualitative and quantitative comparison of the computed cancer profiles to patterns in medical imaging data of an exemplary patient demonstrates plausibility. The designed optimization problem is based on computational tools widely used in non-rigid image
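The numerically delicate part, a log-barrier combined with a back-tracking line search, can be illustrated on a one-dimensional toy problem (plain gradient descent instead of LBFGS, with invented parameters): minimize (x - 2)^2 subject to x < 1, the constraint enforced by the barrier term -mu*log(1 - x).

```python
import math

def barrier_objective(x, mu):
    # f(x) = (x - 2)^2 plus a log-barrier enforcing x < 1
    return (x - 2.0) ** 2 - mu * math.log(1.0 - x)

def barrier_gradient(x, mu):
    return 2.0 * (x - 2.0) + mu / (1.0 - x)

def minimize_with_barrier(mu, x=0.0, tol=1e-10, max_iter=10000):
    for _ in range(max_iter):
        g = barrier_gradient(x, mu)
        if abs(g) < tol:
            break
        step = 1.0
        # back-tracking line search: shrink the step until the trial point
        # stays inside the barrier domain (x < 1) AND satisfies Armijo decrease
        while (x - step * g >= 1.0 or
               barrier_objective(x - step * g, mu)
               > barrier_objective(x, mu) - 1e-4 * step * g * g):
            step *= 0.5
        x -= step * g
    return x

x_star = minimize_with_barrier(mu=0.01)
# the barrier keeps every iterate strictly feasible; for small mu the
# minimizer approaches the constraint boundary x = 1 from below
```

The back-tracking loop plays the same role as in the paper: it rejects steps that would leave the barrier's domain before testing for sufficient decrease. Shrinking mu pushes the solution closer to the constraint boundary.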
Reachable Distance Space: Efficient Sampling-Based Planning for Spatially Constrained Systems
Xinyu Tang,
2010-01-25
Motion planning for spatially constrained robots is difficult due to additional constraints placed on the robot, such as closure constraints for closed chains or requirements on end-effector placement for articulated linkages. It is usually computationally too expensive to apply sampling-based planners to these problems since it is difficult to generate valid configurations. We overcome this challenge by redefining the robot's degrees of freedom and constraints into a new set of parameters, called reachable distance space (RD-space), in which all configurations lie in the set of constraint-satisfying subspaces. This enables us to directly sample the constrained subspaces with complexity linear in the number of the robot's degrees of freedom. In addition to supporting efficient sampling of configurations, we show that the RD-space formulation naturally supports planning and, in particular, we design a local planner suitable for use by sampling-based planners. We demonstrate the effectiveness and efficiency of our approach for several systems including closed chain planning with multiple loops, restricted end-effector sampling, and on-line planning for drawing/sculpting. We can sample single-loop closed chain systems with 1,000 links in time comparable to open chain sampling, and we can generate samples for 1,000-link multi-loop systems of varying topologies in less than a second. © 2010 The Author(s).
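The reachable-distance construction can be sketched for a planar single-loop closed chain. The code below is an illustrative reading of the idea, not the authors' implementation: each joint's distance to the base is sampled inside the intersection of the triangle inequality with the previous joint and the span of the remaining links, so every sample is closable and the cost is linear in the number of links:

```python
import random

def reach_interval(lengths):
    # Distances a serial chain with these link lengths can span:
    # [max(0, 2*max - sum), sum]
    total = sum(lengths)
    return max(0.0, 2.0 * max(lengths) - total), total

def sample_rd_space(lengths, rng=random.Random(0)):
    """Sample reachable distances r[i] = |joint i - base| for a planar
    single-loop closed chain. Each r[i] is drawn from the triangle
    inequality with the previous joint intersected with the span of
    the remaining links, so the loop always remains closable."""
    n = len(lengths)
    r = [0.0] * (n + 1)          # r[0] = r[n] = 0: the loop returns to base
    r[1] = lengths[0]
    for i in range(2, n):
        lo = abs(r[i - 1] - lengths[i - 1])
        hi = r[i - 1] + lengths[i - 1]
        rlo, rhi = reach_interval(lengths[i:])
        r[i] = rng.uniform(max(lo, rlo), min(hi, rhi))
    return r

links = [1.0] * 1000             # a 1,000-link single loop
r = sample_rd_space(links)
# Each link must satisfy its triangle inequality with the base
closable = all(abs(r[i] - r[i + 1]) - 1e-9 <= links[i] <= r[i] + r[i + 1] + 1e-9
               for i in range(len(links)))
```

With 1,000 unit links the validation confirms each link still satisfies its triangle inequality, i.e. a planar joint placement realizing the sampled distances exists.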
Electrochemical model based charge optimization for lithium-ion batteries
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery based on an electrochemical battery model that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm that would allow higher charging rates without compromising the internal electrochemical kinetics of the battery, which would prevent abusive conditions, thereby improving the long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Stochastic Fractal Based Multiobjective Fruit Fly Optimization
Directory of Open Access Journals (Sweden)
Zuo Cili
2017-06-01
Full Text Available The fruit fly optimization algorithm (FOA) is a global optimization algorithm inspired by the foraging behavior of a fruit fly swarm. In this study, a novel stochastic fractal model based fruit fly optimization algorithm is proposed for multiobjective optimization. A food source generating method based on a stochastic fractal with an adaptive parameter updating strategy is introduced to improve the convergence performance of the fruit fly optimization algorithm. To deal with multiobjective optimization problems, the Pareto domination concept is integrated into the selection process of fruit fly optimization and a novel multiobjective fruit fly optimization algorithm is then developed. As in most other multiobjective evolutionary algorithms (MOEAs), an external elitist archive is utilized to preserve the nondominated solutions found so far during the evolution, and a normalized nearest neighbor distance based density estimation strategy is adopted to keep the diversity of the external elitist archive. Eighteen benchmarks are used to test the performance of the stochastic fractal based multiobjective fruit fly optimization algorithm (SFMOFOA). Numerical results show that the SFMOFOA is able to converge well to the Pareto fronts of the test benchmarks with good distributions. Compared with four state-of-the-art methods, namely the non-dominated sorting genetic algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2), multi-objective particle swarm optimization (MOPSO), and multiobjective self-adaptive differential evolution (MOSADE), the proposed SFMOFOA has better or competitive multiobjective optimization performance.
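The two multiobjective ingredients named in the abstract, Pareto-domination-based selection and an elitist archive pruned by a nearest-neighbour density estimate, can be sketched as follows (minimization, invented objective vectors; the distance is left un-normalized for brevity):

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization): never worse, better somewhere
    return (all(u <= v for u, v in zip(a, b))
            and any(u < v for u, v in zip(a, b)))

def update_archive(archive, candidate, max_size=5):
    """Elitist archive update: keep only mutually nondominated points;
    when the archive overflows, drop the most crowded point, judged
    by its nearest-neighbour distance (a simple density estimate)."""
    if any(dominates(a, candidate) for a in archive):
        return archive                       # candidate is dominated
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > max_size:
        def nn_dist(p):
            return min(sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
                       for q in archive if q is not p)
        archive.remove(min(archive, key=nn_dist))
    return archive

archive = []
for pt in [(1, 9), (9, 1), (5, 5), (2, 8), (3, 7), (4, 6), (5.1, 5.1), (0, 10)]:
    archive = update_archive(archive, pt)
```

Feeding in the points above, the dominated candidate (5.1, 5.1) is rejected and the archive never grows past its size bound while staying mutually nondominated.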
Raffensperger, Jeff P.; Baker, Anna C.; Blomquist, Joel D.; Hopple, Jessica A.
2017-06-26
Quantitative estimates of base flow are necessary to address questions concerning the vulnerability and response of the Nation’s water supply to natural and human-induced change in environmental conditions. An objective of the U.S. Geological Survey National Water-Quality Assessment Project is to determine how hydrologic systems are affected by watershed characteristics, including land use, land cover, water use, climate, and natural characteristics (geology, soil type, and topography). An important component of any hydrologic system is base flow, generally described as the part of streamflow that is sustained between precipitation events and fed to stream channels by delayed (usually subsurface) pathways, and more specifically as the volumetric discharge of water, estimated at a measurement site or gage at the watershed scale, which represents groundwater that discharges directly or indirectly to stream reaches and is then routed to the measurement point. Hydrograph separation using a recursive digital filter was applied to 225 sites in the Chesapeake Bay watershed. The recursive digital filter was chosen for the following reasons: it is based in part on the assumption that groundwater acts as a linear reservoir, and so has a physical basis; and it has only two adjustable parameters (alpha, obtained directly from recession analysis, and beta, the maximum value of the base-flow index that can be modeled by the filter), which can be determined objectively and consistently with the assumption of groundwater reservoir linearity, or optimized by applying a chemical-mass-balance constraint. Base-flow estimates from the recursive digital filter were compared with those from five other hydrograph-separation methods with respect to two metrics: the long-term average fraction of streamflow that is base flow, or base-flow index, and the fraction of days where streamflow is entirely base flow. There was generally good correlation between the methods, with some biased
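A sketch of the two-parameter recursive digital filter described above, in the Eckhardt form implied by a linear groundwater reservoir; alpha and the maximum base-flow index (here bfi_max, the abstract's beta) are illustrative values, and the hydrograph is synthetic:

```python
def baseflow_filter(q, alpha=0.98, bfi_max=0.8):
    """Recursive digital filter (Eckhardt form): alpha is the recession
    constant from recession analysis, bfi_max the maximum base-flow
    index the filter can model; both follow from treating the
    groundwater reservoir as linear."""
    b = [bfi_max * q[0]]
    for t in range(1, len(q)):
        bt = ((1.0 - bfi_max) * alpha * b[-1]
              + (1.0 - alpha) * bfi_max * q[t]) / (1.0 - alpha * bfi_max)
        b.append(min(bt, q[t]))          # base flow cannot exceed streamflow
    return b

# Synthetic daily hydrograph: a storm peak over a slow recession
q = [10, 12, 60, 45, 30, 22, 18, 15, 13, 12, 11, 10]
b = baseflow_filter(q)
bfi = sum(b) / sum(q)                    # long-term base-flow index
```

The first comparison metric from the abstract, the long-term base-flow index, is simply the ratio of total filtered base flow to total streamflow as in the last line.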
Energy Technology Data Exchange (ETDEWEB)
Almeida Bezerra, Marcos, E-mail: mbezerra47@yahoo.com.br [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil); Teixeira Castro, Jacira [Universidade Federal do Reconcavo da Bahia, Centro de Ciencias Exatas e Tecnologicas, 44380-000, Cruz das Almas, Bahia (Brazil); Coelho Macedo, Reinaldo; Goncalves da Silva, Douglas [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil)
2010-06-18
A slurry suspension sampling technique has been developed for manganese and zinc determination in tea leaves by using flame atomic absorption spectrometry. The proportions of liquid-phase of the slurries composed by HCl, HNO₃ and Triton X-100 solutions have been optimized applying a constrained mixture design. The optimized conditions were 200 mg of sample ground in a tungsten carbide balls mill (particle size < 100 µm), dilution in a liquid-phase composed by 2.0 mol L⁻¹ nitric, 2.0 mol L⁻¹ hydrochloric acid and 2.5% Triton X-100 solutions (in the proportions of 50%, 12% and 38% respectively), sonication time of 10 min and final slurry volume of 50.0 mL. This method allowed the determination of manganese and zinc by FAAS, with detection limits of 0.46 and 0.66 µg g⁻¹, respectively. The precisions, expressed as relative standard deviation (RSD), are 6.9 and 5.5% (n = 10), for concentrations of manganese and zinc of 20 and 40 µg g⁻¹, respectively. The accuracy of the method was confirmed by analysis of the certified apple leaves (NIST 1515) and spinach leaves (NIST 1570a). The proposed method was applied for the determination of manganese and zinc in tea leaves used for the preparation of infusions. The obtained concentrations varied between 42 and 118 µg g⁻¹ and 18.6 and 90 µg g⁻¹, respectively, for manganese and zinc. The results were compared with those obtained by an acid digestion procedure and determination of the elements by FAAS. There was no significant difference between the results obtained by the two methods based on a paired t-test (at 95% confidence level).
Bonnet, V; Dumas, R; Cappozzo, A; Joukov, V; Daune, G; Kulić, D; Fraisse, P; Andary, S; Venture, G
2017-09-06
This paper presents a method for real-time estimation of the kinematics and kinetics of a human body performing a sagittal symmetric motor task, which would minimize the impact of the stereophotogrammetric soft tissue artefacts (STA). The method is based on a bi-dimensional mechanical model of the locomotor apparatus, the state variables of which (joint angles, velocities and accelerations, and the segment lengths and inertial parameters) are estimated by a constrained extended Kalman filter (CEKF) that fuses input information made of both stereophotogrammetric and dynamometric measurement data. Filter gains are made to saturate in order to obtain plausible state variables, and the measurement covariance matrix of the filter accounts for the expected STA maximal amplitudes. We hypothesised that the ensemble of constraints and redundant input information would allow the method to attenuate the STA propagation to the end results. The method was evaluated in ten human subjects performing a squat exercise. The CEKF-estimated and measured skin marker trajectories exhibited an RMS difference lower than 4 mm, thus in the range of STAs. The RMS differences between the measured ground reaction force and moment and those estimated using the proposed method (9 N and 10 Nm) were much lower than those obtained using a classical inverse dynamics approach (22 N and 30 Nm). From the latter results it may be inferred that the presented method allows for a significant improvement of the accuracy with which kinematic variables and relevant time derivatives, model parameters and, therefore, intersegmental moments are estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2014-01-01
descent method at first finite number of steps and then by conjugate gradient method subsequently. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons by using some box-constrained problems from CUTEr library are reported. Numerical comparisons illustrate that the proposed method is promising and competitive with the well-known method—L-BFGS-B.
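For a sense of the box-constrained setting that this method and L-BFGS-B address, the simplest baseline is projected gradient descent, sketched here on a hypothetical quadratic:

```python
def project(x, lo, hi):
    # Componentwise projection onto the box [lo, hi]
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x, lo, hi, step=0.1, iters=500):
    """Steepest descent followed by projection onto the box; the
    active bounds this projection identifies are what active-set and
    L-BFGS-B-style methods go on to exploit."""
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Hypothetical objective: (x0 - 3)^2 + (x1 + 2)^2 over [0, 1] x [0, 1];
# the unconstrained minimizer (3, -2) lies outside the box
grad = lambda x: [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 2.0)]
x = projected_gradient(grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```

Both bounds become active here: the solution sits at the box corner (1, 0) nearest the unconstrained minimizer.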
Mystakidis, Stefanos; Davin, Edouard L; Gruber, Nicolas; Seneviratne, Sonia I
2016-06-01
The terrestrial biosphere is currently acting as a sink for about a third of the total anthropogenic CO2 emissions. However, the future fate of this sink in the coming decades is very uncertain, as current earth system models (ESMs) simulate diverging responses of the terrestrial carbon cycle to upcoming climate change. Here, we use observation-based constraints of water and carbon fluxes to reduce uncertainties in the projected terrestrial carbon cycle response derived from simulations of ESMs conducted as part of the 5th phase of the Coupled Model Intercomparison Project (CMIP5). We find in the ESMs a clear linear relationship between present-day evapotranspiration (ET) and gross primary productivity (GPP), as well as between these present-day fluxes and projected changes in GPP, thus providing an emergent constraint on projected GPP. Constraining the ESMs based on their ability to simulate present-day ET and GPP leads to a substantial decrease in the projected GPP and to a ca. 50% reduction in the associated model spread in GPP by the end of the century. Given the strong correlation between projected changes in GPP and in NBP in the ESMs, applying the constraints on net biome productivity (NBP) reduces the model spread in the projected land sink by more than 30% by 2100. Moreover, the projected decline in the land sink is at least doubled in the constrained ensembles and the probability that the terrestrial biosphere is turned into a net carbon source by the end of the century is strongly increased. This indicates that the decline in the future land carbon uptake might be stronger than previously thought, which would have important implications for the rate of increase in the atmospheric CO2 concentration and for future climate change. © 2016 John Wiley & Sons Ltd.
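The emergent-constraint procedure reduces to a regression across the model ensemble followed by evaluation at the observed value. The sketch below uses invented ensemble numbers purely to show the mechanics of how an observational constraint narrows the projected spread:

```python
def linear_fit(xs, ys):
    # Ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
             / sum((xi - mx) ** 2 for xi in xs))
    return slope, my - slope * mx

# Invented ensemble: present-day GPP (x) against projected GPP change (y),
# standing in for the CMIP5 emergent relationship described above
present_gpp = [110.0, 120.0, 130.0, 140.0, 150.0, 160.0]
delta_gpp = [5.0, 7.5, 9.0, 12.0, 13.5, 16.0]
slope, intercept = linear_fit(present_gpp, delta_gpp)

# An observation-based estimate of present-day GPP (invented value and
# uncertainty) picks out the constrained projection and its spread
obs_gpp, obs_err = 125.0, 5.0
constrained = slope * obs_gpp + intercept
raw_spread = max(delta_gpp) - min(delta_gpp)
constrained_spread = slope * (2.0 * obs_err)
```

The constrained spread is set by the observational uncertainty mapped through the regression slope, which is why a tight present-day constraint can roughly halve the ensemble spread in the projection.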
Physically constrained voxel-based penalty adaptation for ultra-fast IMRT planning
Wahl, Niklas; Bangert, Mark; Kamerling, Cornelis P.; Ziegenhein, Peter; Bol, GH|info:eu-repo/dai/nl/343084309; Raaymakers, Bas W.|info:eu-repo/dai/nl/229639410; Oelfke, Uwe
2016-01-01
Conventional treatment planning in intensity-modulated radiation therapy (IMRT) is a trial-and-error process that usually involves tedious tweaking of optimization parameters. Here, we present an algorithm that automates part of this process, in particular the adaptation of voxel-based penalties
Classifiers based on optimal decision rules
Amin, Talha
2013-11-25
Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifiers: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
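A rule in this setting is a conjunction of attribute = value conditions; its two optimization criteria are length (number of conditions) and coverage (rows it matches while staying exact). The toy sketch below grows one rule greedily, unlike the paper's exact dynamic-programming optimization, just to make the two measures concrete:

```python
def build_rule(rows, labels, target_idx):
    """Greedy sketch (the paper optimizes exactly via dynamic
    programming): grow attribute = value conditions from the target
    row until every covered row shares its label, then report the
    rule together with its length and coverage."""
    row, label = rows[target_idx], labels[target_idx]
    rule, covered = [], list(range(len(rows)))
    for attr in range(len(row)):
        if all(labels[i] == label for i in covered):
            break                      # rule is already exact
        rule.append((attr, row[attr]))
        covered = [i for i in covered if rows[i][attr] == row[attr]]
    return rule, len(rule), len(covered)

# Tiny decision table: two binary attributes, one label column
rows = [(0, 1), (0, 0), (1, 1), (1, 0)]
labels = ["a", "a", "b", "a"]
rule, length, coverage = build_rule(rows, labels, 0)
```

For row 0 a single condition (attribute 0 equals 0) already separates the "a" rows it covers from the lone "b" row, giving length 1 and coverage 2, which is the kind of trade-off the length+coverage orderings rank.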
Optimization-Based Models of Muscle Coordination
Prilutsky, Boris I.; Zatsiorsky, Vladimir M.
2002-01-01
Optimization-based models may provide reasonably accurate estimates of activation and force patterns of individual muscles in selected well-learned tasks with submaximal efforts. Such optimization criteria as minimum energy expenditure, minimum muscle fatigue, and minimum sense of effort seem most promising.
GA BASED GLOBAL OPTIMAL DESIGN PARAMETERS FOR ...
African Journals Online (AJOL)
This article uses Genetic Algorithm (GA) for the global design optimization of consecutive reactions taking place in continuous stirred tank reactors (CSTRs) connected in series. GA based optimal design determines the optimum number of CSTRs in series to achieve the maximum conversion, fractional yield and selectivity ...
Reliability Based Optimization of Structural Systems
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
1987-01-01
The optimization problem of designing structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem of minimizing a given cost function such that the reliability of the single elements satisfies given requirements, or such that the system reliability satisfies a given requirement. For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally ...
Biyikli, Emre; To, Albert C
2015-01-01
A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org.
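The core non-sensitivity step of PTO, distributing the material budget among elements in proportion to a power of their stress while respecting density bounds, can be sketched in isolation. In the actual method the stresses come from a finite element solve in each iteration; here they are fixed, invented values:

```python
def proportional_assign(weights, target, lo=0.05, hi=1.0):
    """Distribute a total material budget 'target' over elements in
    proportion to 'weights' (e.g. stress**q in PTO), respecting the
    per-element density bounds [lo, hi] by redistributing any
    clipped excess among elements that still have room."""
    n = len(weights)
    x = [lo] * n
    remaining = target - lo * n
    active = set(range(n))
    while remaining > 1e-9 and active:
        wsum = sum(weights[i] for i in active)
        spill = 0.0
        for i in list(active):
            share = remaining * weights[i] / wsum
            take = min(share, hi - x[i])
            x[i] += take
            spill += share - take          # excess to redistribute
            if x[i] >= hi - 1e-12:
                active.discard(i)          # element is full
        remaining = spill
    return x

# Invented per-element stress proxies (no FEM here); the exponent 2
# emphasizes highly stressed elements, as in stress-constrained PTO
stress = [10.0, 40.0, 5.0, 30.0, 15.0]
x = proportional_assign([s ** 2 for s in stress], target=2.0)
```

The resulting densities are monotone in stress, the most stressed element saturates at full density, and the total material exactly meets the budget, which is what lets PTO skip sensitivity computation entirely.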
Directory of Open Access Journals (Sweden)
Qunyi Xie
2016-01-01
Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repositories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper has developed a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
IMU-based ambulatory walking speed estimation in constrained treadmill and overground walking.
Yang, Shuozhi; Li, Qingguo
2012-01-01
This study evaluated the performance of a walking speed estimation system based on using an inertial measurement unit (IMU), a combination of accelerometers and gyroscopes. The walking speed estimation algorithm segments the walking sequence into individual stride cycles (two steps) based on the inverted pendulum-like behaviour of the stance leg during walking and it integrates the angular velocity and linear accelerations of the shank to determine the displacement of each stride. The evaluation was performed in both treadmill and overground walking experiments with various constraints on walking speed, step length and step frequency to provide a relatively comprehensive assessment of the system. Promising results were obtained in providing accurate and consistent walking speed/step length estimation in different walking conditions. An overall percentage root mean squared error (%RMSE) of 4.2 and 4.0% was achieved in treadmill and overground walking experiments, respectively. With an increasing interest in understanding human walking biomechanics, the IMU-based ambulatory system could provide a useful walking speed/step length measurement/control tool for constrained walking studies.
Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian
2013-07-09
In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.
Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty
Directory of Open Access Journals (Sweden)
Bo Chen
2015-05-01
Full Text Available Segmentation, which is usually the first step in object-based image analysis (OBIA), greatly influences the quality of final OBIA results. In many existing multi-scale segmentation algorithms, a common problem is that under-segmentation and over-segmentation always coexist at any scale. To address this issue, we propose a new method that integrates the newly developed constrained spectral variance difference (CSVD) and the edge penalty (EP). First, initial segments are produced by a fast scan. Second, the generated segments are merged via a global mutual best-fitting strategy using the CSVD and EP as merging criteria. Finally, very small objects are merged with their nearest neighbors to eliminate the remaining noise. A series of experiments based on three sets of remote sensing images, each with different spatial resolutions, were conducted to evaluate the effectiveness of the proposed method. Both visual and quantitative assessments were performed, and the results show that large objects were better preserved as integral entities while small objects were also still effectively delineated. The results were also found to be superior to those from eCognition’s multi-scale segmentation.
A Benders decomposition-based Matheuristic for the Cardinality Constrained Shift Design Problem
DEFF Research Database (Denmark)
Lusby, Richard Martin; Larsen, Jesper; Range, Troels Martin
The Shift Design Problem is an important optimization problem which arises when scheduling personnel in industries that require continuous operation. Based on the forecast staffing levels required for a set of time periods, a set of shift types that best covers the demand must be determined ... Integer Programming solver on instances with 1241 different shift types and remains competitive for larger cases with 2145 shift types. On all classes of problems the heuristic is able to quickly find good solutions.
Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems
National Research Council Canada - National Science Library
Hong, Man; Cheng, Shao
2013-01-01
To precisely track the reactor temperature in the entire working condition, the constrained Hammerstein-Wiener model describing nonlinear chemical processes such as in the continuous stirred tank reactor (CSTR) is proposed...
Optimizing C-17 Pacific Basing
2014-05-01
Americana, as they relate to this study, date back to the late 1800s. In 1867, the US agreed to purchase Alaska from Russia. The Alaska Purchase ... ended Russia’s presence in North America and ensured US access to the northern rim of the Pacific. The US constituted an Alaskan civil government in ... where they are based. Likewise, if involved in a JAATT at an Alaskan training facility, the C-17s would likely travel to PAED no matter where they
Directory of Open Access Journals (Sweden)
Baofeng Cai
2017-08-01
Full Text Available The Interconnected River System Network Project (IRSNP) is a significant water supply engineering project, which is capable of effectively utilizing flood resources to generate ecological value by connecting 198 lakes and ponds in western Jilin, northeast China. In this article, an optimization approach has been proposed to maximize the incremental value of IRSNP ecosystem services. A double-sided chance-constrained integer linear programming (DCCILP) method has been proposed to support the optimization, which can deal with uncertainties presented as integers or random parameters that appear on both sides of the decision variable at the same time. The optimal scheme indicates that after rational optimization, the total incremental value of ecosystem services from the interconnected river system network project increased by 22.25%, providing an increase in benefits of 3.26 × 10⁹ ¥ compared to the original scheme. Most of the functional area is swamp wetland, which provides the greatest ecological benefits. Adjustment services increased markedly, implying that the optimization scheme prioritizes ecological benefits rather than supply and production services.
Optimal design and selection of magneto-rheological brake types based on braking torque and mass
Nguyen, Q. H.; Lang, V. T.; Choi, S. B.
2015-06-01
In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass.
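The constrained formulation above, minimize mass subject to a braking-torque lower bound, can be illustrated with a deliberately simplified disc-type model. The yield stress and density below are assumed round numbers, not values from the paper, which instead couples a Herschel-Bulkley fluid model with finite element analysis:

```python
import math

# Assumed round-number material constants (illustrative only)
TAU_Y = 40e3     # yield stress of the energized MR fluid [Pa]
RHO = 7800.0     # rotor material density [kg/m^3]

def torque(r_o, r_i):
    # Yield-stress torque of one annular MR gap acting on both disc faces
    return 2.0 * (2.0 * math.pi / 3.0) * TAU_Y * (r_o ** 3 - r_i ** 3)

def mass(r_o, width):
    # Toy rotor mass: a solid disc of the outer radius
    return RHO * math.pi * r_o ** 2 * width

def optimal_radius(t_required, width=0.012):
    """Smallest-mass feasible design over a scan of outer radii,
    with the inner radius fixed at half the outer radius."""
    feasible = [(mass(r, width), r)
                for r in (0.03 + 0.001 * k for k in range(60))
                if torque(r, 0.5 * r) >= t_required]
    return min(feasible) if feasible else None

best_mass, best_r = optimal_radius(10.0)   # require 10 N m of braking torque
```

Because both mass and torque grow with the outer radius, the optimum sits at the smallest radius that still meets the torque constraint, mirroring the active-constraint behaviour reported for the full MRB designs.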
Metamodel-Based Optimization of the Labyrinth Seal
Directory of Open Access Journals (Sweden)
Rulik Sebastian
2017-03-01
Full Text Available The presented paper concerns CFD optimization of the straight-through labyrinth seal with a smooth land. The aim of the process was to reduce the leakage flow through a labyrinth seal with two fins. Due to the complexity of the problem and for the sake of the computation time, a decision was made to modify the standard evolutionary optimization algorithm by adding an approach based on a metamodel. Five basic geometrical parameters of the labyrinth seal were taken into account: the angles of the seal’s two fins, and the fin width, height and pitch. Other parameters were constrained, including the clearance over the fins. The CFD calculations were carried out using the ANSYS-CFX commercial code. The in-house optimization algorithm was prepared in the Matlab environment. The presented metamodel was built using a Multi-Layer Perceptron Neural Network which was trained using the Levenberg-Marquardt algorithm. The Neural Network training and validation were carried out based on the data from the CFD analysis performed for different geometrical configurations of the labyrinth seal. The initial response surface was built based on the design of the experiment (DOE). The novelty of the proposed methodology is the steady improvement in the response surface goodness of fit. The accuracy of the response surface is increased by CFD calculations of the labyrinth seal additional geometrical configurations. These configurations are created based on the evolutionary algorithm operators such as selection, crossover and mutation. The created metamodel makes it possible to run a fast optimization process using a previously prepared response surface. The metamodel solution is validated against CFD calculations. It then complements the next generation of the evolutionary algorithm.
Nonlinear model predictive control based on collective neurodynamic optimization.
Yan, Zheng; Wang, Jun
2015-04-01
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
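The brainstorming scheme, several local optimizers exchanging best-known solutions between restarts, can be caricatured without neural networks. Each "unit" below is a plain hill-descent standing in for one RNN's convergence to a candidate solution, and the restart rule plays the role of the particle-swarm information exchange:

```python
import math
import random

def f(x):
    # Nonconvex 1-D test cost (Rastrigin-like): many local minima
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def local_search(x, step=0.01, iters=300):
    # Stand-in for one unit's constrained local convergence
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

def collective_search(n_units=8, rounds=12, rng=random.Random(1)):
    xs = [rng.uniform(-5.0, 5.0) for _ in range(n_units)]
    best = min(xs, key=f)
    for _ in range(rounds):
        xs = [local_search(x) for x in xs]       # each unit settles locally
        best = min(xs + [best], key=f)
        # Exchange information: restart every unit near the best-known
        # solution (particle-swarm-style attraction plus exploration noise)
        xs = [best + rng.gauss(0.0, 1.0) for _ in xs]
    return best

best = collective_search()
```

The integration the abstract emphasizes is visible here: each unit only guarantees a local minimum, but the restart-around-the-incumbent loop steers the collective toward ever better basins.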
Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli
2017-09-12
Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies has become increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and strict requirements on sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for the recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
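The streaming/non-streaming distinction based on the RSSI distribution can be sketched as follows (illustrative Python; the RSSI models, thresholds and class labels are hypothetical stand-ins, not the paper's actual classifier):

```python
import random

random.seed(1)

# Hypothetical RSSI models: a bursty (non-streaming) technology alternates
# between packet power and the noise floor, giving a bimodal distribution;
# a streaming technology transmits continuously, giving a unimodal one.
def bursty_rssi(n, duty=0.3, floor=-95.0, tx=-60.0, sigma=2.0):
    return [random.gauss(tx if random.random() < duty else floor, sigma)
            for _ in range(n)]

def streaming_rssi(n, tx=-70.0, sigma=2.0):
    return [random.gauss(tx, sigma) for _ in range(n)]

def on_air_fraction(rssi, split=-82.0):
    """Feature: fraction of samples above a threshold between the modes."""
    return sum(1 for r in rssi if r > split) / len(rssi)

def classify(rssi):
    frac = on_air_fraction(rssi)
    return "non-streaming" if 0.02 < frac < 0.98 else "streaming"

wifi_like = bursty_rssi(2000)
lte_like = streaming_rssi(2000)
```

Note the feature needs only RSSI magnitudes, not waveform samples, which is why it remains usable at sub-Nyquist sampling rates.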
DSP code optimization based on cache
Xu, Chengfa; Li, Chengcheng; Tang, Bin
2013-03-01
A DSP program often runs less efficiently on the target board than in software simulation during development, which mainly results from the user's improper use and incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor can achieve its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
National Research Council Canada - National Science Library
Moreno-Salinas, David; Pascoal, Antonio; Aranda, Joaquin
2013-01-01
In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning...
Liu, S.; Farid, S. S.; Papageorgiou, L. G.
2016-01-01
This work addresses the integrated optimization of upstream and downstream processing strategies of a monoclonal antibody (mAb) under uncertainty. In the upstream processing (USP), the bioreactor sizing strategies are optimized, while in the downstream processing (DSP), the chromatography sequencing and column sizing strategies, including the resin at each chromatography step, the number of columns, the column diameter and bed height, and the number of cycles per batch, are determined. Meanwh...
Directory of Open Access Journals (Sweden)
YAN Li
2016-04-01
Full Text Available This paper proposes a rigorous registration method for multi-view point clouds constrained by closed-loop conditions, addressing the shortcomings of existing algorithms. In our approach, the point-to-tangent-plane iterative closest point algorithm is first used to calculate the coordinate transformation parameters of all adjacent point clouds. Then each single-site point cloud is regarded as a registration unit and the transformation parameters are treated as random observations to construct condition equations; the transformation parameters are then corrected by conditional adjustment to achieve a global optimum. Two practical experiments on point clouds acquired by a terrestrial laser scanner demonstrate the feasibility and validity of our method. Experimental results show that the registration accuracy and reliability of point clouds with sampling intervals at the millimeter or centimeter level can be improved by increasing the scanning overlap.
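The closed-loop condition at the heart of the method can be sketched in its simplest form (illustrative Python; a single rotation angle per scan pair and equal observation weights are simplifying assumptions, and the measured angles are hypothetical):

```python
# Pairwise registration results around a closed loop of three scanner
# stations: rotation about the vertical axis, in degrees.  In a closed
# loop the angles must sum to zero; the measured misclosure is
# distributed equally (equal-weight conditional adjustment).
theta_12, theta_23, theta_31 = 30.02, 45.05, -75.01   # hypothetical observations

obs = [theta_12, theta_23, theta_31]
misclosure = sum(obs)                       # 0.06 deg of loop misclosure
adjusted = [t - misclosure / len(obs) for t in obs]
```

The full method applies the same idea to all six transformation parameters per pair, with weights derived from the pairwise ICP results.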
A Novel Approach to Constrain Near-Surface Seismic Wave Speed Based on Polarization Analysis
Park, S.; Ishii, M.
2016-12-01
Understanding the seismic responses of cities around the world is essential for the risk assessment of earthquake hazards. One of the important parameters is the elastic structure of the sites, in particular, the near-surface seismic wave speed, which influences the level of ground shaking. Many methods have been developed to constrain the elastic structure of populated sites or urban basins, and here, we introduce a new technique based on analyzing the polarization content, or the three-dimensional particle motion, of seismic phases arriving at the sites. Polarization analysis of three-component seismic data was widely used up to about two decades ago to detect signals and identify different types of seismic arrivals. Today, we have a good understanding of the expected polarization direction and ray parameter for seismic wave arrivals, which are calculated based on a reference seismic model. The polarization of a given phase is also strongly sensitive to the elastic wave speed immediately beneath the station. This allows us to compare the observed and predicted polarization directions of incoming body waves and infer the near-surface wave speed. This approach is applied to the High-Sensitivity Seismograph Network in Japan, where we benchmark the results against the well-log data that are available at most stations. There is a good agreement between our estimates of seismic wave speeds and those from well logs, confirming the efficacy of the new method. In most urban environments, where well logging is not a practical option for measuring seismic wave speeds, this method can provide a reliable, non-invasive, and computationally inexpensive estimate of near-surface elastic properties.
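The polarization measurement itself can be sketched as follows (illustrative Python; a two-component covariance analysis of synthetic rectilinear P-wave motion is a simplification of the three-component case, and the noise level and incidence angle are hypothetical):

```python
import math
import random

random.seed(2)

# Synthetic rectilinear P-wave particle motion arriving at 30 degrees from
# vertical, recorded on vertical (z) and radial (r) components with noise.
inc = math.radians(30.0)
wave = [math.sin(2.0 * math.pi * i / 50.0) for i in range(200)]
z = [a * math.cos(inc) + random.gauss(0.0, 0.01) for a in wave]
r = [a * math.sin(inc) + random.gauss(0.0, 0.01) for a in wave]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# The principal axis of the 2x2 covariance matrix of the particle motion
# gives the apparent incidence angle measured from vertical.
czz, crr, czr = cov(z, z), cov(r, r), cov(z, r)
apparent = 0.5 * math.atan2(2.0 * czr, czz - crr)
```

Comparing this apparent angle with the one predicted from a reference model and the ray parameter is what constrains the wave speed immediately beneath the station.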
A Benders Decomposition-Based Matheuristic for the Cardinality Constrained Shift Design Problem
DEFF Research Database (Denmark)
Lusby, Richard Martin; Range, Troels Martin; Larsen, Jesper
2016-01-01
The Shift Design Problem is an important optimization problem which arises when scheduling personnel in industries that require continuous operation. Based on the forecast required staffing levels for a set of time periods, a set of shift types that best covers the demand must be determined...... integer programming solver on instances with 1241 different shift types and remains competitive for larger cases with 2145 shift types. On all classes of problems the heuristic is able to quickly find good solutions. © 2016 Elsevier B.V. All rights reserved...
Coverage-based constraints for IMRT optimization.
Mescher, H; Ulrich, S; Bangert, M
2017-09-05
Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities [Formula: see text] of covering a specific target volume fraction [Formula: see text] with a certain dose [Formula: see text]. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target
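The coverage probability that such constraints act on can be estimated over explicit error scenarios, as sketched below (illustrative Python; the 1-D dose model, target geometry and error statistics are hypothetical and far simpler than a clinical dose calculation):

```python
import random

random.seed(3)

# Illustrative 1-D model: target voxels span [10, 90); the planned field
# covers [5, 95] but shifts rigidly under a Gaussian setup error.
target = list(range(10, 90))

def covered_fraction(shift, field_lo=5.0, field_hi=95.0):
    hit = sum(1 for i in target if field_lo + shift <= i <= field_hi + shift)
    return hit / len(target)

# Q(v): probability that at least a fraction v of the target receives the
# prescription dose, estimated over explicit error scenarios.
v = 0.95
scenarios = [random.gauss(0.0, 3.0) for _ in range(500)]
q = sum(1 for s in scenarios if covered_fraction(s) >= v) / len(scenarios)
```

A coverage-based constraint then requires q to exceed a prescribed level, rather than trading coverage off against other objectives via penalty weights.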
Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua
2017-10-01
An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important applications in the field of aerospace (space docking tests, etc). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tensions and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tensions and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS.
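The core observation, that the measurement model changes with the loading direction of each branch, can be sketched as follows (illustrative Python; the one-branch axial model and all stiffness values are hypothetical):

```python
# Illustrative effect of unequal tension/compression stiffness in one
# measurement branch of an over-constrained structure: the same
# deformation magnitude produces different force magnitudes depending on
# the loading direction, so a single linear model is not sufficient.
K_TENSION = 2.0e6        # N/m, hypothetical branch stiffness in tension
K_COMPRESSION = 1.5e6    # N/m, hypothetical branch stiffness in compression

def branch_force(delta):
    """Force in one branch for axial deformation delta (m)."""
    k = K_TENSION if delta >= 0.0 else K_COMPRESSION
    return k * delta

f_pos = branch_force(1.0e-4)     # branch in tension
f_neg = branch_force(-1.0e-4)    # branch in compression
```

The paper's measurement model generalizes this sign-dependent stiffness to all branches of the parallel structure, which is what reduces the class I and class II errors reported.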
Power-constrained supercomputing
Bailey, Peter E.
. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels
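The node-level idea of fitting a power/performance model offline and then selecting a configuration under a power cap can be sketched as follows (illustrative Python; a one-feature linear fit and all frequency/power numbers are hypothetical, in place of the dissertation's kernel clustering and multivariate regression):

```python
# Fit a linear power model P = a*f + b from measured samples, then pick
# the highest frequency whose predicted power fits under the cap.
freqs = [1.2, 1.6, 2.0, 2.4]           # GHz, hypothetical configurations
power = [41.0, 53.0, 65.0, 77.0]       # W, hypothetical measurements

n = len(freqs)
mf, mp = sum(freqs) / n, sum(power) / n
a = (sum((f - mf) * (p - mp) for f, p in zip(freqs, power))
     / sum((f - mf) ** 2 for f in freqs))    # least-squares slope
b = mp - a * mf                              # intercept

cap = 60.0                                   # node power cap, W
best = max(f for f in freqs if a * f + b <= cap)
```

A model-based selection like this needs no exhaustive on-line search, which is the advantage claimed over frequency-limiting alone.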
Directory of Open Access Journals (Sweden)
Jing Liu
2017-11-01
Full Text Available In this study, an interval fuzzy-stochastic chance-constrained programming based energy-water nexus (IFSCP-WEN model is developed for planning electric power system (EPS. The IFSCP-WEN model can tackle uncertainties expressed as possibility and probability distributions, as well as interval values. Different credibility (i.e., γ levels and probability (i.e., qi levels are set to reflect relationships among water supply, electricity generation, system cost, and constraint-violation risk. Results reveal that different γ and qi levels can lead to changes in system cost, imported electricity, electricity generation, and water supply. Results also disclose that the studied EPS would tend to transition from a coal-dominated to a clean-energy-dominated structure. Gas-fired units would be the main electric utility supplying electricity at the end of the planning horizon, occupying [28.47, 30.34]% (where 28.47% and 30.34% represent the lower bound and the upper bound of the interval value, respectively of the total electricity generation. Correspondingly, water allocated to gas-fired units would reach the highest, occupying [33.92, 34.72]% of the total water supply. Surface water would be the main water source, accounting for more than [40.96, 43.44]% of the total water supply. The ratio of recycled water to total water supply would increase by about [11.37, 14.85]%. Results of the IFSCP-WEN model present its potential for sustainable EPS planning by co-optimizing energy and water resources.
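The interval-valued results of the form [lower, upper] combine by simple interval arithmetic, as sketched below (illustrative Python; the two shares added are taken from the abstract, but their combination is purely illustrative, not a quantity the model reports):

```python
# Minimal interval arithmetic for [lower, upper] values of the kind
# produced by the IFSCP-WEN model.
def iadd(a, b):
    """Sum of two intervals: bounds add component-wise."""
    return (a[0] + b[0], a[1] + b[1])

gas_water = (33.92, 34.72)   # % of total water supply allocated to gas-fired units
surface = (40.96, 43.44)     # % of total water supply from surface water

combined = iadd(gas_water, surface)
```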
Directory of Open Access Journals (Sweden)
Susmita Sarkar
Full Text Available The re-emergence of tuberculosis (TB as a global public health threat highlights the necessity of rapid, simple and inexpensive point-of-care detection of the disease. Early diagnosis of TB is vital not only for preventing the spread of the disease but also for timely initiation of treatment. The latter in turn will reduce the possible emergence of multi-drug resistant strains of Mycobacterium tuberculosis. Lipoarabinomannan (LAM is an important non-protein antigen of the bacterial cell wall, which is found to be present in different body fluids of infected patients including blood, urine and sputum. We have developed a bispecific monoclonal antibody with predetermined specificities towards the LAM antigen and a reporter molecule horseradish peroxidase (HRPO. The developed antibody was subsequently used to design a simple low cost immunoswab based assay to detect LAM antigen. The limit of detection for spiked synthetic LAM was found to be 5.0 ng/ml (bovine urine, 0.5 ng/ml (rabbit serum and 0.005 ng/ml (saline and that for bacterial LAM from M. tuberculosis H37Rv was found to be 0.5 ng/ml (rabbit serum. The assay was evaluated with 21 stored clinical serum samples (14 were positive and 7 were negative in terms of anti-LAM titer. In addition, all 14 positive samples were culture positive. The assay showed 100% specificity and 64% sensitivity (95% confidence interval. In addition to good specificity, the end point could be read visually within two hours of sample collection. The reported assay might be used as a rapid tool for detecting TB in resource constrained laboratory settings.
Sarkar, Susmita; Tang, Xinli L; Das, Dipankar; Spencer, John S; Lowary, Todd L; Suresh, Mavanur R
2012-01-01
The re-emergence of tuberculosis (TB) as a global public health threat highlights the necessity of rapid, simple and inexpensive point-of-care detection of the disease. Early diagnosis of TB is vital not only for preventing the spread of the disease but also for timely initiation of treatment. The latter in turn will reduce the possible emergence of multi-drug resistant strains of Mycobacterium tuberculosis. Lipoarabinomannan (LAM) is an important non-protein antigen of the bacterial cell wall, which is found to be present in different body fluids of infected patients including blood, urine and sputum. We have developed a bispecific monoclonal antibody with predetermined specificities towards the LAM antigen and a reporter molecule horseradish peroxidase (HRPO). The developed antibody was subsequently used to design a simple low cost immunoswab based assay to detect LAM antigen. The limit of detection for spiked synthetic LAM was found to be 5.0 ng/ml (bovine urine), 0.5 ng/ml (rabbit serum) and 0.005 ng/ml (saline) and that for bacterial LAM from M. tuberculosis H37Rv was found to be 0.5 ng/ml (rabbit serum). The assay was evaluated with 21 stored clinical serum samples (14 were positive and 7 were negative in terms of anti-LAM titer). In addition, all 14 positive samples were culture positive. The assay showed 100% specificity and 64% sensitivity (95% confidence interval). In addition to good specificity, the end point could be read visually within two hours of sample collection. The reported assay might be used as a rapid tool for detecting TB in resource constrained laboratory settings.
Chang, Ying-Chun; Yeh, Long-Jyi; Chiu, Min-Chie; Lai, Gaung-Jer
2005-09-01
The problem of space constraint in sound absorber design occasionally occurs in practical design work because of maintenance and access requirements. To maximize sound absorption performance under a space constraint, shape optimization of the absorber is therefore required. In this paper, numerical optimization studies of a single-layer sound absorber are presented. Two categories of numerical approaches, the genetic algorithm (GA) method and traditional gradient methods, are applied to the absorber design work. The acoustic impedance of the sound absorber used in evaluating the sound absorption coefficient is combined with these numerical techniques. Here, the GA technique and the traditional gradient methods are programmed in MATLAB and FORTRAN, respectively. A numerical case of a single-layer sound absorber dealing with pure tone noise is introduced. Before optimization, one example is tested and compared with experimental data to check the accuracy of the mathematical model; the results are in good agreement. Consequently, the genetic algorithm is shown to be capable of solving the shape optimization of a single-layer sound absorber under a boundary constraint.
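A GA applied to a space-constrained absorber parameter can be sketched as follows (illustrative Python; the one-parameter absorption model, the 80 mm space limit and all GA settings are hypothetical stand-ins for the paper's impedance-based model):

```python
import random

random.seed(4)

# Toy absorption model: the absorption coefficient peaks at a liner
# thickness of 50 mm; the space constraint caps thickness at 80 mm.
T_MAX = 0.08                                  # m, space constraint

def alpha(t):
    return max(0.0, 1.0 - ((t - 0.05) / 0.05) ** 2)

def clip(t):                                  # enforce the boundary constraint
    return min(max(t, 1e-4), T_MAX)

pop = [random.uniform(1e-4, T_MAX) for _ in range(20)]
for _ in range(40):
    # tournament selection
    parents = [max(random.sample(pop, 3), key=alpha) for _ in range(20)]
    # blend crossover plus Gaussian mutation, respecting the constraint
    pop = []
    for i in range(0, 20, 2):
        a, b = parents[i], parents[i + 1]
        w = random.random()
        pop.append(clip(w * a + (1 - w) * b + random.gauss(0.0, 0.002)))
        pop.append(clip((1 - w) * a + w * b + random.gauss(0.0, 0.002)))
best = max(pop, key=alpha)
```

Unlike a gradient method, the GA needs no derivative of the absorption coefficient with respect to the shape parameter, only function evaluations.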
Improved Biogeography-Based Optimization Based on Affinity Propagation
Zhihao Wang; Peiyu Liu; Min Ren; Yuzhen Yang; Xiaoyan Tian
2016-01-01
To improve the search ability of biogeography-based optimization (BBO), this work proposes an improved biogeography-based optimization based on Affinity Propagation (MBBO). The Memetic framework is introduced into the BBO algorithm, with the simulated annealing algorithm as the local search strategy. MBBO enhances exploration with the Affinity Propagation strategy to improve the transfer operation of the BBO algorithm. In this work, the MBBO algorithm was applied to IEEE Congress on Evolutiona...
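The basic BBO migration operator that MBBO builds on can be sketched as follows (illustrative Python on a toy sphere function; the affinity-propagation grouping and simulated-annealing local search of MBBO are omitted, and all rates are hypothetical):

```python
import random

random.seed(5)

def hsi(x):                       # lower cost = better habitat suitability
    return sum(v * v for v in x)

n, dim = 20, 3
pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
for _ in range(100):
    pop.sort(key=hsi)                          # rank 0 = best habitat
    lam = [i / (n - 1) for i in range(n)]      # immigration: high for poor habitats
    mu = [1.0 - l for l in lam]                # emigration: high for good habitats
    new = [list(h) for h in pop]
    for i in range(n):
        for d in range(dim):
            if random.random() < lam[i]:       # immigrate a feature from
                j = random.choices(range(n), weights=mu)[0]   # a good habitat
                new[i][d] = pop[j][d]
            if random.random() < 0.02:         # random mutation
                new[i][d] = random.uniform(-5.0, 5.0)
    new[0] = pop[0]                            # elitism
    pop = new
best = min(pop, key=hsi)
```

Migration only shares existing solution features between habitats, which is why basic BBO benefits from the added local search the paper proposes.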
Reliability-Based Optimization of Wind Turbines
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Tarp-Johansen, N.J.
2004-01-01
Reliability-based optimization of the main tower and monopile foundation of an offshore wind turbine is considered. Different formulations are considered of the objective function including benefits and building and failure costs of the wind turbine. Also different reconstruction policies in case...
Reliability Based Optimization of Fire Protection
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
It is well known that fire is one of the major risks of serious damage or total loss of several types of structures such as nuclear installations, buildings, offshore platforms/topsides etc. This paper presents a methodology and software for reliability based optimization of the layout of passive...
DEFF Research Database (Denmark)
Jing, Lishuai; Pedersen, Troels; Fleury, Bernard Henri
2013-01-01
We address the problem of searching for the optimal pilot signal, i.e. pattern and signature, of an orthogonal frequency-division multiplexing (OFDM) system when the purpose is to estimate the delay and Doppler shift under the assumption of a single-path propagation channel. This problem...... demonstrate that data interference causes a performance loss if a standard non-coherent correlator is used. The results also indicate that the pilot pattern impacts the estimator's performance more than the pilot signature....
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
Energy Technology Data Exchange (ETDEWEB)
Delbos, F.
2004-11-01
Reflection tomography allows the determination of a subsurface velocity model from the travel times of seismic waves. The introduction of a priori information into this inverse problem can lead to the resolution of a constrained non-linear least-squares problem. The goal of the thesis is to improve the resolution techniques for this optimization problem, whose main difficulties are its ill-conditioning, its large scale and a cost function that is expensive in terms of CPU time. Thanks to a detailed study of the problem and to numerous numerical experiments, we justify the use of a sequential quadratic programming method, in which the tangential quadratic programs are solved by an original augmented Lagrangian method. We show the global linear convergence of the latter. The efficiency and robustness of the approach are demonstrated on several synthetic examples and on two real data cases. (author)
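The augmented Lagrangian idea can be sketched on a tiny equality-constrained least-squares problem (illustrative Python; the thesis applies the method inside an SQP framework to tomography-scale problems, and the toy problem below is hypothetical):

```python
# minimize (x1 - 2)^2 + (x2 - 1)^2   subject to   x1 + x2 = 2
# Analytic solution: (1.5, 0.5).
def h(x):                                   # constraint residual
    return x[0] + x[1] - 2.0

x, lam, rho = [0.0, 0.0], 0.0, 10.0         # multiplier and penalty parameter
for _ in range(30):                         # outer multiplier updates
    for _ in range(300):                    # inner gradient descent on L_A
        c = h(x)
        g = [2.0 * (x[0] - 2.0) + lam + rho * c,
             2.0 * (x[1] - 1.0) + lam + rho * c]
        x = [x[0] - 0.04 * g[0], x[1] - 0.04 * g[1]]
    lam += rho * h(x)                       # first-order multiplier update
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied exactly without driving the penalty parameter to infinity, which helps with the ill-conditioning the thesis highlights.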
Energy Technology Data Exchange (ETDEWEB)
Colle, C.; Van den Berge, D.; De Wagter, C.; Fortan, L.; Van Duyse, B.; De Neve, W.
1995-12-01
The design of 3D-conformal dose distributions for targets with concave outlines is a technical challenge in conformal radiotherapy. For these targets, it is impossible to find beam incidences for which the target volume can be isolated from the tissues at risk. Commonly occurring examples are most thyroid cancers and targets located at the lower neck and upper mediastinal levels related to some head and neck cancers. A solution to this problem was developed using beam intensity modulation executed with a multileaf collimator by applying a static beam-segmentation technique. The method includes the definition of beam incidences and beam segments of specific shape as well as the calculation of segment weights. Tests on Sherouse's GRATIS™ planning system allowed the dose to these targets to be escalated to 65-70 Gy without exceeding spinal cord tolerance. Further optimization by constrained matrix inversion was investigated to explore the possibility of further dose escalation.
Optimization-Based Wearable Tactile Rendering.
Perez, Alvaro G; Lobo, Daniel; Chinello, Francesco; Cirio, Gabriel; Malvezzi, Monica; Martin, Jose San; Prattichizzo, Domenico; Otaduy, Miguel A
2017-01-01
Novel wearable tactile interfaces offer the possibility to simulate tactile interactions with virtual environments directly on our skin. But, unlike kinesthetic interfaces, for which haptic rendering is a well-explored problem, they pose new questions about the formulation of the rendering problem. In this work, we propose a formulation of tactile rendering as an optimization problem, which is general for a large family of tactile interfaces. Based on an accurate simulation of contact between a finger model and the virtual environment, we pose tactile rendering as the optimization of the device configuration, such that the contact surface between the device and the actual finger matches the contact surface in the virtual environment as closely as possible. We describe the optimization formulation in general terms, and we also demonstrate its implementation on a thimble-like wearable device. We validate the tactile rendering formulation by analyzing its force error, and we show that it outperforms other approaches.
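The rendering-as-optimization idea can be sketched in a one-degree-of-freedom form (illustrative Python; the linear device/skin model, its gain and the brute-force search are hypothetical simplifications of the paper's contact simulation):

```python
# Choose the device actuation u so that the skin indentation produced by
# the wearable device matches the contact depth computed in the virtual
# environment as closely as possible.
def indentation(u):
    return 0.8 * u                       # mm of indentation per unit actuation

target_depth = 1.2                       # mm, from the virtual contact simulation

# brute-force search over the device's feasible actuation range [0, 5]
u_best = min((k * 0.001 for k in range(0, 5001)),
             key=lambda u: (indentation(u) - target_depth) ** 2)
```

The real formulation optimizes a full device configuration against a simulated contact surface rather than a scalar depth, but the objective has the same matching structure.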
Chicken Swarm Optimization Based on Elite Opposition-Based Learning
Directory of Open Access Journals (Sweden)
Chiwen Qu
2017-01-01
Full Text Available Chicken swarm optimization is a new intelligent bionic algorithm that simulates a chicken swarm searching for food in nature. The basic algorithm is prone to falling into local optima and has a slow convergence rate. To address these deficiencies, an improved chicken swarm optimization algorithm based on elite opposition-based learning is proposed. In the cock swarm, random search based on an adaptive t distribution is adopted in place of search based on the Gaussian distribution, so as to balance the algorithm's global exploration and local exploitation abilities. In the hen swarm, elite opposition-based learning is introduced to promote population diversity. A dimension-by-dimension greedy search is used for local search around the optimal individual of the chicken swarm in order to improve optimization precision. According to the test results on 18 standard test functions and 2 engineering structure optimization problems, this algorithm achieves better optimization precision and speed compared with the basic chicken algorithm and other intelligent optimization algorithms.
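The elite opposition-based learning step can be sketched as follows (illustrative Python; a sphere cost and a plain population stand in for the hen swarm, and the reflection formula follows the common dynamic-bound form, which may differ in detail from the paper's):

```python
import random

random.seed(6)

def f(x):                                # toy cost (sphere)
    return sum(v * v for v in x)

pop = [[random.uniform(-10.0, 10.0) for _ in range(2)] for _ in range(10)]

# Elite opposition-based learning: reflect the elite individuals through
# the dynamic bounds of the current population and keep any improvement.
lo = [min(p[d] for p in pop) for d in range(2)]
hi = [max(p[d] for p in pop) for d in range(2)]
elites = sorted(pop, key=f)[:3]
improved = []
for e in elites:
    k = random.random()
    opp = [k * (lo[d] + hi[d]) - e[d] for d in range(2)]   # opposite point
    improved.append(min(e, opp, key=f))                    # greedy selection
```

Because the opposite point lands on the far side of the population's current span, the step injects diversity without abandoning a good elite: the original is kept whenever its opposite is worse.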
Sun, Kang
As the third most abundant nitrogen species in the atmosphere, ammonia (NH3) is a key component of the global nitrogen cycle. Since the industrial revolution, humans have more than doubled the emissions of NH3 to the atmosphere by industrial nitrogen fixation, revolutionizing agricultural practices, and burning fossil fuels. NH3 is a major precursor to fine particulate matter (PM2.5), which has adverse impacts on air quality and human health. The direct and indirect aerosol radiative forcings currently constitute the largest uncertainties for future climate change predictions. Gas and particle phase NH3 eventually deposits back to the Earth's surface as reactive nitrogen, leading to the exceedance of ecosystem critical loads and perturbation of ecosystem productivity. Large uncertainties still remain in estimating the magnitude and spatiotemporal patterns of NH3 emissions from all sources and over a range of scales. These uncertainties in emissions also propagate to the deposition of reactive nitrogen. To improve our understanding of NH3 emissions, observational constraints are needed from local to global scales. The first part of this thesis is to provide quality-controlled, reliable NH3 measurements in the field using an open-path, quantum cascade laser-based NH3 sensor. As the second and third part of my research, NH3 emissions were quantified from a cattle feedlot using eddy covariance (EC) flux measurements, and the similarities between NH3 turbulent fluxes and those of other scalars (temperature, water vapor, and CO2) were investigated. The fourth part involves applying a mobile laboratory equipped with the open-path NH3 sensor and other important chemical/meteorological measurements to quantify fleet-integrated NH3 emissions from on-road vehicles. In the fifth part, the on-road measurements were extended to multiple major urban areas in both the US and China in the context of five observation campaigns. The results significantly improved current urban NH3
Directory of Open Access Journals (Sweden)
Nouara Yahi
Full Text Available Membrane lipids play a pivotal role in the pathogenesis of Alzheimer's disease, which is associated with conformational changes, oligomerization and/or aggregation of Alzheimer's beta-amyloid (Abeta peptides. Yet conflicting data have been reported on the respective effects of cholesterol and glycosphingolipids (GSLs on the supramolecular assembly of Abeta peptides. The aim of the present study was to unravel the molecular mechanisms by which cholesterol modulates the interaction between Abeta(1-40 and chemically defined GSLs (GalCer, LacCer, GM1, GM3. Using the Langmuir monolayer technique, we show that Abeta(1-40 selectively binds to GSLs containing a 2-OH group in the acyl chain of the ceramide backbone (HFA-GSLs. In contrast, Abeta(1-40 did not interact with GSLs containing a nonhydroxylated fatty acid (NFA-GSLs. Cholesterol inhibited the interaction of Abeta(1-40 with HFA-GSLs, through dilution of the GSL in the monolayer, but rendered the initially inactive NFA-GSLs competent for Abeta(1-40 binding. Both crystallographic data and molecular dynamics simulations suggested that the active conformation of HFA-GSL involves a H-bond network that restricts the orientation of the sugar group of GSLs in a parallel orientation with respect to the membrane. This particular conformation is stabilized by the 2-OH group of the GSL. Correspondingly, the interaction of Abeta(1-40 with HFA-GSLs is strongly inhibited by NaF, an efficient competitor of H-bond formation. For NFA-GSLs, it is the OH group of cholesterol that constrains the glycolipid to adopt the active L-shape conformation compatible with sugar-aromatic CH-pi stacking interactions involving residue Y10 of Abeta(1-40. We conclude that cholesterol can either inhibit or facilitate membrane-Abeta interactions through fine tuning of glycosphingolipid conformation. These data shed some light on the complex molecular interplay between cell surface GSLs, cholesterol and Abeta peptides, and on the
Directory of Open Access Journals (Sweden)
Joaquin Aranda
2013-08-01
Full Text Available In this paper, we address the problem of determining the optimal geometric configuration of an acoustic sensor network that will maximize the angle-related information available for underwater target positioning. In the set-up adopted, a set of autonomous vehicles carries a network of acoustic units that measure the elevation and azimuth angles between a target and each of the receivers on board the vehicles. It is assumed that the angle measurements are corrupted by white Gaussian noise, the variance of which is distance-dependent. Using tools from estimation theory, the problem is converted into that of minimizing, by proper choice of the sensor positions, the trace of the inverse of the Fisher Information Matrix (also called the Cramer-Rao Bound matrix to determine the sensor configuration that yields the minimum possible covariance of any unbiased target estimator. It is shown that the optimal configuration of the sensors depends explicitly on the intensity of the measurement noise, the constraints imposed on the sensor configuration, the target depth and the probabilistic distribution that defines the prior uncertainty in the target position. Simulation examples illustrate the key results derived.
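The Fisher-information criterion used above can be sketched in a reduced form (illustrative Python; 2D bearing-only measurements with constant noise variance are a simplification of the paper's 3D elevation/azimuth setting with distance-dependent noise, and the sensor layouts are hypothetical):

```python
import math

# The optimal sensor geometry minimises the trace of the inverse Fisher
# Information Matrix, i.e. the Cramer-Rao bound on target position error.
def crb_trace(sensors, target, sigma=0.01):
    F = [[0.0, 0.0], [0.0, 0.0]]
    for px, py in sensors:
        dx, dy = target[0] - px, target[1] - py
        r2 = dx * dx + dy * dy
        gx, gy = -dy / r2, dx / r2          # gradient of bearing w.r.t. target
        F[0][0] += gx * gx / sigma ** 2
        F[0][1] += gx * gy / sigma ** 2
        F[1][0] += gx * gy / sigma ** 2
        F[1][1] += gy * gy / sigma ** 2
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[0][0] + F[1][1]) / det        # trace of the 2x2 inverse

target = (0.0, 0.0)
spread = [(math.cos(a), math.sin(a)) for a in (0.0, 2.1, 4.2)]     # ~120 deg apart
clustered = [(math.cos(a), math.sin(a)) for a in (0.0, 0.1, 0.2)]  # bunched together
```

Sensors spread around the target give a well-conditioned information matrix, while clustered sensors make it nearly singular, which is exactly why the optimal configuration depends on the geometry.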
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for profile likelihood calculation developed by Chen and Jennrich (2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe a good accuracy of the method as well as a significant speed up as compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
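The integration idea, tracing the profile by solving a dynamical system instead of re-optimizing at every grid point, can be illustrated in its simplest scalar form. The sketch below assumes a two-parameter quadratic negative log-likelihood (so the optimality-system ODE has a constant right-hand side and the exact profile is known in closed form); the Hessian values and grid are invented for illustration and have nothing to do with the PDE setting of the paper.

```python
def profile_path(hess, t1_end, steps=100):
    """Trace the profile-likelihood path theta2*(theta1) for a quadratic
    negative log-likelihood 0.5 * x^T H x by Euler integration of the
    optimality-system ODE d(theta2)/d(theta1) = -H12 / H22."""
    h12, h22 = hess[0][1], hess[1][1]
    t1, t2 = 0.0, 0.0                 # start at the (known) MLE
    dt = t1_end / steps
    path = [(t1, t2)]
    for _ in range(steps):
        t1 += dt
        t2 += dt * (-h12 / h22)       # follow the profile instead of re-optimizing
        path.append((t1, t2))
    return path

H = [[2.0, 1.0], [1.0, 3.0]]          # invented Hessian of the NLL
path = profile_path(H, t1_end=1.0)
# for this quadratic model the exact profile is theta2 = -theta1 / 3
```

Each point on `path` is a profile point that a repeated-optimization approach would obtain by solving a fresh inner minimization; here one cheap ODE step replaces each inner solve.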
Splines and polynomial tools for flatness-based constrained motion planning
Suryawan, Fajar; De Doná, José; Seron, María
2012-08-01
This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, where two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In every case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost indistinguishable trajectories.
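A minimal flavour of flat-output rest-to-rest planning can be given without the article's B-spline/QP machinery: below, the flat output is parametrised by a quintic polynomial (the simplest degenerate spline satisfying rest-to-rest boundary conditions), and a hypothetical velocity bound is checked by sampling. The bound and time horizon are invented for illustration.

```python
def rest_to_rest(y0, yf, T):
    """Quintic polynomial flat-output trajectory with zero velocity and
    zero acceleration at both endpoints (rest-to-rest motion)."""
    d = yf - y0
    def y(t):
        s = t / T
        return y0 + d * (10 * s**3 - 15 * s**4 + 6 * s**5)
    def ydot(t):
        s = t / T
        return d * (30 * s**2 - 60 * s**3 + 30 * s**4) / T
    return y, ydot

y, ydot = rest_to_rest(0.0, 1.0, 2.0)
# sample the trajectory to check a hypothetical velocity bound |ydot| <= 1
vmax = max(abs(ydot(2.0 * k / 100)) for k in range(101))
```

In the article, a richer B-spline basis plays the role of the polynomial here, and the constraint check becomes exact linear constraints in a quadratic program rather than sampling.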
Directory of Open Access Journals (Sweden)
Chuang Lin
2013-01-01
Full Text Available Different kernels yield different class discrimination owing to the different geometrical structures they induce on the data in the feature space. In this paper, a method of kernel optimization that maximizes a measure of class separability in the empirical feature space, combined with a sparse representation-based classifier (SRC), is proposed to solve the problem of automatically choosing kernel functions and their parameters in kernel learning. The proposed method first adopts a so-called data-dependent kernel to generate an efficient kernel optimization algorithm. Then, a constrained optimization problem is formulated and solved with a general gradient descent method to find combination coefficients that vary with the input data. After that, optimized kernel PCA (KOPCA) is obtained via the combination coefficients to extract features. Finally, the sparse representation-based classifier is used to perform the pattern classification task. Experimental results on MSTAR SAR images show the effectiveness of the proposed method.
Particle swarm optimization based optimal bidding strategy in an ...
African Journals Online (AJOL)
compared with the Genetic Algorithm (GA) approach. Test results indicate that the proposed algorithm outperforms the GA approach with respect to total profit and convergence time. Keywords: Electricity Market, Market Clearing Price (MCP), Optimal bidding strategy, Particle Swarm Optimization (PSO).
Directory of Open Access Journals (Sweden)
Wuttinan Nunkaew
2013-12-01
Full Text Available At present, methods for solving manufacturing cell formation with the assignment of duplicated machines involve many steps. First, part families and machine cells are determined. Then, the incidence matrix of the cell formation is reconsidered for machine duplication, to reduce the interaction between cells within a cost restriction. This procedure is laborious and complicated; moreover, machine setup cost should be considered simultaneously with the cell-formation decision. In this paper, an effective lexicographic fuzzy multi-objective optimization model for manufacturing cell formation with a setup cost constraint on machine duplication is presented. Based on the perfect grouping concept, two crucial performance measures, exceptional elements and void elements, are utilized in the proposed model. Lexicographic fuzzy goal programming is applied to solve this multi-objective model with the setup cost constraint, so the decision maker can easily solve the manufacturing cell formation and control the setup cost of machine duplication simultaneously.
Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method.
Directory of Open Access Journals (Sweden)
Haiqing Yu
Full Text Available This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE) to rebin the 3D data into a stack of ordinary 2D sinograms. The resulting 2D sinograms can then be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce TV-based reconstruction schemes. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and the variance of BOSVS is 80% of that of DF).
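The role of the TV term can be illustrated with a much simpler solver than BOSVS: plain gradient descent on a smoothed 1D TV-regularized denoising objective. This is only a sketch of the regularization idea; the signal, the perturbation and all parameters are invented, and the sinogram/data-fidelity operator of PET is replaced by the identity.

```python
import math

def tv_denoise_1d(f, lam=0.1, eps=1e-2, step=0.2, iters=300):
    """Gradient descent on the smoothed TV objective
    0.5 * ||u - f||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]        # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps)            # derivative of smoothed |d|
            grad[i] -= lam * w
            grad[i + 1] += lam * w
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

# piecewise-constant "image row" with an alternating perturbation
clean = [0.0] * 10 + [1.0] * 10
noisy = [c + (0.2 if k % 2 else -0.2) for k, c in enumerate(clean)]
denoised = tv_denoise_1d(noisy)
```

Because the clean signal is piecewise constant, the TV term suppresses the alternating perturbation while largely preserving the jump, which is exactly the property the abstract invokes for PET images.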
Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas
2017-08-01
The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that may overload the resource-constrained devices typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted at resource-constrained devices, which reduces the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C) and is the most prominent XML compression mechanism nowadays. Interestingly, the results of the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, while keeping similar compression rates. In conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.
Optimization-based controller design for rotorcraft
Tsing, N.-K.; Fan, M. K. H.; Barlow, J.; Tits, A. L.; Tischler, M. B.
1993-01-01
An optimization-based methodology for linear control system design is outlined by considering the design of a controller for a UH-60 rotorcraft in hover. A wide range of design specifications is taken into account: internal stability, decoupling between longitudinal and lateral motions, handling qualities, and rejection of wind gusts. These specifications are investigated while taking into account physical limitations in the swashplate displacements and rates of displacement. The methodology crucially relies on user-machine interaction for tradeoff exploration.
Roine, Ulrika; Salmi, Juha; Roine, Timo; Wendt, Taina Nieminen-von; Leppämäki, Sami; Rintahaka, Pertti; Tani, Pekka; Leemans, Alexander; Sams, Mikko
2015-01-01
The aim of this study was to investigate potential differences in neural structure in individuals with Asperger syndrome (AS), a high-functioning form of autism spectrum disorder (ASD). The main symptoms of AS are severe impairments in social interaction and restricted or repetitive patterns of behavior, interests or activities. Diffusion-weighted magnetic resonance imaging data were acquired for 14 adult males with AS and 19 age-, sex- and IQ-matched controls. Voxelwise group differences in fractional anisotropy (FA) were studied with tract-based spatial statistics (TBSS). Based on the results of TBSS, a tract-level comparison was performed with constrained spherical deconvolution (CSD)-based tractography, which is able to detect complex (for example, crossing) fiber configurations. In addition, to investigate the relationship between the microstructural changes and the severity of symptoms, we looked for correlations between FA and the Autism Spectrum Quotient (AQ), Empathy Quotient and Systemizing Quotient. TBSS revealed widely distributed local increases in FA bilaterally in individuals with AS, most prominent in the temporal part of the superior longitudinal fasciculus, corticospinal tract, splenium of the corpus callosum, anterior thalamic radiation, inferior fronto-occipital fasciculus (IFO), posterior thalamic radiation, uncinate fasciculus and inferior longitudinal fasciculus (ILF). CSD-based tractography also showed increases in FA in multiple tracts. However, only the difference in the left ILF was significant after a Bonferroni correction. These results were not explained by the complexity of microstructural organization, measured using the planar diffusion coefficient. In addition, we found a correlation between AQ and FA in the right IFO in the whole group. Our results suggest that there are local and tract-level abnormalities in white matter (WM) microstructure in our homogeneous and carefully characterized group of adults with AS, most
Price-based Optimal Control of Electrical Power Systems
Energy Technology Data Exchange (ETDEWEB)
Jokic, A.
2007-09-10
The research presented in this thesis is motivated by the following concern for the operation of future power systems: future power systems will be characterized by significantly increased uncertainties at all time scales and, consequently, their behavior in time will be difficult to predict. In Chapter 2 we present a novel explicit, dynamic, distributed feedback control scheme that utilizes nodal prices for real-time optimal power balance and network congestion control. The term explicit means that the controller is not based on solving an optimization problem on-line; instead, the nodal price updates are based on simple, explicitly defined and easily comprehensible rules. We prove that the developed control scheme, which acts on measurements of the current state of the system, always provides the correct nodal prices. In Chapter 3 we develop a novel, robust, hybrid model predictive control (MPC) scheme for power balance control with hard constraints on line power flows and network frequency deviations. The developed MPC controller acts in parallel with the explicit controller from Chapter 2, and its task is to enforce the constraints during the transient periods following suddenly occurring power imbalances in the system. In Chapter 4 the concept of autonomous power networks is presented as a concise formulation to deal with economic, technical and reliability issues in power systems with a large penetration of distributed generating units. With autonomous power networks as new market entities, we propose a novel operational structure for ancillary service markets. In Chapter 5 we consider the problem of controlling a general linear time-invariant dynamical system to an economically optimal operating point, which is defined by a multiparametric constrained convex optimization problem related to the steady-state operation of the system. The parameters in the optimization problem are values of the exogenous inputs to
Energy Technology Data Exchange (ETDEWEB)
Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ~15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
Figure-of-Merit-Based Area-Constrained Design of Differential Amplifiers
Directory of Open Access Journals (Sweden)
Alpana Agarwal
2008-01-01
Full Text Available A new methodology based on the concept of figure of merit under area constraints is described for designing optimum-performance differential amplifiers. First, a figure of merit is introduced that includes three performance parameters, namely, input-referred noise, differential dc gain, and unity-gain bandwidth. Expressions for these parameters are derived analytically, leading to an expression for the figure of merit. Next, it is shown how these performance parameters vary with the relative allocation of the total available area between the input and load transistors. The figure of merit peaks when roughly 60% to 80% of the available area is allocated to the input transistors. The peak value of the figure of merit is a function of area; however, it is independent of the biasing current (and, therefore, of power consumption), subject to the minimum current (and, therefore, minimum power) required to keep all the transistors biased in the saturation region. The peak figure of merit and the minimum power required to achieve it are also plotted as functions of area. These analyses help in synthesizing optimal differential amplifier circuit designs under area constraints.
Smell Detection Agent Based Optimization Algorithm
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied to a variety of path-based problems under different computational constraints, and its implementation can be treated as a shortest-path problem over a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph, and the algorithm is useful for NP-hard problems related to path discovery as well as for many practical optimization problems. The derivation of the algorithm extends naturally to general shortest-path problems.
PSO Based Optimization of Testing and Maintenance Cost in NPPs
Directory of Open Access Journals (Sweden)
Qiang Chou
2014-01-01
Full Text Available Testing and maintenance activities of safety equipment have drawn much attention in Nuclear Power Plants (NPPs) for risk and cost control. The testing and maintenance activities are often implemented in compliance with the technical specification and maintenance requirements. Technical specification and maintenance-related parameters, that is, allowed outage time (AOT), maintenance period and duration, and so forth, are associated with controlling the risk level and operating cost of an NPP, which need to be minimized. The above problem can be formulated as a constrained multiobjective optimization model, a form widely used in many other engineering problems. Particle swarm optimization (PSO) has proved its capability to solve these kinds of problems. In this paper, we adopt PSO to solve the multiobjective optimization problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Numerical results demonstrate the efficiency of our proposed algorithm.
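The kind of constrained search described above can be sketched with a textbook global-best PSO, handling the constraint by a quadratic penalty. This is not the paper's formulation: the one-variable cost/risk surrogate (cost falling and risk rising with the test interval) and every number in it are invented for illustration.

```python
import random

def pso(obj, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# invented surrogate: cost rises as the test interval shrinks, risk as it grows;
# the risk limit is handled with a quadratic penalty
def objective(x):
    interval = x[0]
    cost = 100.0 / interval
    risk = 0.02 * interval
    penalty = 1e4 * max(0.0, risk - 1.0) ** 2   # constraint: risk <= 1.0
    return cost + risk + penalty

best, best_f = pso(objective, [(1.0, 100.0)])
```

The unconstrained optimum of this surrogate violates the risk limit, so the penalty drives the swarm to the constraint boundary near an interval of 50, which is the typical shape of the risk-versus-cost tradeoff the abstract describes.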
Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel
2005-01-01
The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models For Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is given in Ref. [2], this Paper presents the Application saga of that effort.
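The binary-integer-programming idea behind such mission selection can be shown on a toy knapsack-style instance. Real BIP models use dedicated solvers; here brute-force enumeration suffices because the instance is tiny, and the mission names, costs and return scores are all invented for illustration (they are not ATLAS outputs).

```python
from itertools import combinations

def best_mission_set(missions, budget):
    """Exhaustive solution of a tiny 0-1 mission-selection problem:
    maximize total scientific return subject to an annual budget cap."""
    best_value, best_set = 0.0, ()
    names = list(missions)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(missions[m][0] for m in subset)
            value = sum(missions[m][1] for m in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset
    return set(best_set), best_value

# (cost in $B, scientific-return score) -- purely illustrative numbers
missions = {"lunar_depot": (3.0, 5.0), "mars_sample": (4.0, 7.0),
            "asteroid_probe": (1.5, 2.5), "space_telescope": (2.0, 4.0)}
chosen, value = best_mission_set(missions, budget=7.0)
```

Each mission is a 0-1 decision variable and the budget is a linear constraint, which is exactly the BIP structure; a multi-year plan adds one such decision per mission per year plus precedence constraints, at which point an actual integer-programming solver replaces enumeration.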
Artifact reduction in short-scan CBCT by use of optimization-based reconstruction
Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan
2017-01-01
Increasing interest in optimization-based reconstruction exists in research on, and applications of, cone-beam computed tomography (CBCT), because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp-Davis-Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction, specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. The results reveal that appropriately designed optimization-based reconstruction, including image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
Energy Technology Data Exchange (ETDEWEB)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn [Institute of Natural Sciences, Department of Mathematics, and MOE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China); Lin, Guang, E-mail: lin491@purdue.edu [Department of Mathematics, School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States); Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Yang, Xu, E-mail: xuyang@math.ucsb.edu [Department of Mathematics, University of California, Santa Barbara, CA 93106 (United States)
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach the global minimum. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus renders an extremely high computational demand for the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of the velocity field where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
Improved Biogeography-Based Optimization Based on Affinity Propagation
Directory of Open Access Journals (Sweden)
Zhihao Wang
2016-07-01
Full Text Available To improve the search ability of biogeography-based optimization (BBO), this work proposes an improved biogeography-based optimization based on affinity propagation (MBBO). We introduce the memetic framework into the BBO algorithm and use the simulated annealing algorithm as the local search strategy. MBBO enhances exploration with the affinity propagation strategy to improve the transfer operation of the BBO algorithm. In this work, the MBBO algorithm was applied to the IEEE Congress on Evolutionary Computation (CEC) 2015 benchmark optimization problems for an analytic comparison with the first three winners of the CEC 2015 competition. The results show that the MBBO algorithm enhances exploration, exploitation, convergence speed and solution accuracy, and can emerge as the best solution-providing algorithm among the competing algorithms.
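The BBO mechanism that both this abstract and the head article build on can be sketched in its basic form: ranked habitats exchange solution features through rank-based immigration/emigration rates, with elitism and random mutation. This is plain BBO only (none of MBBO's affinity-propagation or memetic components), and the population sizes, rates and sphere test function are invented for illustration.

```python
import random

def bbo(obj, dim, n=20, iters=100, pmut=0.05, lo=-5.0, hi=5.0, seed=3):
    """Minimal biogeography-based optimization on a box-bounded problem."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        pop.sort(key=obj)                       # best habitat first
        mu = [(n - i) / n for i in range(n)]    # emigration: high for good habitats
        lam = [1.0 - m for m in mu]             # immigration: high for poor habitats
        new_pop = [pop[0][:]]                   # elitism: keep the best habitat
        for i in range(1, n):
            h = pop[i][:]
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-wheel selection of an emigrating habitat
                    r, acc, src = rng.random() * sum(mu), 0.0, 0
                    for j in range(n):
                        acc += mu[j]
                        if acc >= r:
                            src = j
                            break
                    h[d] = pop[src][d]          # migrate the feature (SIV)
                if rng.random() < pmut:
                    h[d] = rng.uniform(lo, hi)  # random mutation
            new_pop.append(h)
        pop = new_pop
    return min(pop, key=obj)

best = bbo(lambda x: sum(v * v for v in x), dim=5)
```

MBBO replaces the random mutation and part of this migration machinery with affinity-propagation-guided transfer plus simulated-annealing local search, which is where the reported improvements come from.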
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the computational complexity of the optimization process and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity, augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method, which exhibits much faster convergence than the SD algorithm. The image formation process is represented by the Fourier series expansion model, which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, an MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
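The fast convergence that motivates the CG choice can be illustrated on the simplest setting where CG is exact: a small symmetric positive-definite linear system (equivalently, an unconstrained quadratic), on which linear CG terminates in at most n steps. This is a generic sketch, not the paper's PBOPC cost function; the matrix and right-hand side are invented.

```python
def conjugate_gradient(A, b, iters=None, tol=1e-10):
    """Linear conjugate gradient for A x = b, A symmetric positive-definite;
    converges in at most n iterations in exact arithmetic."""
    n = len(b)
    iters = iters or n
    x = [0.0] * n
    r = b[:]                            # residual b - A x for x = 0
    p = r[:]                            # first search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        # new direction is A-conjugate to all previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```

Steepest descent on the same system would take many more iterations for an ill-conditioned A, which is the gap the paper exploits in the (nonlinear, regularized) OPC setting via nonlinear CG.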
PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS
Directory of Open Access Journals (Sweden)
Constantin D. STANESCU
2016-05-01
Full Text Available The paper presents an original method for optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, combining the EnKF with information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies provide more accurate parameter estimation and state prediction than conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated: overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to other hydrological problems.
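The EnKF analysis step at the core of the method can be sketched for a single scalar parameter with perturbed observations. This is a generic textbook EnKF update, not the paper's flow model: the linear observation operator, the prior spread and all numbers are invented for illustration.

```python
import random

def enkf_update(theta, h, y_obs, r_var, rng):
    """One EnKF analysis step for a scalar parameter: the Kalman gain is
    built from sample (co)variances of the ensemble and its predictions."""
    n = len(theta)
    hx = [h(t) for t in theta]
    tbar = sum(theta) / n
    hbar = sum(hx) / n
    c_th = sum((theta[i] - tbar) * (hx[i] - hbar) for i in range(n)) / (n - 1)
    c_hh = sum((v - hbar) ** 2 for v in hx) / (n - 1)
    gain = c_th / (c_hh + r_var)
    # perturbed-observation update: each member sees a noisy copy of y_obs
    return [theta[i] + gain * (y_obs + rng.gauss(0.0, r_var ** 0.5) - hx[i])
            for i in range(n)]

rng = random.Random(0)
truth = 3.0
h = lambda t: 2.0 * t                               # assumed observation operator
theta = [rng.gauss(0.0, 2.0) for _ in range(200)]   # prior ensemble
y = h(truth) + rng.gauss(0.0, 0.1)                  # one noisy measurement
posterior = enkf_update(theta, h, y, r_var=0.01, rng=rng)
```

In the SEOD setting, an optimal-design loop would decide *where* (i.e., with which operator h) to take the next measurement, scoring candidates by information metrics such as SD, DFS or RE before applying this update.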
National Research Council Canada - National Science Library
Dennis, John E; El-Alem, Mahmoud; Maciel, Maria C
1995-01-01
.... The normal component need not be computed accurately. The theory requires the quasi-normal component to satisfy a fraction-of-Cauchy-decrease condition on the quadratic model of the linearized constraints...
Directory of Open Access Journals (Sweden)
Ning Dong
2014-01-01
functions are executed, and comparisons with five state-of-the-art algorithms are made. The results illustrate that the proposed algorithm is competitive with and in some cases superior to the compared ones in terms of the quality, efficiency, and the robustness of the obtained results.
GPU-based ultrafast IMRT plan optimization.
Men, Chunhua; Gu, Xuejun; Choi, Dongju; Majumdar, Amitava; Zheng, Ziyi; Mueller, Klaus; Jiang, Steve B
2009-11-07
The widespread adoption of on-board volumetric imaging in cancer radiotherapy has stimulated research efforts to develop online adaptive radiotherapy techniques to handle the inter-fraction variation of the patient's geometry. Such efforts face major technical challenges to perform treatment planning in real time. To overcome this challenge, we are developing a supercomputing online re-planning environment (SCORE) at the University of California, San Diego (UCSD). As part of the SCORE project, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) optimization algorithm on graphics processing units (GPUs). We adopt a penalty-based quadratic optimization model, which is solved by using a gradient projection method with Armijo's line search rule. Our optimization algorithm has been implemented in CUDA for parallel GPU computing as well as in C for serial CPU computing for comparison purposes. A prostate IMRT case with various beamlet and voxel sizes was used to evaluate our implementation. On an NVIDIA Tesla C1060 GPU card, we achieved speedup factors of 20-40 without losing accuracy, compared to the results from an Intel Xeon 2.27 GHz CPU. For a specific nine-field prostate IMRT case with 5 × 5 mm² beamlet size and 2.5 × 2.5 × 2.5 mm³ voxel size, our GPU implementation takes only 2.8 s to generate an optimal IMRT plan. Our work has therefore solved a major problem in developing online re-planning technologies for adaptive radiotherapy.
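The named optimizer, gradient projection with Armijo's line search rule, can be sketched on a tiny box-constrained quadratic. This is not the paper's CUDA implementation: the "target" vector standing in for a dose objective and the box bounds standing in for beamlet intensity limits are invented for illustration.

```python
def project(x, lo, hi):
    """Project onto the box [lo, hi] componentwise."""
    return [min(max(v, lo), hi) for v in x]

def gradient_projection(grad, f, x0, lo, hi, iters=100, beta=0.5, sigma=1e-4):
    """Projected gradient descent with Armijo backtracking line search."""
    x = project(x0, lo, hi)
    for _ in range(iters):
        g = grad(x)
        t = 1.0
        while True:
            x_new = project([x[i] - t * g[i] for i in range(len(x))], lo, hi)
            # Armijo sufficient-decrease test on the projected step
            decrease = sum(g[i] * (x[i] - x_new[i]) for i in range(len(x)))
            if f(x_new) <= f(x) - sigma * decrease or t < 1e-8:
                break
            t *= beta                   # shrink the step and retry
        x = x_new
    return x

# toy "fluence" problem: quadratic penalty toward a target, intensities in [0, 1]
target = [0.3, 1.5, -0.2, 0.8]
f = lambda x: 0.5 * sum((x[i] - target[i]) ** 2 for i in range(4))
grad = lambda x: [x[i] - target[i] for i in range(4)]
x = gradient_projection(grad, f, [0.5] * 4, 0.0, 1.0)
```

The minimizer is simply the target clipped into the box; in the IMRT setting each projected-gradient step is a large matrix-vector product, which is the part the paper parallelizes on the GPU.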
Model-based dynamic control and optimization of gas networks
Energy Technology Data Exchange (ETDEWEB)
Hofsten, Kai
2001-07-01
This work contributes to the research on control, optimization and simulation of gas transmission systems to support the dispatch personnel at gas control centres in decision making during the daily operation of natural gas transportation systems. Different control and optimization strategies have been studied. The focus is on the operation of long distance natural gas transportation systems. Stationary optimization in conjunction with linear model predictive control using state space models is proposed for supply security, the control of quality parameters and minimization of transportation costs for networks offering transportation services. The result from the stationary optimization, together with a reformulation of a simplified fluid flow model, formulates a linear dynamic optimization model. This model is used in a finite-time control and state constrained linear model predictive controller. The deviation from the control and state references determined by the stationary optimization is penalized quadratically. Because of the time-varying status of the infrastructure, the control space is also generally time varying. When the average load is expected to change considerably, a new stationary optimization is performed, giving a new state and control reference together with a new dynamic model that is used for both optimization and state estimation. Another proposed control strategy is a control and output constrained nonlinear model predictive controller for the operation of gas transmission systems. Here, the objectives are again security of supply, quality control and minimization of transportation costs. An output vector is defined, which together with a control vector is penalized quadratically with respect to their references in the objective function. The nonlinear model predictive controller can be combined with a stationary optimization. At each sampling instant, a non-convex nonlinear programming problem is solved, giving a local minimum.
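The quadratic penalization of deviations from the stationary references described above corresponds to a standard MPC stage cost. A minimal sketch (the weighting matrices Q and R and the reference values are illustrative assumptions):

```python
import numpy as np

def mpc_stage_cost(x, u, x_ref, u_ref, Q, R):
    """Quadratic penalty on deviations of state x and control u from the
    references produced by the stationary optimization (illustrative)."""
    dx, du = x - x_ref, u - u_ref
    return dx @ Q @ dx + du @ R @ du   # (x-x_ref)'Q(x-x_ref) + (u-u_ref)'R(u-u_ref)
```

Summing this cost over the prediction horizon yields the quadratic MPC objective described in the abstract.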
Exact methods for time constrained routing and related scheduling problems
DEFF Research Database (Denmark)
Kohl, Niklas
1995-01-01
This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers.
Nguyen, Hoai-Nam
2014-01-01
A comprehensive development of interpolating control, this monograph demonstrates the reduced computational complexity of a ground-breaking technique compared with the established model predictive control. The text deals with the regulation problem for linear, time-invariant, discrete-time uncertain dynamical systems having polyhedral state and control constraints, with and without disturbances, and under state or output feedback. For output feedback a non-minimal state-space representation is used with old inputs and outputs as state variables. Constrained Control of Uncertain, Time-Varying, Discrete-time Systems details interpolating control in both its implicit and explicit forms. In the former, at most two linear-programming problems or one quadratic-programming problem are solved on-line at each sampling instant to yield the value of the control variable. In the latter the control law is shown to be piecewise affine in the state, and so the state space is partitioned into polyhedral cells so that at each sampling ...
Directory of Open Access Journals (Sweden)
Xinbin Li
2017-12-01
Full Text Available Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, the particle effectiveness is enhanced to avoid the “particle degeneracy” problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping
2017-12-21
Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, the particle effectiveness is enhanced to avoid the "particle degeneracy" problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
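The particle weight update and degeneracy handling outlined in the abstract follow the standard bootstrap particle filter pattern. Below is a generic sketch in which the paper's LSSVR-learned observation function is abstracted as a user-supplied obs_fn; the Gaussian likelihood, the effective-sample-size threshold, and multinomial resampling are assumptions of this sketch:

```python
import numpy as np

def pf_update(particles, weights, observation, obs_fn, sigma):
    """One measurement update of a bootstrap particle filter (generic sketch;
    the paper replaces obs_fn with an LSSVR-learned observation function)."""
    pred = np.array([obs_fn(p) for p in particles])
    lik = np.exp(-0.5 * ((observation - pred) / sigma) ** 2)  # assumed Gaussian likelihood
    w = weights * lik
    w /= w.sum()                                              # normalize the weights
    n_eff = 1.0 / np.sum(w ** 2)                              # effective sample size
    if n_eff < 0.5 * len(particles):                          # degeneracy guard: resample
        idx = np.random.choice(len(particles), len(particles), p=w)
        particles = particles[idx]
        w = np.full(len(particles), 1.0 / len(particles))
    return particles, w
```

Particles whose predicted observation matches the measurement gain weight, which is what keeps the filter away from the degeneracy problem the abstract mentions.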
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions trading off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), the Non-dominated Sorting Genetic Algorithm (NSGA-II), the Indicator-based Evolutionary Algorithm (IBEA), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that a good reduction in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
Giva, Karen R N; Duma, Sinegugu E
2015-08-31
Problem-based learning (PBL) was introduced in Malawi in 2002 in order to improve the nursing education system and respond to the acute shortage of nursing human resources. However, its implementation has been very slow throughout the country. The objectives of the study were to explore and describe the goals that were identified by the college to facilitate the implementation of PBL, the resources of the organisation that facilitated the implementation of PBL, the factors related to sources of students that facilitated the implementation of PBL, and the influence of the external system of the organisation on facilitating the implementation of PBL, and to identify critical success factors that could guide the implementation of PBL in nursing education in Malawi. This is an ethnographic, exploratory and descriptive qualitative case study. Purposive sampling was employed to select the nursing college, participants and documents for review. Three data collection methods, including semi-structured interviews, participant observation and document reviews, were used to collect data. The four steps of thematic analysis were used to analyse data from all three sources. Four themes and related subthemes emerged from the triangulated data sources. The first three themes and their subthemes relate to the characteristics of successful implementation of PBL in a human resource-constrained nursing college, whilst the last theme relates to critical success factors that contribute to successful implementation of PBL in a human resource-constrained country like Malawi. This article shows that implementation of PBL is possible in a human resource-constrained country if there is political commitment and support.
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977
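For orientation, the DAG scheduling search space that TMSCRO explores can be contrasted with a simple greedy baseline. The sketch below list-schedules a DAG onto identical processors; it is a deliberately naive heuristic, not TMSCRO, and it ignores the heterogeneity and communication costs the paper handles:

```python
def list_schedule(tasks, deps, cost, n_procs):
    """Greedy list scheduling of a DAG onto identical processors.
    tasks: task names; deps: task -> list of predecessor tasks;
    cost: task -> execution time. Returns the makespan (illustrative baseline)."""
    finish = {}                    # task -> finish time
    proc_free = [0.0] * n_procs    # earliest free time per processor
    done = set()
    while len(done) < len(tasks):  # schedule tasks in dependency order
        for t in tasks:
            if t not in done and all(d in done for d in deps.get(t, [])):
                ready = max([finish[d] for d in deps.get(t, [])], default=0.0)
                p = min(range(n_procs), key=lambda i: proc_free[i])  # earliest-free processor
                start = max(ready, proc_free[p])
                finish[t] = start + cost[t]
                proc_free[p] = finish[t]
                done.add(t)
    return max(finish.values())    # makespan of the schedule
```

A metaheuristic such as TMSCRO searches over the task-to-processor assignments and orderings that this greedy rule fixes up front.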
Doubly Constrained Robust Blind Beamforming Algorithm
Directory of Open Access Journals (Sweden)
Xin Song
2013-01-01
Full Text Available We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, which is based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields better signal capture performance, achieves higher array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
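One iteration of the underlying least-squares CMA can be sketched as follows. This is only the basic LSCMA building block; the doubly constrained robust extensions in the paper (Bayesian steering-vector model, worst-case optimization, Lagrange-multiplier updates) are not shown:

```python
import numpy as np

def lscma_step(X, w):
    """One least-squares constant modulus (LSCMA) iteration: hard-limit the
    array output to unit modulus, then least-squares refit the weights.
    X: sensors-by-snapshots data matrix, w: current weight vector (sketch)."""
    y = w.conj().T @ X            # beamformer output over all snapshots
    r = y / np.abs(y)             # project the output onto the unit-modulus circle
    # refit weights so that w^H X best matches the constant-modulus reference r
    w_new, *_ = np.linalg.lstsq(X.conj().T, r.conj(), rcond=None)
    return w_new
```

For a single noise-free constant-modulus source, one step already restores a unit-modulus output.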
Reliability-Based Optimization of Structural Elements
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
In this paper structural elements from an optimization point of view are considered, i.e. only the geometry of a structural element is optimized. Reliability modelling of the structural element is discussed both from an element point of view and from a system point of view. The optimization...
Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode
Directory of Open Access Journals (Sweden)
Kang Zhou
2016-01-01
Full Text Available Service routes optimization (SRO) of a pallet service center must first meet customers’ demand and then, through reasonable route organization, minimize the total vehicle travel distance. The routes optimization of a pallet service center is similar to the distribution problems of the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Based on the relevant research results, the conditions determining the number of vehicles, the one-way nature of the routes, the loading constraints, and time windows are fully considered, and a chance constrained programming model with stochastic constraints is constructed, taking the shortest total path of all vehicles for a delivering (recycling) operation as the objective. For the characteristics of the model, a hybrid intelligent algorithm including stochastic simulation, a neural network, and an immune clonal algorithm is designed to solve the model. Finally, the validity and rationality of the optimization model and algorithm are verified by a case study.
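The stochastic-simulation component of such a hybrid algorithm typically checks chance constraints by Monte Carlo sampling. A toy sketch in which the uniform pallet-demand model, the capacity interpretation, and all constants are invented for illustration:

```python
import random

def chance_constraint_ok(route_length, capacity, alpha=0.95, n_sim=5000, seed=7):
    """Stochastic-simulation check of a chance constraint: the random total
    demand on a route must stay within vehicle capacity with probability
    at least alpha (hypothetical demand model, illustrative only)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_sim):
        # assumed demand: one random pallet load per stop, uniform in [0.8, 1.2]
        demand = sum(rng.uniform(0.8, 1.2) for _ in range(route_length))
        if demand <= capacity:
            ok += 1
    return ok / n_sim >= alpha     # empirical satisfaction probability vs. alpha
```

A candidate route is kept by the hybrid algorithm only if this simulated probability meets the confidence level of the chance constraint.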
An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits
Directory of Open Access Journals (Sweden)
R. de J. Portillo-Vélez
2015-01-01
Full Text Available An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector, while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are considered in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented and stability is established when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.
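The gradient flow approach mentioned above integrates the dynamics x' = -grad F(x) on a penalized objective. A scalar sketch with a hypothetical desired force f_des and force limit f_max; the penalty weight, step size, and iteration count are illustrative:

```python
def gradient_flow(grad, x0, step=0.01, n_steps=2000):
    """Euler-discretized gradient flow x' = -grad(x): a generic sketch of the
    online solver, not the paper's impedance-model implementation."""
    x = x0
    for _ in range(n_steps):
        x = x - step * grad(x)
    return x

def penalized_grad(x, f_des=2.0, f_max=3.0, mu=100.0):
    """Gradient of 0.5*(x - f_des)^2 plus a quadratic penalty for x > f_max
    (hypothetical force-error objective with a force-limit penalty)."""
    g = x - f_des
    if x > f_max:
        g += mu * (x - f_max)   # penalty active only when the force limit is exceeded
    return g
```

Starting above the force limit, the penalty first pulls the force back under f_max, after which the flow settles at the desired force.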
Global stability-based design optimization of truss structures using ...
Indian Academy of Sciences (India)
Sadhana, Volume 38, Issue 1. Global stability-based design optimization of truss structures using multiple objectives. Tugrul Talaslioglu ... Furthermore, a pure pareto-ranking based multi-objective optimization model is employed for the design optimization of the truss structure with multiple objectives.
An optimization method for metamorphic mechanisms based on multidisciplinary design optimization
Directory of Open Access Journals (Sweden)
Zhang Wuxiang
2014-12-01
Full Text Available The optimization of metamorphic mechanisms differs from that of conventional mechanisms because of their multi-configuration characteristics. Complex coupled design variables and constraints exist across the multiple configuration optimization models. To achieve compatible optimized results for these coupled design variables, an optimization method for metamorphic mechanisms is developed in this paper based on the principle of multidisciplinary design optimization (MDO). Firstly, the optimization characteristics of the metamorphic mechanism are summarized by classifying the design variables and constraints and identifying the coupling interactions among its different configuration optimization models. Further, the collaborative optimization technique used in MDO is adopted to achieve overall optimization performance. The whole optimization process is then constructed as a two-level hierarchical scheme with global optimizer and configuration optimizer loops. The method is demonstrated by optimizing a planar five-bar metamorphic mechanism with two configurations, and results show that it achieves coordinated optimization results for the same parameters in different configuration optimization models.
Logic-based methods for optimization combining optimization and constraint satisfaction
Hooker, John
2011-01-01
A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible
Interactive Reliability-Based Optimization of Structural Systems
DEFF Research Database (Denmark)
Pedersen, Claus
In order to introduce the basic concepts within the field of reliability-based structural optimization problems, this chapter is devoted to a brief outline of the basic theories. Therefore, this chapter is of a more formal nature and is used as a basis for the remaining parts of the thesis. In Section 2.2 a general non-linear optimization problem and corresponding terminology are presented, whereupon optimality conditions and the standard form of an iterative optimization algorithm are outlined. Subsequently, the special properties and characteristics concerning structural optimization problems are discussed, and finally the reliability-based structural optimization (RBSO) problem is formulated and described.
Optimizing a Water Simulation based on Wavefront Parameter Optimization
Lundgren, Martin
2017-01-01
DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...
SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET
Energy Technology Data Exchange (ETDEWEB)
Chen, L; Zhou, Z; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)
2016-06-15
Purpose: Accurate segmentation of tumor in PET is challenging when part of tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of rest slices. For a slice that the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under user’s guidance is used to obtain an exact tumor contour. This is set as the initial contour and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints of tumor size, position and shape information. This procedure is repeated until the last slice of PET containing tumor. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.
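The positive predictive value used to score the segmentations is a simple overlap ratio, PPV = TP / (TP + FP). A minimal sketch on voxel-index sets (representing the binary masks as sets is an assumption of this example):

```python
def positive_predictive_value(segmented, ground_truth):
    """PPV = TP / (TP + FP) for binary segmentation masks given as sets of
    voxel indices: the fraction of segmented voxels that are truly tumor."""
    tp = len(segmented & ground_truth)   # segmented voxels that are in the truth
    fp = len(segmented - ground_truth)   # segmented voxels outside the truth
    return tp / (tp + fp)
```

A PPV near 1, as reported per slice in the abstract, means almost every voxel the scheme labels as tumor is correct.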
Constrained superfields in supergravity
Energy Technology Data Exchange (ETDEWEB)
Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)
2016-02-16
We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.
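For context, the best-known constrained superfield is the nilpotent goldstino multiplet; in rigid superspace the constraint and its standard solution read (a textbook result, not the new class introduced in the paper):

```latex
X^2 = 0
\quad\Longrightarrow\quad
X = \frac{G^\alpha G_\alpha}{2F} + \sqrt{2}\,\theta^\alpha G_\alpha + \theta^2 F ,
```

where $G_\alpha$ is the goldstino and $F$ the auxiliary field; the constraint eliminates the independent scalar component of the multiplet.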
Energy Technology Data Exchange (ETDEWEB)
Sabau, Adrian S [ORNL; Mirmiran, Seyed [Fiat Chrysler Automobiles North America; Glaspie, Christopher [Fiat Chrysler Automobiles North America; Li, Shimin [Worcester Polytechnic Institute (WPI), MA; Apelian, Diran [Worcester Polytechnic Institute (WPI), MA; Shyam, Amit [ORNL; Haynes, James A [ORNL; Rodriguez, Andres [Nemak, Garza Garcia, N.L., Mexico
2017-01-01
Hot tearing is a major casting defect that is often difficult to characterize, especially for the multicomponent Al alloys used for cylinder head castings. The susceptibility of multicomponent Al-Cu alloys to hot tearing during permanent mold casting was investigated using a constrained permanent mold in which the load and displacement were measured. The experimental results for hot tearing susceptibility are compared with those obtained from a hot-tearing criterion based on the temperature range between solid fractions of 0.87 and 0.94. The Cu composition was varied from approximately 5 to 8 wt pct. Casting experiments were conducted without grain refining. The measured load during casting can be used to indicate the severity of hot tearing. However, when small hot tears are present, the load variation cannot be used to detect and assess hot-tearing susceptibility.
Prakash, Punit; Chen, Xin; Wootton, Jeffery; Pouliot, Jean; Hsu, I.-Chow; Diederich, Chris J.
2009-02-01
A 3D optimization-based thermal treatment planning platform has been developed for the application of catheter-based ultrasound hyperthermia in conjunction with high dose rate (HDR) brachytherapy for treating advanced pelvic tumors. Optimal selection of applied power levels to each independently controlled transducer segment can be used to conform and maximize therapeutic heating and thermal dose coverage to the target region, providing significant advantages over current hyperthermia technology and improving treatment response. Critical anatomic structures, clinical target outlines, and implant/applicator geometries were acquired from sequential multi-slice 2D images obtained from HDR treatment planning and used to reconstruct patient specific 3D biothermal models. A constrained optimization algorithm was devised and integrated within a finite element thermal solver to determine a priori the optimal applied power levels and the resulting 3D temperature distributions such that therapeutic heating is maximized within the target, while placing constraints on maximum tissue temperature and thermal exposure of surrounding non-targeted tissue. This optimization-based treatment planning and modeling system was applied on representative cases of clinical implants for HDR treatment of cervix and prostate to evaluate the utility of this planning approach. The planning provided significant improvement in achievable temperature distributions for all cases, with substantial increase in T90 and thermal dose (CEM43T90) coverage to the hyperthermia target volume while decreasing maximum treatment temperature and reducing thermal dose exposure to surrounding non-targeted tissues and thermally sensitive rectum and bladder. This optimization-based treatment planning platform with catheter-based ultrasound applicators is a useful tool that has potential to significantly improve the delivery of hyperthermia in conjunction with HDR brachytherapy. The planning platform has been extended
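The thermal dose metric CEM43(T90) quoted above is cumulative equivalent minutes at 43 °C. A sketch of the standard Sapareto-Dean accumulation (the R values are the commonly used constants; the fixed sampling interval is an assumption of this example):

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C (Sapareto-Dean thermal dose) for a
    sequence of temperature samples temps_c taken every dt_min minutes."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25   # standard R constants above/below 43 C
        dose += dt_min * r ** (43.0 - t)
    return dose
```

Each minute spent one degree above 43 °C counts as two equivalent minutes, which is why small temperature gains in the target translate into large dose gains.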
Warehouse Optimization Model Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Guofeng Qin
2013-01-01
Full Text Available This paper takes the Bao Steel automated logistics warehouse system as an example. The premise is to keep the center of gravity of each loaded shelf below half of the shelf height. As a result, the time cost of storing or retrieving goods on the shelf is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and optimized with a genetic algorithm, yielding a local optimal solution. Before optimization, the average time to store or retrieve goods is 4.52996 s, and the average distance between goods of the same kind is 2.35318 m. After optimization, the average time is 4.28859 s, and the average distance is 1.97366 m. From this analysis, we can draw the conclusion that the model improves the efficiency of cargo storage.
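The genetic algorithm step can be illustrated with a minimal binary GA on a toy objective. This is a generic sketch (tournament selection, one-point crossover, bit-flip mutation) with invented parameters, not the paper's multiobjective warehouse model:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, n_gen=60, p_mut=0.02, seed=3):
    """Minimal binary GA maximizing `fitness` over bit strings (illustrative)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gen):
        def tournament():
            a, b = rng.sample(pop, 2)            # binary tournament selection
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [1 - g if rng.random() < p_mut else g for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective: maximize the number of ones (OneMax)
best = genetic_algorithm(sum, 20)
```

In the warehouse setting the bit string would instead encode slot assignments and the fitness would combine access time and same-kind distance.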
Locally-Constrained Region-Based Methods for DW-MRI Segmentation
National Research Council Canada - National Science Library
Melonakos, John; Kubicki, Marek; Niethammer, Marc; Miller, James V; Mohan, Vandana; Tannenbaum, Allen
2007-01-01
.... In this work, we show results for segmenting the cingulum bundle. Finally, we explain how this approach and extensions thereto overcome a major problem that typical region-based flows experience when attempting to segment neural fiber bundles.
Architecture Synthesis for Cost-Constrained Fault-Tolerant Flow-based Biochips
DEFF Research Database (Denmark)
Eskesen, Morten Chabert; Pop, Paul; Potluri, Seetal
2016-01-01
In this paper, we are interested in the synthesis of fault-tolerant architectures for flow-based microfluidic biochips, which use microvalves and channels to run biochemical applications. The growth rate of device integration in flow-based microfluidic biochips is scaling faster than Moore's law. … The proposed algorithm has been evaluated using several benchmarks and compared to the results of a Simulated Annealing metaheuristic.
A Net Energy-based Analysis for a Climate-constrained Sustainable Energy Transition
Sgouridis, Sgouris; Bardi, Ugo; Csala, Denes
2015-01-01
The transition from a fossil-based energy economy to one based on renewable energy is driven by the double challenge of climate change and resource depletion. Building a renewable energy infrastructure requires an upfront energy investment that subtracts from the net energy available to society. This investment is determined by the need to transition to renewable energy fast enough to stave off the worst consequences of climate change and, at the same time, maintain a sufficient net energy flow to sustain the world's economy and population.
Topology Optimization of Metamaterial-Based Electrically Small Antennas
DEFF Research Database (Denmark)
Erentok, Aycan; Sigmund, Ole
2007-01-01
A topology optimized metamaterial-based electrically small antenna configuration that is independent of a specific spherical and/or cylindrical metamaterial shell design is demonstrated. Topology optimization is shown to provide the optimal value and placement of a given ideal metamaterial in space...
Directory of Open Access Journals (Sweden)
Michael P. J. Mahenge
2014-12-01
Full Text Available The advancement in Information and Communication Technologies (ICTs) has brought opportunities for the development of Smart Cities. The Smart City uses ICT to enhance performance and wellbeing, to reduce costs and resource consumption, and to engage more effectively and actively with its citizens. In particular, the education sector is adopting new ways of learning in Higher Education Institutions (HEIs) through e-learning systems. While these opportunities exist, e-learning content delivery and accessibility in third world countries like Tanzania is still a challenge due to resource and network constrained environments. The challenges include: high cost of bandwidth connection and usage; high dependency on the Internet; limited mobility and portability features; inaccessibility during the offline period and shortage of ICT facilities. In this paper, we investigate the use of mobile technology to sustainably support education and skills development particularly in developing countries. Specifically, we propose a Cost-effective Mobile Based Learning Content Delivery system for resource and network constrained environments. This system can be applied to cost-effectively broaden and support education in many cities around the world, which are approaching the 'Smart City' concept in their own way, even with less available technology infrastructure. Therefore, the proposed solution has the potential to reduce the cost of the bandwidth usage, and cut down the server workload and the Internet usage overhead by synchronizing learning contents from some remote server to a local database in the user’s device for offline use. It will also improve the quality of experience and participation of learners as well as facilitate mobility and portability in learning activities, which also supports the all-encompassing learning experience in a Smart City.
A Geographic Optimization Approach to Coast Guard Ship Basing
2015-06-01
forward in this area. 14. SUBJECT TERMS coast guard, ship basing, ship allocation, geographic optimization 15. NUMBER OF PAGES 69 16. PRICE CODE 17...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS A GEOGRAPHIC OPTIMIZATION APPROACH TO COAST GUARD SHIP BASING by Mehmet Ali Gençay June 2015...AND SUBTITLE A GEOGRAPHIC OPTIMIZATION APPROACH TO COAST GUARD SHIP BASING 5. FUNDING NUMBERS 6. AUTHOR(S) Mehmet Ali Gençay 7. PERFORMING ORGANIZATION
A Net Energy-based Analysis for a Climate-constrained Sustainable Energy Transition
Sgouridis, Sgouris; Csala, Denes
2015-01-01
The transition from a fossil-based energy economy to one based on renewable energy is driven by the double challenge of climate change and resource depletion. Building a renewable energy infrastructure requires an upfront energy investment that subtracts from the net energy available to society. This investment is determined by the need to transition to renewable energy fast enough to stave off the worst consequences of climate change and, at the same time, maintain a sufficient net energy flow to sustain the world's economy and population. We show that a feasible transition pathway requires that the rate of investment in renewable energy should accelerate approximately by an order of magnitude if we are to stay within the range of IPCC recommendations.
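The accounting in the abstract, in which an upfront energy investment subtracts from the net energy available to society, can be written as a toy identity. This is a simplification for illustration, not the authors' transition model; the EROEI-based operating-cost term and all numbers are assumptions:

```python
def net_energy(gross_output, invested_build, eroei):
    """Net energy available to society: gross production minus the energy spent
    operating the infrastructure (approximated as gross/eroei) and minus the
    upfront energy invested in building new capacity (toy accounting identity)."""
    return gross_output - gross_output / eroei - invested_build
```

The faster the build-out (larger invested_build), the smaller the net flow left for the economy, which is the tension the paper quantifies.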
Directory of Open Access Journals (Sweden)
Fang Wang
2017-05-01
Full Text Available The tracking control problem of a flexible air-breathing hypersonic vehicle subject to aerodynamic parameter uncertainty and input constraints is investigated by combining a nonlinear disturbance observer with dynamic surface control. To simplify controller design, a control-oriented model is first derived and divided into two subsystems, a velocity subsystem and an altitude subsystem, based on the engineering background of flexible air-breathing hypersonic vehicles. In each subsystem, compounded disturbances are included to account for aerodynamic uncertainty and the effect of the flexible modes. A disturbance observer is then used to handle not only the compounded disturbance but also the input constraint, with the estimation error converging to a small region through an appropriate choice of the observer parameters. Subsequently, the disturbance observer-based robust control scheme and the disturbance observer-based dynamic surface control scheme are developed for the velocity subsystem and the altitude subsystem, respectively. In addition, novel filters are designed to alleviate the problem of “explosion of terms” induced by the backstepping method. On the basis of Lyapunov stability theory, rigorous theoretical analysis shows that the presented control scheme can ensure that the tracking error converges to an arbitrarily small neighborhood around zero. Finally, simulation results show the effectiveness of the presented control method.
Optimization for manufacturing system based on Pheromone
Lei Wang; Dunbing Tang
2011-01-01
A new optimization approach based on pheromone, inspired by the collective food-foraging behavior of ant colonies, is proposed to optimize task allocation. Ants spread pheromone information and make global information available locally; thus, an ant agent only needs to observe its local environment in order to account for nonlocal concerns in its decisions. This approach enables the task allocation model to automatically find efficient routing paths for processing orders...
The effect of using a robust optimality criterion in model based adaptive optimization.
Strömberg, Eric A; Hooker, Andrew C
2017-08-01
Optimizing designs using robust (global) optimality criteria has been shown to be a more flexible approach compared to using local optimality criteria. Additionally, model based adaptive optimal design (MBAOD) may be less sensitive to misspecification in the prior information available at the design stage. In this work, we investigate the influence of using a local (lnD) or a robust (ELD) optimality criterion for a MBAOD of a simulated dose optimization study, for rich and sparse sampling schedules. A stopping criterion for accurate effect prediction is constructed to determine the endpoint of the MBAOD by minimizing the expected uncertainty in the effect response of the typical individual. 50 iterations of the MBAODs were run using the MBAOD R-package, with the concentration from a one-compartment first-order absorption pharmacokinetic model driving the population effect response in a sigmoidal EMAX pharmacodynamics model. The initial cohort consisted of eight individuals in two groups and each additional cohort added two individuals receiving a dose optimized as a discrete covariate. The MBAOD designs using lnD and ELD optimality with misspecified initial model parameters were compared by evaluating the efficiency relative to an lnD-optimal design based on the true parameter values. For the explored example model, the MBAOD using ELD-optimal designs converged quicker to the theoretically optimal lnD-optimal design based on the true parameters for both sampling schedules. Thus, using a robust optimality criterion in MBAODs could reduce the number of adaptations required and improve the practicality of adaptive trials using optimal design.
Cognitive radio adaptation for power consumption minimization using biogeography-based optimization
Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin
2016-12-01
Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.
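The migration mechanism at the heart of BBO can be sketched compactly. The following is a minimal, illustrative implementation of basic BBO with the linear migration model on a toy objective; the population size, rates, bounds, and sphere test function are assumptions for illustration, not the settings used in the paper:

```python
import random

def bbo_minimize(f, dim, n_habitats=20, iters=150, pmut=0.05, seed=1):
    """Basic BBO sketch with a linear migration model: good habitats
    emigrate features, poor habitats immigrate them, and a small random
    mutation maintains diversity."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_habitats)]
    for _ in range(iters):
        pop.sort(key=f)                               # best habitat first
        n = len(pop)
        lam = [(i + 1) / n for i in range(n)]         # immigration rates
        mu = [1.0 - l for l in lam]                   # emigration rates
        new_pop = [list(h) for h in pop]
        for i in range(1, n):                         # elitism: keep the best
            for d in range(dim):
                if rng.random() < lam[i]:             # immigrate this feature
                    # roulette-wheel selection on emigration rates
                    j = rng.choices(range(n), weights=mu)[0]
                    new_pop[i][d] = pop[j][d]
                if rng.random() < pmut:               # random mutation
                    new_pop[i][d] = rng.uniform(-5, 5)
        pop = new_pop
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = bbo_minimize(sphere, dim=5)
```

Because the best habitat is carried over unchanged each generation, the best objective value is monotonically non-increasing over the run.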
Interpretable exemplar-based shape classification using constrained sparse linear models.
Sigurdsson, Gunnar A; Yang, Zhen; Tran, Trac D; Prince, Jerry L
2015-02-01
Many types of diseases manifest themselves as observable changes in the shape of the affected organs. Using shape classification, we can look for signs of disease and discover relationships between diseases. We formulate the problem of shape classification in a holistic framework that utilizes a lossless scalar field representation and a non-parametric classification based on sparse recovery. This framework generalizes over certain classes of unseen shapes while using the full information of the shape, bypassing feature extraction. The output of the method is the class whose combination of exemplars most closely approximates the shape, and furthermore, the algorithm returns the most similar exemplars along with their similarity to the shape, which makes the result simple to interpret. Our results show that the method offers accurate classification between three cerebellar diseases and controls in a database of cerebellar ataxia patients. For reproducible comparison, promising results are presented on publicly available 2D datasets, including the ETH-80 dataset where the method achieves 88.4% classification accuracy.
Topology optimization based on the harmony search method
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)
2017-06-15
A new topology optimization scheme based on harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply HS to topology optimization, the variables in HS were mapped to those in topology optimization. Compliance was used as the objective function, and harmony memory was defined as the set of optimized topologies. A parametric study of the harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and bandwidth (BW) was performed to find the appropriate ranges for topology optimization. Various techniques were employed, such as a filtering scheme, a simple averaging scheme, and a harmony rate. To provide a robust optimized topology, a harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of HS by comparing its optimal layouts with those of bidirectional evolutionary structural optimization (BESO) and the artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large-scale problems like topology optimization.
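The improvisation loop driven by the three parameters studied above (HMCR, PAR, and BW) is the core of harmony search and can be sketched on a toy continuous problem. The mapping to topology optimization variables is omitted, and the objective, bounds, and parameter values below are illustrative assumptions:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.2,
                   iters=500, seed=0):
    """Minimal harmony search: improvise a new harmony from memory
    (rate HMCR), pitch-adjust it within bandwidth BW (rate PAR), and
    replace the worst stored harmony if the new one is better."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:          # draw from harmony memory
                x = rng.choice(hm)[d]
                if rng.random() < par:       # pitch adjustment within BW
                    x += rng.uniform(-bw, bw)
            else:                            # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))  # clamp to bounds
        worst = max(hm, key=f)
        if f(new) < f(worst):                # keep only improving harmonies
            hm[hm.index(worst)] = new
    return min(hm, key=f)

best = harmony_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                      bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```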
GAMBL, genetic algorithm optimization of memory-based WSD
Decadt, Bart; Hoste, Veronique; Daelemans, Walter; van den Bosch, Antal
2004-01-01
GAMBL is a word expert approach to WSD in which each word expert is trained using memory-based learning. Joint feature selection and algorithm parameter optimization are achieved with a genetic algorithm (GA). We use a cascaded classifier approach in which the GA optimizes local context features and the output of a separate keyword classifier (rather than also optimizing the keyword features together with the local context features). A further innovation on earlier versions of memory-based WS...
Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis
Sen Zhang; Yongquan Zhou
2015-01-01
One heuristic evolutionary algorithm recently proposed is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on Powell local optimization method, and we call it PGWO. PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique. Hence, the PGWO could be applied in solving cluster...
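For context, the baseline GWO position update that PGWO augments with Powell's method can be sketched as follows. The Powell local refinement is omitted, and the objective, bounds, and parameters are illustrative assumptions rather than the paper's settings:

```python
import random

def gwo_minimize(f, dim, n_wolves=15, iters=200, seed=3):
    """Baseline grey wolf optimizer sketch: every wolf moves toward the
    mean of three attraction points set by the alpha, beta, and delta
    wolves, with coefficient `a` decaying from 2 to 0 over the run."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)
    for t in range(iters):
        wolves.sort(key=f)
        if f(wolves[0]) < f(best):
            best = list(wolves[0])           # track best-ever position
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 * (1 - t / iters)            # exploration -> exploitation
        new = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - w[d])   # distance to leader
                    x += leader[d] - A * D
                pos.append(x / 3)            # average of the three pulls
            new.append(pos)
        wolves = new
    final = min(wolves, key=f)
    return best if f(best) < f(final) else final

sphere = lambda x: sum(v * v for v in x)
best = gwo_minimize(sphere, dim=5)
```

A Powell-style local search, as in PGWO, would periodically refine the alpha wolf's position between these global iterations.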
Optimization based automated curation of metabolic reconstructions
Directory of Open Access Journals (Sweden)
Maranas Costas D
2007-06-01
Full Text Available Abstract Background Currently, there exist tens of different microbial and eukaryotic metabolic reconstructions (e.g., Escherichia coli, Saccharomyces cerevisiae, Bacillus subtilis) with many more under development. All of these reconstructions are inherently incomplete, with some functionalities missing due to the lack of experimental and/or homology information. A key challenge in the automated generation of genome-scale reconstructions is the elucidation of these gaps and the subsequent generation of hypotheses to bridge them. Results In this work, an optimization based procedure is proposed to identify and eliminate network gaps in these reconstructions. First we identify the metabolites in the metabolic network reconstruction which cannot be produced under any uptake conditions, and subsequently we identify the reactions from a customized multi-organism database that restore the connectivity of these metabolites to the parent network. This connectivity restoration is hypothesized to take place through four mechanisms: (a) reversing the directionality of one or more reactions in the existing model, (b) adding reactions from another organism to provide functionality absent in the existing model, (c) adding external transport mechanisms to allow for importation of metabolites into the existing model, and (d) restoring flow by adding intracellular transport reactions in multi-compartment models. We demonstrate this procedure for the genome-scale reconstructions of Escherichia coli and Saccharomyces cerevisiae, wherein compartmentalization of intra-cellular reactions results in a more complex topology of the metabolic network. We determine that about 10% of metabolites in E. coli and 30% of metabolites in S. cerevisiae cannot carry any flux. Interestingly, the dominant flow restoration mechanism is directionality reversal of existing reactions in the respective models. Conclusion We have proposed systematic methods to identify and
Process optimization of friction stir welding based on thermal models
DEFF Research Database (Denmark)
Larsen, Anders Astrup
2010-01-01
This thesis investigates how to apply optimization methods to numerical models of a friction stir welding process. The work is intended as a proof-of-concept using different methods that are applicable to models of high complexity, possibly with high computational cost, and without the possibility...... information of the high-fidelity model. The optimization schemes are applied to stationary thermal models of differing complexity of the friction stir welding process. The optimization problems considered are based on optimizing the temperature field in the workpiece by finding optimal translational speed...
PARTICLE SWARM OPTIMIZATION BASED OF THE MAXIMUM ...
African Journals Online (AJOL)
2010-06-30
Jun 30, 2010 ... systems on the one hand, and because of the instantaneous change of both insolation and temperature ... dealing accurately with these optimization problems and overcoming the incapacities of the traditional ... series resistance RS, which gives a more accurate shape between the maximum power point ...
Decomposition Techniques and Effective Algorithms in Reliability-Based Optimization
DEFF Research Database (Denmark)
Enevoldsen, I.; Sørensen, John Dalsgaard
1995-01-01
The common problem of an extensive number of limit state function calculations in the various formulations and applications of reliability-based optimization is treated. It is suggested to use a formulation based on decomposition techniques so the nested two-level optimization problem can be solv...
Directory of Open Access Journals (Sweden)
Zong-Sheng Wu
2015-01-01
Full Text Available Teaching-learning-based optimization (TLBO) is a recently proposed algorithm that simulates the teaching-learning phenomenon of a classroom to effectively solve global optimization of multidimensional, linear, and nonlinear problems over continuous spaces. In this paper, an improved teaching-learning-based optimization algorithm is presented, called the nonlinear inertia weighted teaching-learning-based optimization (NIWTLBO) algorithm. This algorithm introduces a nonlinear inertia weighted factor into the basic TLBO to control the memory rate of learners, and uses a dynamic inertia weighted factor to replace the original random number in the teacher and learner phases. The proposed algorithm is tested on a number of benchmark functions, and performance comparisons are provided against the basic TLBO and some other well-known optimization algorithms. The experimental results show that the proposed algorithm has a faster convergence rate and better performance than the basic TLBO and some other algorithms as well.
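The inertia-weighted TLBO idea can be sketched on top of the basic teacher and learner phases. The cosine-decay weight schedule below is an illustrative assumption standing in for the paper's nonlinear inertia weighted factor, not the exact NIWTLBO formula, and the test objective is likewise assumed:

```python
import math
import random

def niwtlbo_sketch(f, dim, n=20, iters=100, seed=7):
    """TLBO sketch with a nonlinear inertia weight on the learner's
    memory. NOTE: the cosine decay from 1.0 to 0.4 is an illustrative
    assumption; the paper's NIWTLBO formulas are not reproduced here."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        w = 0.4 + 0.3 * (1 + math.cos(math.pi * t / iters))  # 1.0 -> 0.4
        teacher = min(pop, key=f)
        mean = [sum(x[d] for x in pop) / n for d in range(dim)]
        for i, x in enumerate(pop):          # teacher phase (greedy accept)
            tf = rng.choice((1, 2))          # teaching factor
            cand = [w * x[d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
        for i, x in enumerate(pop):          # learner phase (greedy accept)
            j = rng.randrange(n)
            if j == i:
                continue
            sign = 1.0 if f(x) < f(pop[j]) else -1.0
            cand = [w * x[d] + rng.random() * sign * (x[d] - pop[j][d])
                    for d in range(dim)]
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)

shifted_sphere = lambda x: sum((v - 1.0) ** 2 for v in x)
best = niwtlbo_sketch(shifted_sphere, dim=5)
```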
Wu, Zong-Sheng; Fu, Wei-Ping; Xue, Ru
2015-01-01
Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines
DEFF Research Database (Denmark)
2006-01-01
Different formulations relevant for the reliability-based optimization of offshore wind turbines are presented, including different reconstruction policies in case of failure. Illustrative examples are presented and, as a part of the results, optimal reliability levels for the different failure...
Reliability-Based Optimization and Optimal Reliability Level of Offshore Wind Turbines
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Tarp-Johansen, N.J.
2005-01-01
Ant colony optimization-based firewall anomaly mitigation engine
National Research Council Canada - National Science Library
Penmatsa, Ravi Kiran Varma; Vatsavayi, Valli Kumari; Samayamantula, Srinivas Kumar
2016-01-01
... to the organization’s framed security policy. This study proposes an ant colony optimization (ACO)-based anomaly resolution and reordering of firewall rules called ACO-based firewall anomaly mitigation engine...
Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.
2015-12-01
The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble-of-surrogates optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. GA was then used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
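The ensemble-surrogate idea, averaging cheap approximators of an expensive simulator and then searching over the ensemble instead of the simulator, can be illustrated with simple stand-ins. The inverse-distance-weighting and nearest-neighbour predictors below substitute for the paper's KRG and KELM models, and the quadratic "simulator" is an assumption:

```python
import random

def idw_surrogate(samples, p=2):
    """Inverse-distance-weighting predictor built from (x, y) samples."""
    def predict(x):
        num = den = 0.0
        for xs, ys in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, xs))
            if d2 == 0.0:
                return ys                    # exact at sampled points
            wgt = d2 ** (-p / 2)
            num += wgt * ys
            den += wgt
        return num / den
    return predict

def nn_surrogate(samples):
    """Nearest-neighbour predictor (a second, dissimilar surrogate)."""
    def predict(x):
        xs, ys = min(samples,
                     key=lambda s: sum((a - b) ** 2 for a, b in zip(x, s[0])))
        return ys
    return predict

def ensemble(predictors):
    """Averaging ensemble of surrogate predictors (the 'ES model' idea)."""
    return lambda x: sum(pr(x) for pr in predictors) / len(predictors)

rng = random.Random(0)
simulator = lambda x: sum(v * v for v in x)  # stand-in for the SEAR simulator
xs = [[rng.uniform(-2, 2) for _ in range(2)] for _ in range(30)]
samples = [(x, simulator(x)) for x in xs]
es = ensemble([idw_surrogate(samples), nn_surrogate(samples)])
# cheap search over the surrogate instead of the expensive simulator
cand = min(([rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(200)),
           key=es)
```

In the paper's setting, the cheap search is performed by a GA subject to constraints; here a random candidate sweep stands in for it.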
Seismic-Reliability-Based Optimal Layout of a Water Distribution Network
Directory of Open Access Journals (Sweden)
Do Guen Yoo
2016-02-01
Full Text Available We proposed an economic, cost-constrained optimal design of a water distribution system (WDS) that maximizes seismic reliability while satisfying pressure constraints. The model quantifies the seismic reliability of a WDS through a series of procedures: stochastic earthquake generation, seismic intensity attenuation, determination of the pipe failure status (normal, leakage, and breakage), pipe failure modeling in hydraulic simulation, and negative pressure treatment. The network’s seismic reliability is defined as the ratio of the available quantity of water to the required water demand under stochastic earthquakes. The proposed model allows a no-pipe option in the decisions, making it possible to identify a seismic-reliability-based optimal layout for a WDS. The model takes into account the physical impact of earthquake events on the WDS, which ultimately affects the network’s boundary conditions (e.g., failure level of pipes). A well-known benchmark network, the Anytown network, is used to demonstrate the proposed model. The network’s optimal topology and pipe layouts are determined from a series of optimizations. The results show that installing large redundant pipes degrades the system’s seismic reliability because the pipes will cause a large rupture opening under failure. Our model is a useful tool to find the optimal pipe layout that maximizes system reliability under earthquakes.
Directory of Open Access Journals (Sweden)
Bai Shiye
2016-05-01
Full Text Available An objective function defined by the minimum compliance of topology optimization for 3D continuum structures was established to search for the optimal material distribution constrained by a predetermined volume restriction. Based on the improved SIMP (solid isotropic microstructures with penalization) model and a new sensitivity filtering technique, the basic iteration equations of 3D finite element analysis were deduced and solved by the optimality criterion method. All the above procedures were written in the MATLAB programming language, and topology optimization design examples of 3D continuum structures with reserved holes were examined repeatedly by observing various indexes, including compliance, maximum displacement, and density index. The influence of the mesh, penalty factor, and filter radius on the topology results was analyzed. Computational results showed that the mesh density strongly influenced the compliance, maximum displacement, and density index. When the filtering radius was larger than 1.0, the topology no longer exhibited a checkerboard pattern, suggesting that the presented sensitivity filtering method was valid. The penalty factor should be an integer, because iteration steps increased greatly when it was a noninteger. The above modified variable density method could provide technical routes for the topology optimization design of more complex 3D continuum structures in the future.
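The sensitivity filter whose radius is studied above can be sketched independently of the finite element solver. This is a simplified, density-free variant for illustration; the grid, radius, and sensitivity values below are assumptions, and the classical filter additionally weights by element density:

```python
import math

def filter_sensitivities(dc, coords, rmin):
    """Mesh-independence filter in the spirit of SIMP topology
    optimization: each element's compliance sensitivity is replaced by a
    cone-weighted average of the sensitivities of all elements within
    radius rmin, which suppresses checkerboard patterns."""
    out = []
    for xi in coords:
        num = den = 0.0
        for xj, dcj in zip(coords, dc):
            wgt = max(0.0, rmin - math.dist(xi, xj))  # linear cone weight
            num += wgt * dcj
            den += wgt
        out.append(num / den)                         # den >= rmin > 0
    return out

# 3x3 grid of element centroids with one spiky sensitivity value
coords = [(i, j) for i in range(3) for j in range(3)]
dc = [-1.0] * 9
dc[4] = -10.0                                         # centre element
smoothed = filter_sensitivities(dc, coords, rmin=1.5)
```

The filter is a weighted average, so filtered values always stay within the range of the raw sensitivities and isolated spikes are spread over their neighbourhood.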
Directory of Open Access Journals (Sweden)
J. Fang
1998-01-01
Full Text Available An approach to the optimum design of structures, in which uncertainties with a fuzzy nature in the magnitude of the loads are considered, is proposed in this study. The optimization process under fuzzy loads is transformed into a fuzzy optimization problem based on the notion of Werner's maximizing set, by defining membership functions of the objective function and constraints. In this paper, Werner's maximizing set is defined using the results obtained by first conducting an optimization through anti-optimization modeling of the uncertain loads. An example of a ten-bar truss is used to illustrate the present optimization process. The results are compared with those yielded by other optimization methods.
CFD Optimization on Network-Based Parallel Computer System
Cheung, Samson H.; VanDalsem, William (Technical Monitor)
1994-01-01
Combining multiple engineering workstations into a network-based heterogeneous parallel computer allows the application of aerodynamic optimization with advanced computational fluid dynamics codes, which is computationally expensive on mainframe supercomputers. This paper introduces a nonlinear quasi-Newton optimizer designed for this network-based heterogeneous parallel computer, using software called Parallel Virtual Machine. The paper also introduces the methodology behind coupling a Parabolized Navier-Stokes flow solver to the nonlinear optimizer. This parallel optimization package has been applied to reduce the wave drag of a body of revolution and a wing/body configuration, with results of 5% to 6% drag reduction.
Zhuyan Zhang; Hongqing Zhu; Xuan Tao
2017-07-01
The demand for automatically recognizing medical images for screening, reference and management is growing faster than ever. Missing data are a common, and often unavoidable, phenomenon in medical image applications. In this paper, we address the problem of recognizing medical images with missing features via a Gaussian mixture model (GMM)-based approach. Since training a GMM directly on high-dimensional feature vectors results in instability, we propose a novel strategy to train the GMM from the corresponding reduced-dimensional features. The proposed method contains training and test phases. The former comprises feature extraction, graph-constrained nonnegative matrix factorization (NMF), GMM training, and alternating expectation conditional maximization (AECM) for extending the reduced-dimensional GMM. In the test phase, two methods, marginalizing the GMM using Bayesian decision (MGBD) and conditional mean imputation (CMI), are applied to impute missing features. The posterior probability of test images is calculated to identify objects. Experimental results on three real datasets demonstrate the feasibility and efficiency of the proposed scheme.
Rong, Qiangqiang; Cai, Yanpeng; Chen, Bing; Yue, Wencong; Yin, Xin'an; Tan, Qian
2017-02-15
In this research, an export coefficient based dual inexact two-stage stochastic credibility constrained programming (ECDITSCCP) model was developed by integrating an improved export coefficient model (ECM), interval linear programming (ILP), fuzzy credibility constrained programming (FCCP) and a fuzzy expected value equation within a general two-stage programming (TSP) framework. The proposed ECDITSCCP model can effectively address multiple uncertainties expressed as random variables, fuzzy numbers, and pure and dual intervals. Also, the model can provide a direct linkage between pre-regulated management policies and the associated economic implications. Moreover, solutions under multiple credibility levels can be obtained to provide potential decision alternatives for decision makers. The proposed model was then applied to identify optimal land use structures for agricultural NPS pollution mitigation in a representative upstream subcatchment of the Miyun Reservoir watershed in north China. Optimal solutions of the model were successfully obtained, indicating desired land use patterns and nutrient discharge schemes to achieve maximum agricultural system benefits under a limited discharge permit. Also, the numerous results under multiple credibility levels could provide policy makers with several options, which could help strike an appropriate balance between system benefits and pollution mitigation. The developed ECDITSCCP model can be effectively applied to addressing the uncertain information in agricultural systems and shows great applicability to land use adjustment for agricultural NPS pollution mitigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimal low thrust-based rendezvous maneuvers
Gonzalo Gomez, Juan Luis; Bombardelli, Claudio
2015-01-01
The minimum-time, low-constant-thrust, same-circular-orbit rendezvous problem is studied using a relative motion description of the system dynamics. The resulting optimal control problem in the thrust orientation angle is formulated using both the direct and indirect methods. An extensive set of test cases is numerically solved with the former, while perturbation techniques applied to the latter allow us to obtain several approximate solutions and provide greater insight into the underlying physi...
Dynamic Pricing through Sampling Based Optimization
Lobel, Ruben; Perakis, Georgia
2011-01-01
In this paper we develop an approach to dynamic pricing that combines ideas from data-driven and robust optimization to address the uncertain and dynamic aspects of the problem. In our setting, a firm offers multiple products to be sold over a fixed discrete time horizon. Each product sold consumes one or more resources, possibly sharing the same resources among different products. The firm is given a fixed initial inventory of these resources and cannot replenish this inventory during ...
Optimal trajectories based on linear equations
Carter, Thomas E.
1990-01-01
The principal results of a recent theory of fuel-optimal space trajectories for linear differential equations are presented. Both impulsive and bounded-thrust problems are treated. A new form of the Lawden primer vector is found that is identical for both problems. For this reason, starting iterates from the solution of the impulsive problem are highly effective in the solution of the two-point boundary-value problem associated with bounded thrust. These results were applied to the problem of fuel-optimal maneuvers of a spacecraft near a satellite in circular orbit using the Clohessy-Wiltshire equations. For this case, two-point boundary-value problems were solved using a microcomputer, and optimal trajectory shapes displayed. The results of this theory can also be applied if the satellite is in an arbitrary Keplerian orbit through the use of the Tschauner-Hempel equations. A new form of the solution of these equations has been found that is identical for elliptical, parabolic, and hyperbolic orbits except in the way that a certain integral is evaluated. For elliptical orbits this integral is evaluated through the use of the eccentric anomaly. An analogous evaluation is performed for hyperbolic orbits.
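The Clohessy-Wiltshire equations referenced above admit a closed-form state transition that is easy to verify numerically, for example by checking that a drift-free initial condition repeats after one orbital period. A sketch, assuming the usual radial/along-track/cross-track axis convention:

```python
import math

def cw_state(x0, y0, z0, vx0, vy0, vz0, n, t):
    """Closed-form Clohessy-Wiltshire (Hill) solution for relative motion
    about a circular orbit with mean motion n; x is radial, y along-track,
    z cross-track (a standard convention, assumed here)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
    y = (6 * (s - n * t) * x0 + y0 + (2 / n) * (c - 1) * vx0
         + ((4 * s - 3 * n * t) / n) * vy0)
    z = c * z0 + (s / n) * vz0
    vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
    vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
    vz = -n * s * z0 + c * vz0
    return (x, y, z, vx, vy, vz)

# drift-free check: vy0 = -2*n*x0 yields a closed 2x1 relative ellipse,
# so the state repeats after one orbital period T = 2*pi/n
n = 0.00113                                  # mean motion [rad/s], ~LEO
state = cw_state(1.0, 0.0, 0.0, 0.0, -2 * n * 1.0, 0.0, n, 2 * math.pi / n)
```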
PTV-based IMPT optimization incorporating planning risk volumes vs robust optimization
Energy Technology Data Exchange (ETDEWEB)
Liu Wei; Li Xiaoqiang; Zhu, Ron. X.; Mohan, Radhe [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Frank, Steven J. [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Li Yupeng [Varian Medical Systems, Inc., Palo Alto, California 94304 (United States)
2013-02-15
Purpose: Robust optimization leads to intensity-modulated proton therapy (IMPT) plans that are less sensitive to uncertainties and superior in terms of organs-at-risk (OARs) sparing, target dose coverage, and homogeneity compared to planning target volume (PTV)-based optimized plans. Robust optimization incorporates setup and range uncertainties, which implicitly adds margins to both targets and OARs and is also able to compensate for perturbations in dose distributions within targets and OARs caused by uncertainties. In contrast, the traditional PTV-based optimization considers only setup uncertainties and adds a margin only to targets but no margins to the OARs. It also ignores range uncertainty. The purpose of this work is to determine if robustly optimized plans are superior to PTV-based plans simply because the latter do not assign margins to OARs during optimization. Methods: The authors retrospectively selected from their institutional database five patients with head and neck (H&N) cancer and one with prostate cancer for this analysis. Using their original images and prescriptions, the authors created new IMPT plans using three methods: PTV-based optimization, optimization based on the PTV and planning risk volumes (PRVs) (i.e., 'PTV+PRV-based optimization'), and robust optimization using the 'worst-case' dose distribution. The PRVs were generated by uniformly expanding OARs by 3 mm for the H&N cases and 5 mm for the prostate case. The dose-volume histograms (DVHs) from the worst-case dose distributions were used to assess and compare plan quality. Families of DVHs for each uncertainty for all structures of interest were plotted along with the nominal DVHs. The width of the 'bands' of DVHs was used to quantify the plan sensitivity to uncertainty. Results: Compared with conventional PTV-based and PTV+PRV-based planning, robust optimization led to a smaller bandwidth for the targets in the face of uncertainties ...
Energy Technology Data Exchange (ETDEWEB)
Osman, M.S. [High Institute of Technology, 10th Ramadan City (Egypt); Abo-Sinna, M.A.; Mousa, A.A. [Faculty of Engineering, Shebin El-Kom, Menoufia University (Egypt)
2009-11-15
In this paper, a novel multiobjective genetic algorithm approach for the economic emission load dispatch (EELD) optimization problem is presented. The EELD problem is formulated as a non-linear constrained multiobjective optimization problem with both equality and inequality constraints. A new optimization algorithm based on the concepts of co-evolution and a repair algorithm for handling non-linear constraints is presented. The algorithm maintains a finite-sized archive of non-dominated solutions which is iteratively updated in the presence of new solutions based on the concept of ε-dominance. The use of ε-dominance also makes the algorithm practical by allowing a decision maker to control the resolution of the Pareto-set approximation by choosing an appropriate ε value. The proposed approach is carried out on the standard IEEE 30-bus 6-generator test system. The results demonstrate the capability of the proposed approach to generate true and well-distributed Pareto-optimal non-dominated solutions of the multiobjective EELD problem in a single run. Simulation results with the proposed approach have been compared to those reported in the literature. The comparison demonstrates the superiority of the proposed approach and confirms its potential to solve the multiobjective EELD problem. (author)
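The ε-dominance archive update described above can be sketched as follows (a minimal grid-box formulation for minimization; the function names and the exact update rule are illustrative, not necessarily the paper's variant):

```python
import math

def box(f, eps):
    """Grid-box index of an objective vector (minimization assumed)."""
    return tuple(math.floor(v / e) for v, e in zip(f, eps))

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, f, eps):
    """Insert objective vector f into an epsilon-dominance archive.

    Objective space is partitioned into boxes of side eps; at most one
    point survives per non-dominated box, which bounds the archive size
    and lets the decision maker set the resolution through eps."""
    b = box(f, eps)
    if any(dominates(box(a, eps), b) for a in archive):
        return archive                   # f's box is dominated: reject f
    same = [a for a in archive if box(a, eps) == b]
    if same and not dominates(f, same[0]):
        return archive                   # incumbent of the same box wins
    # drop archived points whose boxes b dominates, and the same-box loser
    return [a for a in archive
            if not dominates(b, box(a, eps)) and box(a, eps) != b] + [f]
```

Larger ε gives a coarser, smaller archive; smaller ε approaches the full non-dominated set.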
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2017-08-07
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. Problem-related heuristic information is also introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
Group Elevator Peak Scheduling Based on Robust Optimization Model
Zhang, J.; Q. Zong
2013-01-01
Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and challenge. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows; optimal scheduling is re...
Optimal Design of DC Electromagnets Based on Imposed Dynamic Characteristics
Directory of Open Access Journals (Sweden)
Sergiu Ivas
2016-10-01
Full Text Available This paper proposes a method for computing the optimal geometric dimensions of a DC electromagnet, based on imposed dynamic characteristics. To obtain the optimal design, a criterion function is built in analytic form that can be optimized in order to find the constructive solution. Numerical simulations performed in Matlab confirm the proposed approach. The presented method can be extended to other electromagnetic devices which frequently operate in dynamic regimes.
Support vector machines optimization based theory, algorithms, and extensions
Deng, Naiyang; Zhang, Chunhua
2013-01-01
Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi
Optimal, Reliability-Based Code Calibration
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2002-01-01
Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....
Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning
2014-12-10
Sensor-deployment-based lifetime optimization is one of the most effective methods used to prolong the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor that is usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed, subject to constraints on coverage, connectivity, and transmission success rate. Based on the data transmission analysis in a data gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in lifetime optimization. In particular, our investigations indicate that, for the same lifetime requirement, the number of sensors needed in a non-uniform topology is much smaller than that in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
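The hexagonal-grid and retransmission ingredients can be illustrated with two back-of-the-envelope helpers (the geometric retransmission model and the disc-area sensor count below are simplifying assumptions of this sketch, not the paper's full optimization model):

```python
import math

def expected_transmissions(p_success):
    """Expected number of (re)transmissions per packet when each hop
    succeeds independently with probability p_success (geometric model,
    an assumption of this sketch)."""
    return 1.0 / p_success

def hex_grid_sensor_count(area_radius, sensing_radius):
    """Rough sensor count for covering a disc with a regular hexagonal
    grid: full coverage needs grid spacing sqrt(3)*r_s, so each sensor
    'owns' a hexagonal cell of area (3*sqrt(3)/2)*r_s**2."""
    cell = 3.0 * math.sqrt(3.0) / 2.0 * sensing_radius ** 2
    return math.ceil(math.pi * area_radius ** 2 / cell)
```

A per-hop success rate of 0.8 thus inflates the energy budget per delivered packet by 25%, which is why ignoring retransmission overestimates lifetime.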
Performance investigation of multigrid optimization for DNS-based optimal control problems
Nita, Cornelia; Vandewalle, Stefan; Meyers, Johan
2016-11-01
Optimal control theory in Direct Numerical Simulation (DNS) or Large-Eddy Simulation (LES) of turbulent flow involves large computational cost and memory overhead for the optimization of the controls. In this context, the minimization of the cost functional is typically achieved by employing gradient-based iterative methods such as quasi-Newton, truncated Newton or non-linear conjugate gradient. In the current work, we investigate the multigrid optimization strategy (MGOpt) in order to speed up the convergence of the damped L-BFGS algorithm for DNS-based optimal control problems. The method consists of a hierarchy of optimization problems defined on different representation levels, aiming to reduce the computational resources associated with the cost functional improvement on the finest level. We examine the MGOpt efficiency for the optimization of an internal volume force distribution with the goal of reducing the turbulent kinetic energy or increasing the energy extraction in a turbulent wall-bounded flow; these problems are related, respectively, to drag reduction in boundary layers and energy extraction in large wind farms. Results indicate that in some cases the multigrid optimization method requires up to a factor of two fewer DNS and adjoint DNS than single-grid damped L-BFGS. The authors acknowledge support from OPTEC (OPTimization in Engineering Center of Excellence, KU Leuven, Grant No PFV/10/002).
portfolio optimization based on nonparametric estimation methods
Directory of Open Access Journals (Sweden)
mahsa ghandehari
2017-03-01
Full Text Available One of the major issues investors face in capital markets is deciding which stocks to invest in and selecting an optimal portfolio. This process is carried out through assessment of risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation serve as risk measures. However, expected asset returns are not necessarily normal and sometimes deviate dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; the method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies of the Tehran Stock Exchange (ranked in the winter of 1392), covering April 1388 to June 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is also much faster.
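The nonparametric (historical) CVaR used as the risk measure above can be computed directly from a return sample; a minimal sketch, with the confidence level and data purely illustrative:

```python
def historical_cvar(returns, beta=0.95):
    """Empirical (nonparametric) CVaR of a return series.

    Losses are the negated returns; VaR is the beta-quantile of the
    empirical loss distribution, and CVaR averages the losses at or
    beyond VaR (the expected loss in the worst (1-beta) tail)."""
    losses = sorted(-r for r in returns)
    k = int(beta * len(losses))       # index of the VaR quantile
    tail = losses[k:] or losses[-1:]  # guard against an empty tail
    return sum(tail) / len(tail)
```

In a portfolio optimization, this quantity (evaluated on the portfolio's historical returns) replaces variance as the objective or constraint.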
Affordance Learning Based on Subtask's Optimal Strategy
Directory of Open Access Journals (Sweden)
Huaqing Min
2015-08-01
Full Text Available Affordances define the relationships between the robot and its environment, in terms of the actions that the robot is able to perform. Prior work has mainly addressed predicting the possibility of a reactive action, with the object's affordance treated as invariable. However, in the domain of dynamic programming, a robot's task can often be decomposed into several subtasks, and each subtask can limit the search space. As a result, the robot only needs to replan its sub-strategy when an unexpected situation happens, and an object's affordance might change over time depending on the robot's state and current subtask. In this paper, we propose a novel affordance model linking the subtask, object, robot state and optimal action. An affordance represents the first action of the optimal strategy under the current subtask when detecting an object, and its influence is promoted from a primitive action to the subtask strategy. Furthermore, hierarchical reinforcement learning and a state abstraction mechanism are introduced to learn the task graph and reduce the state space. In the navigation experiment, the robot equipped with a camera could learn the objects' crucial characteristics and gain their affordances in different subtasks.
Gradient-Based Cuckoo Search for Global Optimization
Directory of Open Access Journals (Sweden)
Seif-Eddeen K. Fateen
2014-01-01
Full Text Available One of the major advantages of stochastic global optimization methods is that they do not require the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance vis-à-vis the original algorithm in solving twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
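A minimal sketch of the gradient-informed idea (an assumed simplification, not the authors' exact GBCS): Lévy flights supply heavy-tailed step magnitudes, while the sign of each step component follows the negative gradient of the objective:

```python
import math
import random

def gbcs(f, grad, dim, n=15, iters=200, pa=0.25, alpha=0.1, seed=1):
    """Gradient-informed cuckoo search sketch: Levy-flight magnitudes,
    descent directions from -grad(f), greedy replacement, and abandonment
    of a fraction pa of the worst nests each generation."""
    rng = random.Random(seed)
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)

    def levy():
        # Mantegna's algorithm for heavy-tailed Levy-stable step lengths
        return rng.gauss(0.0, sigma) / abs(rng.gauss(0.0, 1.0)) ** (1 / beta)

    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, x in enumerate(nests):
            g = grad(x)
            # magnitude from the Levy flight, direction from -grad
            cand = [xi - math.copysign(abs(alpha * levy()), gi)
                    for xi, gi in zip(x, g)]
            if f(cand) < f(x):           # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                # abandon a fraction pa of worst nests
        for i in range(int((1 - pa) * n), n):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best
```

The search bounds, population size, and step scale are illustrative choices, not tuned values.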
OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation
Directory of Open Access Journals (Sweden)
Mehdi Abolfazli
2013-04-01
Full Text Available In this paper a suitable mathematical model of the two terminal HVDC system is provided for optimal power flow (OPF) and OPF-based optimal location, using a power injection model. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based optimal location algorithm for two systems two terminal HVDC to minimize the total fuel cost and active power losses as the objective function. The optimization framework is modeled as non-linear programming (NLP) and solved by the Matlab and GAMS software. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of two systems two terminal HVDC to improve power system operation. Furthermore, two systems two terminal HVDC is compared with PST and OUPFC in power system operation from economical and technical aspects.
Menou, Edern; Ramstein, Gérard; Bertrand, Emmanuel; Tancret, Franck
2016-06-01
A new computational framework for systematic and optimal alloy design is introduced. It is based on a multi-objective genetic algorithm which allows (i) the screening of vast compositional ranges and (ii) the optimisation of the performance of novel alloys. Alloy performance is evaluated on the basis of predicted constitutional and thermomechanical properties. To this end, the CALPHAD method is used for assessing equilibrium characteristics (such as constitution, stability or processability) while Gaussian processes provide an estimate of thermomechanical properties (such as tensile strength or creep resistance), based on a multi-variable non-linear regression of existing data. These three independently well-assessed tools were unified within a single C++ routine. The method was applied to the design of affordable nickel-base superalloys for service in power plants, providing numerous candidates with superior expected microstructural stability and strength. An overview of the metallurgy of optimised alloys, as well as two detailed examples of optimal alloys, suggests that improvements over current commercial alloys are achievable at lower costs.
Optimal capacitor sizing and placement based on real time analysis ...
African Journals Online (AJOL)
In this paper, optimal capacitor sizing and placement method was used to improve energy efficiency. It involves the placement of capacitors in a specific location with suitable sizing based on the current load of the electrical system. The optimization is done in real time scenario where the sizing and placement of the ...
MVMO-based approach for optimal placement and tuning of ...
African Journals Online (AJOL)
... optimal placement and coordinated tuning of power system supplementary damping controllers (POCDCs). The effectiveness of the approach is evaluated based on the classical IEEE 39-bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, ...
Optimal fractional order PID design via Tabu Search based algorithm.
Ateş, Abdullah; Yeroglu, Celaleddin
2016-01-01
This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All FOPID controller parameters are computed from random initial conditions using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
A GIS-Based Optimization Technique for Spatial Location of ...
African Journals Online (AJOL)
GIS)-based package; TransCAD v. 5.0 was used to determine the optimal locations of one to ten waste bins. This optimization technique requires less computational time and the output of ten computer runs showed that partial service coverage ...
Space-Mapping-Based Interpolation for Engineering Optimization
DEFF Research Database (Denmark)
Koziel, Slawomir; Bandler, John W.; Madsen, Kaj
2006-01-01
We consider a simple and efficient space-mapping (SM) based interpolation scheme to work in conjunction with SM optimization algorithms. The technique is useful if the fine model (the one that is supposed to be optimized) is available only on a structured grid. It allows us to estimate the respon...
A novel particle swarm optimization based on population category
Wang, Jingying; Qu, Jianhua
2017-10-01
This paper presents a novel particle swarm optimization algorithm based on population category. The traditional particle swarm optimization algorithm is easily trapped in local optima. To keep the standard algorithm from premature convergence, the novel algorithm uses a population category strategy to find new search directions for particles. Finally, computational results show that the new method is effective and achieves high performance.
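For reference, the standard global-best PSO update that such variants modify can be sketched as follows (parameter values and search bounds are illustrative):

```python
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO minimizing f over [-5, 5]^dim: each
    particle's velocity blends inertia, attraction to its personal best,
    and attraction to the swarm's global best."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=f)[:]                       # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g
```

Population-category schemes, as described in the abstract, would alter how particles choose their attractors when the swarm stagnates; that modification is not reproduced here.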
Optimization of microgrids based on controller designing for ...
African Journals Online (AJOL)
The power quality of microgrid during islanded operation is strongly related with the controller performance of DGs. Therefore a new optimal control strategy for distributed generation based inverter to connect to the generalized microgrid is proposed. This work shows developing optimal control algorithms for the DG ...
Solution of optimal power flow using evolutionary-based algorithms
African Journals Online (AJOL)
Due to the drawbacks in classical methods, the artificial intelligence (AI) techniques have been introduced to solve the OPF problem. The AI-based optimization has become an important approach for determining the global optimal solution. One of the most important intelligent search techniques is called evolutionary ...
A modified harmony search based method for optimal rural radial ...
African Journals Online (AJOL)
In this work, a Harmony Search (HS) based optimization approach is developed to solve the radial line planning problem. Furthermore, some modifications to the HS are presented for improving the computational efficiency of optimization problems with strongly interrelated mixed variables. A sample system is served for ...
Optimal Reliability-Based Planning of Experiments for POD Curves
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Faber, M. H.; Kroon, I. B.
Optimal planning of the crack detection test is considered. The tests are used to update the information on the reliability of the inspection techniques modelled by probability of detection (P.O.D.) curves. It is shown how cost-optimal and reliability based test plans can be obtained using First...
Wang, Hongsheng; Zheng, Naiquan Nigel
2010-12-01
Skin marker-based motion analysis has been widely used in biomechanical studies and clinical applications. Unfortunately, the accuracy of knee joint secondary motions is largely limited by the nonrigid nature of human body segments. Numerous studies have investigated the characteristics of soft tissue movement; utilizing these characteristics, we may improve the accuracy of knee joint motion measurement. An optimizer was developed by incorporating the soft tissue movement patterns at special bony landmarks into constraint functions. Bony landmark constraints were assigned to the skin markers at the femur epicondyles, tibial plateau edges, and tibial tuberosity in a motion analysis algorithm by limiting their allowed position space relative to the underlying bone. The rotation matrix was represented by a quaternion, and the constrained optimization problem was solved by Fletcher's version of the Levenberg-Marquardt optimization technique. The algorithm was validated using motion data from both skin-based markers and bone-mounted markers attached to fresh cadavers. By comparing the results with the ground-truth bone motion generated from the bone-mounted markers, the new algorithm had a significantly higher accuracy (root-mean-square (RMS) error: 0.7 ± 0.1 deg in axial rotation and 0.4 ± 0.1 deg in varus-valgus) in estimating the knee joint secondary rotations than algorithms without bony landmark constraints (RMS error: 1.7 ± 0.4 deg in axial rotation and 0.7 ± 0.1 deg in varus-valgus). It also predicts a more accurate medial-lateral translation (RMS error: 0.4 ± 0.1 mm) than the conventional techniques (RMS error: 1.2 ± 0.2 mm). The new algorithm, using bony landmark constraints, estimates more accurate secondary rotations and medial-lateral translation of the underlying bone.
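The unconstrained core of such marker-based pose estimation, a least-squares rigid transform computed with the quaternion (Horn) method, can be sketched as follows; the paper's algorithm additionally bounds selected markers relative to the underlying bone, which is omitted in this sketch:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) with B ~= A @ R.T + t,
    via the quaternion eigenvalue method (Horn). A and B are n x 3
    arrays of corresponding marker positions."""
    a, b = A - A.mean(0), B - B.mean(0)
    H = a.T @ b                          # correlation of centered clouds
    Sxx, Sxy, Sxz = H[0]
    Syx, Syy, Syz = H[1]
    Szx, Szy, Szz = H[2]
    # 4x4 symmetric matrix whose top eigenvector is the optimal quaternion
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    w, V = np.linalg.eigh(N)
    q0, q1, q2, q3 = V[:, -1]            # eigenvector of largest eigenvalue
    R = np.array([
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1*q1 + q2*q2)]])
    t = B.mean(0) - R @ A.mean(0)
    return R, t
```

A constrained variant, as in the paper, would instead penalize marker positions that leave their allowed region around the bone inside an iterative (e.g., Levenberg-Marquardt) solver.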
Optimization of transmission system design based on genetic algorithm
Directory of Open Access Journals (Sweden)
Xianbing Chen
2016-05-01
Full Text Available The transmission system is a crucial precision mechanism for twin-screw chemi-mechanical pulping equipment. The structure designed by traditional methods is often not optimal because such methods easily fall into local optima. To achieve the global optimum, this article applies the genetic algorithm, which has grown in recent years in the field of structure optimization. The article uses the volume of the transmission system as the objective function to optimize the structure designed by the traditional method. The simulation results show that the original structure is not optimal, while the optimized structure is more compact and more reasonable. Based on the optimized results, the transmission shafts in the transmission system are designed and checked, and the parameters of the twin screw are selected and calculated. The article provides an effective method to design the structure of a transmission system.
Le Corre, Lucille; Sanchez, Juan A.; Reddy, Vishnu; Takir, Driss; Cloutis, Edward A.; Thirouin, Audrey; Becker, Kris J.; Li, Jian-Yang; Sugita, Seiji; Tatsumi, Eri
2018-03-01
Asteroids that are targets of spacecraft missions are interesting because they present us with an opportunity to validate ground-based spectral observations. One such object is near-Earth asteroid (NEA) (162173) Ryugu, which is the target of the Japan Aerospace Exploration Agency's (JAXA) Hayabusa2 sample return mission. We observed Ryugu using the 3-m NASA Infrared Telescope Facility on Mauna Kea, Hawaii, on 2016 July 13 to constrain the object's surface composition, meteorite analogues, and link to other asteroids in the main belt and NEA populations. We also modelled its photometric properties using archival data. Using the Lommel-Seeliger model we computed the predicted flux for Ryugu at a wide range of viewing geometries as well as albedo quantities such as geometric albedo, phase integral, and spherical Bond albedo. Our computed albedo quantities are consistent with results from Ishiguro et al. Our spectral analysis has found a near-perfect match between our spectrum of Ryugu and those of NEA (85275) 1994 LY and Mars-crossing asteroid (316720) 1998 BE7, suggesting that their surface regoliths have similar composition. We compared Ryugu's spectrum with that of main belt asteroid (302) Clarissa, the largest asteroid in the Clarissa asteroid family, suggested as a possible source of Ryugu by Campins et al. We found that the spectrum of Clarissa shows significant differences from our spectrum of Ryugu, but it is similar to the spectrum obtained by Moskovitz et al. The best possible meteorite analogues for our spectrum of Ryugu are two CM2 carbonaceous chondrites, Mighei and ALH83100.
DEFF Research Database (Denmark)
Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe
2010-01-01
limit requirements. The method combines the Genetic Algorithm (GA), gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor...
Directory of Open Access Journals (Sweden)
Yan Sun
2015-09-01
Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization considering two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is gained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, the sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of the variation of the demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
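The Pareto-frontier idea can be illustrated on a toy two-leg multi-modal network (all modes, costs, and times below are made up for illustration; the paper itself uses the normalized normal constraint method solved in Lingo):

```python
def pareto_front(points):
    """Non-dominated (cost, time) pairs, both minimized."""
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

# hypothetical legs (mode, cost, time) for origin->hub and hub->destination
legs1 = [("rail", 40, 10), ("road", 25, 14)]
legs2 = [("sea", 30, 30), ("barge", 35, 33), ("air", 60, 4)]
routes = [(c1 + c2, t1 + t2) for _, c1, t1 in legs1 for _, c2, t2 in legs2]
front = pareto_front(routes)   # the cost/time trade-off curve
```

Each point on `front` is a route no other route beats on both cost and time; the MTO or customer then picks from this set according to their own preference.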
Workshop on Computational Optimization
2015-01-01
Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems like parameter settings for controlling processes in bioreactors, resource-constrained project scheduling, problems arising in transport services, error-correcting codes, optimal system performance and energy consumption, and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others.
Stochastic learning and optimization a sensitivity-based approach
Cao, Xi-Ren
2007-01-01
Performance optimization is vital in the design and operation of modern engineering systems. This book provides a unified framework based on a sensitivity point of view. It introduces new approaches and proposes new research topics.
Location based Network Optimizations for Mobile Wireless Networks
DEFF Research Database (Denmark)
Nielsen, Jimmy Jessen
The availability of location information in mobile devices, e.g., through built-in GPS receivers in smart phones, has motivated the investigation of the usefulness of location based network optimizations. Since the quality of input information is important for network optimizations, a main focus...... of this work is to evaluate how location based network optimizations are affected by varying quality of input information such as location information and user movements. The first contribution in this thesis concerns cooperative network-based localization systems. The investigations focus on assessing...... the achievable accuracy of future localization system in mobile settings, as well as quantifying the impact of having a realistic model of the required measurement exchanges. Secondly, this work has considered different large scale and small scale location based network optimizations, namely centralized relay...
Lightweight cryptography for constrained devices
DEFF Research Database (Denmark)
Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco
2014-01-01
Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....
Adaptive Central Force Optimization Algorithm Based on the Stability Analysis
Directory of Open Access Journals (Sweden)
Weiyi Qian
2015-01-01
Full Text Available In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented by introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of ACFO is compared with other improved CFO algorithms and an evolutionary-based algorithm using 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.
Reliability-Based Structural Optimization of Wave Energy Converters
DEFF Research Database (Denmark)
Ambühl, Simon; Kramer, Morten; Sørensen, John Dalsgaard
2014-01-01
More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs......, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem...
Optimal Reliability-Based Planning of Experiments for POD Curves
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Faber, Michael Havbro; Kroon, I. B.
Optimal planning of the crack detection test is considered. The tests are used to update the information on the reliability of the inspection techniques modelled by probability of detection (P.O.D.) curves. It is shown how cost-optimal and reliability based test plans can be obtained using First Order Reliability Methods in combination with life-cycle cost-optimal inspection and maintenance planning. The methodology is based on preposterior analyses from Bayesian decision theory. An illustrative example is shown.
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are insufficiently robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Inferring meaningful communities from topology-constrained correlation networks.
Hleap, Jose Sergio; Blouin, Christian
2014-01-01
Community structure detection is an important tool in graph analysis. This can be done, among other ways, by solving for the partition set which optimizes the modularity score Q. Here it is shown that topological constraints in correlation graphs induce over-fragmentation of community structures. A refinement step to this optimization based on Linear Discriminant Analysis (LDA) and a statistical test for significance is proposed. In structured simulations constrained by topology, this novel approach performs better than the optimization of modularity alone. This method was also tested with two empirical datasets: the Roll-Call voting in the 110th US Senate constrained by geographic adjacency, and a biological dataset of 135 protein structures constrained by inter-residue contacts. The former dataset showed sub-structures in the communities that revealed a regional bias in the votes which transcends party affiliations. This is an interesting pattern given that the 110th Legislature was assumed to be a highly polarized government. The α-amylase catalytic domain dataset (biological dataset) was analyzed with and without topological constraints (inter-residue contacts). The results without topological constraints differed from the topology-constrained ones, but the LDA filtering did not change the outcome of the latter. This suggests that the LDA filtering is a robust way to solve the possible over-fragmentation when present, and that this method will not affect the results where there is no evidence of over-fragmentation.
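As a reference point for the modularity optimization discussed above, here is a minimal sketch of Newman's modularity Q for a hard partition of an undirected graph. The two-triangles-plus-bridge toy graph is purely illustrative, not one of the paper's datasets.

```python
def modularity(adj, communities):
    """Newman's modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j).
    adj: dict node -> set of neighbours (undirected); communities: list of node sets."""
    m2 = sum(len(nbrs) for nbrs in adj.values())          # 2m = sum of degrees
    label = {v: i for i, comm in enumerate(communities) for v in comm}
    q = 0.0
    for i in adj:
        for j in adj:
            if label[i] != label[j]:
                continue                                  # delta term is zero
            a = 1.0 if j in adj[i] else 0.0               # adjacency entry A_ij
            q += a - len(adj[i]) * len(adj[j]) / m2       # minus null-model term
    return q / m2

# Two triangles joined by a single bridge edge: the natural 2-community split.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q_good = modularity(adj, [{0, 1, 2}, {3, 4, 5}])   # natural split scores higher
q_bad = modularity(adj, [{0, 1, 2, 3, 4, 5}])      # trivial partition scores 0
```

Maximizing Q over partitions is what the modularity-based detectors in the abstract do; the LDA refinement step then merges communities that this maximization has over-fragmented.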
Directory of Open Access Journals (Sweden)
Fei Wang
2017-07-01
Full Text Available The optimized dispatch of different distributed generations (DGs) in a stand-alone microgrid (MG) is of great significance to the reliability and economy of operation, especially in the context of the energy crisis and environmental pollution. Based on controllable load (CL) and the combined cooling-heating-power (CCHP) model of the micro-gas turbine (MT), a multi-objective optimization model with relevant constraints is proposed in this paper to optimize the generation cost, load-cut compensation and environmental benefit. The MG studied in this paper consists of photovoltaic (PV), wind turbine (WT), fuel cell (FC), diesel engine (DE), MT and energy storage (ES). Four typical scenarios were designed according to different day types (workday or weekend) and weather conditions (sunny or rainy) in view of the uncertainty of renewable energy in variable situations and load fluctuation. A modified dispatch strategy for CCHP is presented to further improve operational economy without reducing consumers' comfort. Chaotic optimization and an elite retention strategy are introduced into basic particle swarm optimization (PSO) to form a modified chaos particle swarm optimization (MCPSO), whose search capability and convergence speed are greatly improved. Simulation results validate the correctness of the proposed model and the effectiveness of the MCPSO algorithm in the optimized operation of a stand-alone MG.
Simulation of Evapotranspiration using an Optimality-based Ecohydrological Model
Chen, Lajiao
2014-05-01
Accurate estimation of evapotranspiration (ET) is essential in understanding the effect of climate change and human activities on ecosystems and water resources. As an important tool for ET estimation, most traditional hydrological or ecohydrological models treat ET as a physical process controlled by energy, vapor pressure and turbulence. This is at times questionable, as transpiration, the major component of ET, is a biological activity closely linked to photosynthesis through stomatal conductance. Optimality-based ecohydrological models consider the mutual interaction of ET and photosynthesis based on the optimality principle. However, as a rising generation of ecohydrological models, there are so far only a few applications of optimality-based models in different ecosystems, and the ability and reliability of this kind of model need to be validated in more ecosystems. The objective of this study is to validate the optimality hypothesis for a water-limited ecosystem. To achieve this, the study applied an optimality-based model, the Vegetation Optimality Model (VOM), to simulate ET and its components based on the optimality principle. The model is applied in a semiarid watershed. The simulated ET and soil water were compared with long-term measurement data at the Kendall and Lucky Hills sites in the watershed. The results showed that the temporal variations of simulated ET and soil water are in good agreement with observed data, and that the temporal dynamics of soil evaporation and transpiration and their response to precipitation events are well captured by the model. This leads to the conclusion that an optimality-based ecohydrological model could be a viable approach to simulating ET.
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid poses a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market, considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Preliminary test results show that the proposed DCOPF model can provide a lower dispatch cost than the SNP approach.
GENETIC ALGORITHM BASED CONCEPT DESIGN TO OPTIMIZE NETWORK LOAD BALANCE
Directory of Open Access Journals (Sweden)
Ashish Jain
2012-07-01
Full Text Available Multi-constraint optimal network load balancing is an NP-hard problem and an important part of traffic engineering. In this research we first balance the network load using classical methods: a brute-force approach and dynamic programming. The results expose the limitations of these methods: as the number of nodes and demands grows, the solution set increases exponentially, and optimizing the balanced network load becomes intractable with classical methods. In such cases, optimization techniques such as evolutionary methods can be employed to optimize network load balance. In this paper we analyze the proposed classical algorithm, and an evolutionary genetic approach is devised and proposed for optimizing network load balance.
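As a hedged illustration of why an evolutionary approach helps where brute force becomes intractable, the sketch below runs a small genetic algorithm that assigns traffic demands to links so that the heaviest link load is minimized. The demand values, link count, and GA parameters are all invented for the example, not taken from the paper.

```python
import random

def ga_balance(demands, n_links=3, pop=40, gens=120, pm=0.1, seed=0):
    """Assign each demand to one link so the heaviest link load is minimized."""
    rng = random.Random(seed)

    def load(ch):                       # fitness: max per-link load (lower is better)
        loads = [0.0] * n_links
        for d, g in zip(demands, ch):
            loads[g] += d
        return max(loads)

    popn = [[rng.randrange(n_links) for _ in demands] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=load)
        next_pop = popn[:2]             # elitism: carry over the two best
        while len(next_pop) < pop:
            p1, p2 = rng.sample(popn[:pop // 2], 2)   # select from the fitter half
            cut = rng.randrange(1, len(demands))
            child = p1[:cut] + p2[cut:]               # one-point crossover
            if rng.random() < pm:                     # random-reset mutation
                child[rng.randrange(len(demands))] = rng.randrange(n_links)
            next_pop.append(child)
        popn = next_pop
    best = min(popn, key=load)
    return best, load(best)

demands = [5, 8, 3, 7, 4, 6, 2, 9]      # total 44 over 3 links -> peak >= 15
assignment, peak = ga_balance(demands)
```

With 8 demands and 3 links the search space is only 3^8, so brute force still works here; the GA's advantage appears when nodes and demands grow and exhaustive enumeration explodes, as the abstract notes.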
Inferring biomolecular interaction networks based on convex optimization.
Han, Soohee; Yoon, Yeoin; Cho, Kwang-Hyun
2007-10-01
We present an optimization-based inference scheme to unravel the functional interaction structure of biomolecular components within a cell. The regulatory network of a cell is inferred from data obtained by perturbation of adjustable parameters or initial concentrations of specific components. It turns out that the identification procedure leads to a convex optimization problem with regularization, as we have to achieve sparsity of the network and also reflect any a priori information on the network structure. Since convex optimization has been well studied for a long time, a variety of efficient algorithms have been developed and many numerical solvers are freely available. In order to estimate time derivatives from discrete-time samples, a cubic spline fitting is incorporated into the proposed optimization procedure. Through simulation studies on several examples, it is shown that the proposed convex optimization scheme can effectively uncover the functional interaction structure of a biomolecular regulatory network with reasonable accuracy.
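The regularized convex step described above (a least-squares fit plus an l1 penalty to enforce network sparsity) can be illustrated with proximal-gradient iterations. This is a generic ISTA sketch on synthetic data, not the paper's solver, and it omits the cubic-spline derivative estimation; all names and sizes are invented for the example.

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    """Sparse recovery via iterative soft-thresholding (proximal gradient)
    for the convex problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))          # stand-in for perturbation-response data
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]      # a sparse set of "interactions"
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.5)                # recovers the sparse support
```

The l1 weight `lam` plays the role of the regularization the abstract mentions: larger values give sparser inferred networks at the cost of shrinking the recovered interaction strengths.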
Length scale and manufacturability in density-based topology optimization
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Wang, Fengwen; Sigmund, Ole
2016-01-01
Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes, where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need for postprocessing. Subsequent amendments can affect the optimized design performance and in many cases can completely destroy the optimality of the solution. Therefore, the goal of this paper is to review recent advancements in obtaining manufacturable topology-optimized designs. The focus is on methods for imposing minimum and maximum length scales and ensuring manufacturability.
PRODUCTION OPTIMIZATION USING AGENT-BASED SYSTEM
Directory of Open Access Journals (Sweden)
Aleksandar Vujovic
2016-03-01
Full Text Available Production systems undergo frequent changes due to growing demand and the need to remain competitive in the market. The application of intelligent systems can therefore greatly increase flexibility and efficiency while reducing overall costs. Using the example of a system for producing irregular, variably shaped parts by cutting flat wooden surfaces, we discuss the possibility of applying an intelligent agent-based system. To implement it in the production process, it was first necessary to analyze and assess the initial situation. We then identified weak points and proposed improvements to the process through the application of agents. The obtained solution reduced the number of engaged workers and the scope of their duties, sped up the flow of materials and improved their utilization; finally, we introduce the scheme of the new agent-based manufacturing process that achieves the foregoing benefits.
Energy Technology Data Exchange (ETDEWEB)
Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2010-01-15
In this study, an effective method for reliability-based design optimization (RBDO) is proposed that enhances the sequential optimization and reliability assessment (SORA) method with convex approximations. In SORA, reliability estimation and deterministic optimization are performed sequentially, and the sensitivity and function value of the probabilistic constraint at the most probable point (MPP) are obtained in the reliability analysis loop. In this study, convex approximations of the probabilistic constraints are constructed by utilizing the sensitivity and function value at the MPP. Hence, the proposed method requires far fewer function evaluations of the probabilistic constraints in the deterministic optimization than the original SORA method. The efficiency and accuracy of the proposed method were verified through numerical examples.
Kim, Yoon Jae; Kim, Yoon Young
2010-10-01
This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, together with their respective thicknesses, to maximize sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly used material interpolation technique because the layers involved exhibit fundamentally different acoustic behavior; thus, an optimization formulation using a so-called unified transfer matrix is newly proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can represent air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems such as sound transmission or absorption maximization are solved to check the efficiency of the developed method.
Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm
Energy Technology Data Exchange (ETDEWEB)
Chaudhary, Kailash; Chaudhary, Himanshu [Malaviya National Institute of Technology, Jaipur (India)
2015-11-15
In this paper, a two-stage optimization technique is presented for the optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibration and noise in the engine and to improve vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved in the first stage of the optimization by finding the optimum mass distribution of the crank and connecting rod, using an equimomental system of point masses. In the second stage, their shapes are synthesized systematically by closed parametric curves, i.e., cubic B-spline curves, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem of minimizing both the shaking force and the shaking moment is solved using the teaching-learning-based optimization algorithm (TLBO), and its computational performance is compared with a genetic algorithm (GA).
A new efficient optimal path planner for mobile robot based on Invasive Weed Optimization algorithm
Mohanty, Prases K.; Parhi, Dayal R.
2014-12-01
Planning of the shortest/optimal route is essential for the efficient operation of an autonomous mobile robot or vehicle. In this paper Invasive Weed Optimization (IWO), a new meta-heuristic algorithm, is implemented to solve the path planning problem of a mobile robot in partially or totally unknown environments. This meta-heuristic optimization is based on the colonizing property of weeds. We first frame an objective function that satisfies the conditions of obstacle avoidance and the target-seeking behavior of the robot in partially or completely unknown environments. Depending upon the value of the objective function of each weed in the colony, the robot avoids obstacles and proceeds towards the destination. The optimal trajectory is generated with this navigational algorithm when the robot reaches its destination. The effectiveness, feasibility, and robustness of the proposed algorithm have been demonstrated through a series of simulation and experimental results. Finally, it was found that the developed path planning algorithm can be effectively applied to many kinds of complex situations.
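A minimal sketch of the colonizing mechanism behind IWO, applied here to a toy sphere function rather than robot path planning: fitter weeds spawn more seeds, and the seed-dispersion radius shrinks over iterations so the search contracts around good regions. All parameter values are illustrative, not the paper's.

```python
import random

def iwo(f, dim, iters=100, pop_max=20, smin=0, smax=5,
        sigma_init=1.0, sigma_final=0.01, seed=3):
    """Minimize f: each weed seeds proportionally to its relative fitness,
    seeds disperse normally around the parent, and the worst weeds are culled."""
    rng = random.Random(seed)
    weeds = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(5)]
    for t in range(iters):
        # nonlinearly decreasing dispersion (modulation index 2)
        sigma = sigma_final + (sigma_init - sigma_final) * ((iters - t) / iters) ** 2
        fits = [f(w) for w in weeds]
        best, worst = min(fits), max(fits)
        offspring = []
        for w, fit in zip(weeds, fits):
            # better (lower) fitness -> more seeds
            ratio = (worst - fit) / (worst - best) if worst > best else 1.0
            n_seeds = int(smin + ratio * (smax - smin))
            for _ in range(n_seeds):
                offspring.append([x + rng.gauss(0, sigma) for x in w])
        # competitive exclusion: only the pop_max fittest plants survive
        weeds = sorted(weeds + offspring, key=f)[:pop_max]
    return weeds[0], f(weeds[0])

best, val = iwo(lambda x: sum(v * v for v in x), dim=2)
```

In the path-planning setting of the abstract, `f` would instead score candidate robot moves by combining obstacle-avoidance and target-seeking terms.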
Image quality improvement of multi-projection 3D display through tone mapping based optimization.
Wang, Peng; Sang, Xinzhu; Zhu, Yanhong; Xie, Songlin; Chen, Duo; Guo, Nan; Yu, Chongxiu
2017-08-21
An optical 3D screen usually exhibits a certain diffuse reflectivity or diffuse transmission, and multi-projection 3D displays suffer from decreased local display contrast due to crosstalk among the projected contents. A tone mapping based optimization method is proposed to suppress the crosstalk and improve the display contrast by minimizing the visible contrast distortions between the displayed light field and a contrast-enhanced target. The contrast distortions are weighted according to the visibility predicted by a model of the human visual system, and the distortions are minimized for the given multi-projection 3D display model, which enforces constraints on the solution. The proposed method can adjust parallax images or parallax video content for optimum 3D display image quality, taking into account the display characteristics and ambient illumination. The validity of the method is evaluated and demonstrated in experiments.
Pamay, Mehmet Berke
2011-01-01
This study addresses the Resource Constrained Multi Project Scheduling Problem with Weighted Earliness Tardiness Costs (RCMPSPWET). In multi-project environments, the project portfolio of a company often changes dramatically over time. In this dynamic context, the arrival of a new project requires quoting a due date while keeping the disruptions to the existing plans and schedules to a minimum. The suggested solution method is an adaptation of the well-known shifting bottleneck (SB) h...
Optimal and Robust Quantum Metrology Using Interaction-Based Readouts
Nolan, Samuel P.; Szigeti, Stuart S.; Haine, Simon A.
2017-11-01
Useful quantum metrology requires nonclassical states with a high particle number and (close to) the optimal exploitation of the state's quantum correlations. Unfortunately, the single-particle detection resolution demanded by conventional protocols, such as spin squeezing via one-axis twisting, places severe limits on the particle number. Additionally, the challenge of finding optimal measurements (that saturate the quantum Cramér-Rao bound) for an arbitrary nonclassical state limits most metrological protocols to only moderate levels of quantum enhancement. "Interaction-based readout" protocols have been shown to allow optimal interferometry or to provide robustness against detection noise at the expense of optimality. In this Letter, we prove that one has great flexibility in constructing an optimal protocol, thereby allowing it to also be robust to detection noise. This requires the full probability distribution of outcomes in an optimal measurement basis, which is typically easily accessible and can be determined from specific criteria we provide. Additionally, we quantify the robustness of several classes of interaction-based readouts under realistic experimental constraints. We determine that optimal and robust quantum metrology is achievable in current spin-squeezing experiments.
Orthogonal Analysis Based Performance Optimization for Vertical Axis Wind Turbine
Directory of Open Access Journals (Sweden)
Lei Song
2016-01-01
Full Text Available The geometrical shape of a vertical axis wind turbine (VAWT) is composed of multiple structural parameters. Since there are interactions among the structural parameters, traditional research approaches, which usually focus on one parameter at a time, cannot assess the performance of the wind turbine accurately. In order to capture the overall behavior of a novel VAWT, we first use a single-parameter optimization method to obtain optimal values of the structural parameters individually, by the Computational Fluid Dynamics (CFD) method; based on these results, we then use an orthogonal analysis method to investigate the influence of interactions among the structural parameters on the performance of the wind turbine and to obtain an optimal combination of the structural parameters that accounts for the interactions. Results of the analysis of variance indicate that interactions among the structural parameters influence the performance of the wind turbine, and the optimization results based on orthogonal analysis achieve higher wind energy utilization than those of traditional research approaches.
Particle swarm optimization based space debris surveillance network scheduling
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris objects has created an orbital debris environment that poses growing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, surveillance tasks should be optimally scheduled across the existing facilities, allocating resources so as to most significantly improve the ability to predict and detect events involving the affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of space debris surveillance network tasks. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results demonstrate its effectiveness.
Maximum length scale in density based topology optimization
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design to a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...
Cooperative Game Study of Airlines Based on Flight Frequency Optimization
Directory of Open Access Journals (Sweden)
Wanming Liu
2014-01-01
Full Text Available By applying game theory, the relationship between airline ticket price and optimal flight frequency is analyzed. The paper establishes the payoff matrix of flight frequency in the noncooperation scenario and a flight frequency optimization model in the cooperation scenario. The airline alliance profit distribution is converted into a profit distribution game based on cooperative game theory. The profit distribution game is proved to be convex, so an optimal distribution strategy exists. The results show that joining the airline alliance can increase the airlines' overall profit, that changes in negotiated prices and costs benefit the profit distribution of large airlines, and that the distribution result is in accordance with aviation development.
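For a convex profit distribution game like the one above, a standard allocation is the Shapley value, which for convex games is guaranteed to lie in the core. The sketch below computes it by brute force over all join orders; the three-airline coalition profits are invented for illustration and are not from the paper.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: average each player's marginal contribution
    v(S + {p}) - v(S) over all n! orders in which players join the coalition."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in phi}

# Hypothetical 3-airline alliance: profit of every possible coalition.
profits = {frozenset(): 0, frozenset('A'): 4, frozenset('B'): 3, frozenset('C'): 2,
           frozenset('AB'): 9, frozenset('AC'): 7, frozenset('BC'): 6,
           frozenset('ABC'): 14}
phi = shapley(['A', 'B', 'C'], lambda s: profits[frozenset(s)])
```

The allocation is efficient by construction (the shares sum to the grand-coalition profit of 14), which mirrors the paper's point that a convex game admits a well-behaved optimal distribution strategy.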
Genetic-evolution-based optimization methods for engineering design
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, to engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems; both continuous- and discrete-variable problems are solved. The design of a two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous-variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, illustrates the discrete-variable case.
Perspective texture synthesis based on improved energy optimization.
Directory of Open Access Journals (Sweden)
Syed Muhammad Arsalan Bashir
Full Text Available Perspective texture synthesis has great significance in many fields, such as video editing and scene capturing, due to its ability to read and control global feature information. In this paper, we present a novel example-based algorithm, specifically an energy optimization-based one, to synthesize perspective textures. The energy optimization technique is a pixel-based approach and is therefore time-consuming. We improve it in two respects, aiming at faster synthesis and high quality. First, we replace the per-pixel computation with a small patch. Second, we present a novel technique to accelerate the nearest-neighborhood search in energy optimization: a search tree is built with the k-means clustering technique to accelerate the search, and principal component analysis (PCA) is used to reduce the dimensionality of the input vectors. The high-quality results prove that our approach is feasible, and the proposed algorithm needs less time than other similar methods.
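The acceleration idea above (PCA to reduce patch dimensionality, then a clustering-based structure to prune nearest-neighbour search) can be sketched as follows. This is a simplified one-level cluster index rather than the paper's search tree, with random data standing in for texture patches; all names and sizes are invented.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns final centers and assignments."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    # final assignment against the final centers
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

def nearest(query, X, centers, labels):
    """Prune the search: scan only the cluster whose center is closest."""
    c = np.argmin(((centers - query) ** 2).sum(-1))
    idx = np.where(labels == c)[0]
    return idx[np.argmin(((X[idx] - query) ** 2).sum(-1))]

rng = np.random.default_rng(1)
patches = rng.standard_normal((500, 64))   # stand-in for 8x8 texture patches
Z = pca_project(patches, 8)                # 64 -> 8 dimensions
centers, labels = kmeans(Z, 10)
hit = nearest(Z[42], Z, centers, labels)   # exact match found in its own cluster
```

Scanning one cluster instead of all 500 points trades a small risk of missing the true neighbour (when it lies just across a cluster boundary) for a roughly k-fold reduction in distance computations.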
An opinion formation based binary optimization approach for feature selection
Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo
2018-02-01
This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.
Applying BAT Evolutionary Optimization to Image-Based Visual Servoing
Directory of Open Access Journals (Sweden)
Marco Perez-Cisneros
2015-01-01
Full Text Available This paper presents a predictive control strategy for an image-based visual servoing scheme that employs evolutionary optimization. The visual control task is approached as a nonlinear optimization problem that naturally handles relevant visual servoing constraints such as workspace limitations and visibility restrictions. As the predictive scheme requires a reliable model, this paper uses a local model based on the visual interaction matrix and a global model that employs 3D trajectory data extracted from a quaternion-based interpolator. The work assumes a free-flying camera in a 6-DOF simulation, whose results support the discussion of constraint handling and the image prediction scheme.
Optimization of wireless sensor networks based on chicken swarm optimization algorithm
Wang, Qingxi; Zhu, Lihua
2017-05-01
In order to reduce the energy consumption of wireless sensor networks and improve network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and chickens that fall into local optima are relocated by Lévy flight, which enhances population diversity and preserves the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by balancing the use of network nodes, improving the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms both the LEACH protocol and a clustering routing protocol based on particle swarm optimization in terms of energy consumption.
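The Lévy-flight perturbation mentioned above is commonly generated with Mantegna's algorithm, sketched below: a ratio of two Gaussians produces heavy-tailed step lengths, so most moves are small while occasional long jumps escape local optima. The stability index `beta = 1.5` is a common choice, not necessarily the paper's.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)      # scale of the numerator Gaussian
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)        # heavy-tailed step

rng = random.Random(7)
steps = [levy_step(rng=rng) for _ in range(2000)]
```

In the protocol above, a trapped candidate's position would be displaced by such a step (scaled per dimension), giving a mix of fine local moves and rare long-range relocations.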
Directory of Open Access Journals (Sweden)
A. P. Karpenko
2014-01-01
Full Text Available We consider a class of stochastic search algorithms for global optimization which various publications call behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc.; we use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that a single such algorithm may not always be effective, so great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite different algorithms, or identical algorithms with different values of their free parameters, so that the strength of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, a software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs
Directory of Open Access Journals (Sweden)
Jiajun Liu
2017-10-01
Full Text Available Energy storage systems (ESSs) play an important role in the performance of mining vehicles. A hybrid ESS combining batteries (BTs) and supercapacitors (SCs) is one of the most promising solutions. As a case study, this paper discusses the optimal hybrid ESS sizing and energy management strategy (EMS) of 14-ton underground load-haul-dump vehicles (LHDs). Three novel contributions are added to the relevant literature. First, a multi-objective optimization is formulated regarding energy consumption and the total cost of a hybrid ESS, which are the key factors for LHDs, and a battery capacity degradation model is used. During the process, dynamic programming (DP)-based EMS is employed to obtain the optimal energy consumption and hybrid ESS power profiles. Second, a 10-year life cycle cost model of a hybrid ESS for LHDs is established to calculate the total cost, including capital cost, operating cost, and replacement cost. According to the optimization results, three solutions chosen from the Pareto front are compared comprehensively, and the optimal one is selected. Finally, the optimal and battery-only options are compared quantitatively using the same objectives, and the hybrid ESS is found to be a more economical and efficient option.
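A DP-based EMS like the one used in the first contribution can be illustrated on a heavily simplified problem: a storage unit with a per-step discharge limit and a discretized state of charge, where cheap stored energy offsets expensive engine energy at each step. All costs, limits, and the integer discretization are invented for the example and are far coarser than a real EMS.

```python
def dp_ems(demand, soc_levels=11, batt_max=2, fuel_cost=1.0, batt_cost=0.3):
    """Minimal DP energy split: at each step the battery supplies an integer
    amount b (bounded by its rate limit and state of charge); the engine
    covers the remainder d - b. Returns the minimum total cost."""
    INF = float('inf')
    # cost[s] = best cost-to-date ending with state of charge s (integer units)
    cost = [INF] * soc_levels
    cost[soc_levels - 1] = 0.0              # start fully charged
    for d in demand:
        new = [INF] * soc_levels
        for s in range(soc_levels):
            if cost[s] == INF:
                continue
            for b in range(0, min(batt_max, s, d) + 1):   # battery discharge
                c = cost[s] + fuel_cost * (d - b) + batt_cost * b
                if c < new[s - b]:
                    new[s - b] = c
        cost = new
    return min(cost)                        # best over all final states of charge

total = dp_ems([3, 1, 4, 2])                # rate limit caps battery use per step
```

A real DP-based EMS would add charging, efficiency losses, and the battery degradation cost from the paper's model, but the backward/forward value-iteration structure is the same.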
Spinning Reserve Requirements Optimization Based on an Improved Multiscenario Risk Analysis Method
Directory of Open Access Journals (Sweden)
Liudong Zhang
2017-01-01
Full Text Available This paper proposes a novel security-constrained unit commitment model to calculate the optimal spinning reserve (SR) amount. The model combines cost-benefit analysis with an improved multiscenario risk analysis method capable of considering various uncertainties, including load and wind power forecast errors as well as forced outages of generators. In this model, cost-benefit analysis is utilized to simultaneously minimize the operation cost of conventional generators, the expected cost of load shedding, the penalty cost of wind power spillage, and the carbon emission cost. It remedies the defects of the deterministic and probabilistic methods of SR calculation. In cases where load and wind power generation are negatively correlated, this model based on multistep modeling of net demand can consider the wind power curtailment to maximize the overall economic efficiency of system operation so that the optimal economic values of wind power and SR are achieved. In addition, the impact of the nonnormal probability distributions of wind power forecast error on SR optimization can be taken into account. Using the mixed integer linear programming method, simulation studies on a modified IEEE 26-generator reliability test system connected to a wind farm are performed to confirm the effectiveness and advantage of the proposed model.
A Dynamic Optimization Method of Indoor Fire Evacuation Route Based on Real-time Situation Awareness
Directory of Open Access Journals (Sweden)
DING Yulin
2016-12-01
Full Text Available How to provide safe and effective evacuation routes is an important safeguard for correctly guiding evacuation and reducing casualties while a fire situation evolves rapidly in a complex indoor environment. Traditional static path-finding methods cannot adjust the path adaptively to the changing fire situation, which leads to blind and delayed evacuation decision-making. This paper proposes a method that dynamically optimizes indoor evacuation routes based on real-time situation awareness: according to real-time perception of fire situation parameters and changing indoor environment information, the evacuation route is optimized dynamically. An integrated representation of multi-source indoor fire-monitoring sensor observations, oriented to fire emergency evacuation, is presented first; real-time fire threat situation information inside the building is then extracted from the observation data of the multi-source sensors and used to constrain the dynamic optimization of the topology of the evacuation route. Finally, simulation experiments prove that this method can improve the accuracy and efficiency of indoor evacuation routing.
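The dynamic re-planning idea can be sketched as a shortest-path search over a corridor graph whose edge costs are scaled by real-time hazard estimates, re-run whenever the sensed situation changes. The graph and hazard map below are hypothetical stand-ins for the paper's sensor-derived threat information.

```python
import heapq

def shortest_path(graph, src, dst, hazard):
    """Dijkstra over a corridor graph; a hazard factor multiplies the base
    traversal cost of an edge. Re-run whenever sensor updates arrive."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        for v, w in graph[u]:
            nd = d + w * hazard.get((u, v), 1.0)
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                     # walk back from exit to source
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Toy building graph: rooms/corridors with base traversal costs.
graph = {'A': [('B', 1), ('C', 2)], 'B': [('D', 1)], 'C': [('D', 1)], 'D': []}
print(shortest_path(graph, 'A', 'D', {}))                  # ['A', 'B', 'D']
print(shortest_path(graph, 'A', 'D', {('A', 'B'): 10.0}))  # ['A', 'C', 'D']
```

A fire on corridor A-B raises its hazard factor, and the re-plan reroutes through C, mirroring the paper's situation-constrained route topology update.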
Gradient-based methods for production optimization of oil reservoirs
Energy Technology Data Exchange (ETDEWEB)
Suwartadi, Eka
2012-07-01
Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. These kinds of constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. To speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method, but they may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
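The Hessian-times-vector strategy mentioned above can be illustrated with a truncated-Newton style conjugate gradient solve of the Newton system H x = -g that touches H only through matrix-vector products. The tiny explicit quadratic is purely illustrative; in the thesis setting the product would come from second-order adjoints.

```python
def cg_hvp(hvp, g, iters=50, tol=1e-10):
    """Conjugate gradient for H x = -g using only Hessian-vector products,
    so the Hessian never needs to be formed or inverted explicitly."""
    n = len(g)
    x = [0.0] * n
    r = [-gi for gi in g]                      # residual b - Hx with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / sum(pi * hi for pi, hi in zip(p, Hp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * hi for ri, hi in zip(r, Hp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy quadratic with H = [[4, 1], [1, 3]]; the Newton step solves H x = -g.
H = [[4.0, 1.0], [1.0, 3.0]]
hvp = lambda v: [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]
newton_step = cg_hvp(hvp, [1.0, 2.0])
print(newton_step)   # Newton step, exactly [-1/11, -7/11] analytically
```

For an n-dimensional problem, CG needs at most n such products, which is why pairing it with adjoint-computed Hessian-vector products avoids the full Newton cost.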
Cover crop-based ecological weed management: exploration and optimization
Kruidhof, H.M.
2008-01-01
Keywords: organic farming, ecologically-based weed management, cover crops, green manure, allelopathy, Secale cereale, Brassica napus, Medicago sativa Cover crop-based ecological weed management: exploration and optimization. In organic farming systems, weed control is recognized as one of the
SU-G-BRA-03: PCA Based Imaging Angle Optimization for 2D Cine MRI Based Radiotherapy Guidance
Energy Technology Data Exchange (ETDEWEB)
Chen, T; Yue, N; Jabbour, S; Zhang, M [Rutgers University, New Brunswick, NJ (United States)
2016-06-15
Purpose: To develop an imaging angle optimization methodology for orthogonal 2D cine MRI based radiotherapy guidance using principal component analysis (PCA) of target motion retrieved from 4DCT. Methods: We retrospectively analyzed 4DCT of 6 patients with lung tumor. A radiation oncologist manually contoured the target volume at the maximal inhalation phase of the respiratory cycle. An object-constrained deformable image registration (DIR) method has been developed to track the target motion along the respiration at ten phases. The motion of the center of the target mass has been analyzed using PCA to find the principal motion components that are uncorrelated with each other. Two orthogonal image planes for cine MRI have been determined using this method to minimize the through-plane motion during MRI based radiotherapy guidance. Results: 3D target respiratory motion for all 6 patients has been efficiently retrieved from 4DCT. In this process, the object-constrained DIR demonstrated satisfactory accuracy and efficiency to enable automatic motion tracking for clinical application. The average motion amplitudes in the AP, lateral, and longitudinal directions were 3.6mm (min: 1.6mm, max: 5.6mm), 1.7mm (min: 0.6mm, max: 2.7mm), and 5.6mm (min: 1.8mm, max: 16.1mm), respectively. Based on PCA, the optimal orthogonal imaging planes were determined for cine MRI. The average angular differences between the PCA-determined imaging planes and the traditional AP and lateral imaging planes were 47 and 31 degrees, respectively. After optimization, the average amplitude of through-plane motion reduced from 3.6mm in AP images to 2.5mm (min: 1.3mm, max: 3.9mm), and from 1.7mm in lateral images to 0.6mm (min: 0.2mm, max: 1.5mm), while the principal in-plane motion amplitude increased from 5.6mm to 6.5mm (min: 2.8mm, max: 17mm). Conclusion: DIR and PCA can be used to optimize the orthogonal image planes of cine MRI to minimize the through-plane motion during radiotherapy guidance.
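The plane-selection step can be sketched with PCA on a target-centroid trajectory: the first two principal axes span an image plane that captures most of the motion, leaving the smallest-variance axis as through-plane motion. The ten-phase trajectory below is an invented example, not patient data.

```python
import numpy as np

# Hypothetical target-centroid positions over ten breathing phases (mm).
traj = np.array([[0.0, 0.0, 0.0], [0.5, 0.2, 1.2], [1.0, 0.4, 2.6],
                 [1.6, 0.5, 4.1], [2.0, 0.6, 5.4], [1.8, 0.6, 5.0],
                 [1.4, 0.5, 3.8], [0.9, 0.3, 2.4], [0.4, 0.1, 1.1],
                 [0.1, 0.0, 0.2]])

centered = traj - traj.mean(axis=0)
cov = np.cov(centered.T)                 # 3x3 covariance of the motion
evals, evecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]
components = evecs[:, order]             # columns: principal motion axes

# A cine-MRI plane containing the first two components minimizes
# through-plane motion, which then lies along the last (smallest) axis.
principal_axis = components[:, 0]
print(principal_axis)
```

For this trajectory the dominant axis is close to the longitudinal (z) direction, matching the intuition that breathing motion is largest superior-inferior.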
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time of repeated dose calculations for a number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet its source particle is from, and deposited doses are stored separately per beamlet based on the index. Due to limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow are developed. The high efficiency allows the use of MC for IMRT optimizations.
Configuration-based optimization for six degree-of-freedom haptic rendering for fine manipulation.
Dangxiao Wang; Xin Zhang; Yuru Zhang; Jing Xiao
2013-01-01
Six-degree-of-freedom (6-DOF) haptic rendering for fine manipulation in narrow space is a challenging topic because of frequent constraint changes caused by small tool movement and the requirement to preserve the feel of fine features of objects. In this paper, we introduce a configuration-based constrained optimization method for solving this rendering problem. We represent an object using a hierarchy of spheres, i.e., a sphere tree, which allows faster detection of multiple contacts/collisions among objects than a polygonal mesh and facilitates contact constraint formulation. Given a moving graphic tool as the avatar of the haptic tool in the virtual environment, we compute its quasi-static motion by solving a configuration-based optimization. The constraints in the 6D configuration space of the graphic tool are obtained and updated through online mapping of the nonpenetration constraints between the spheres of the graphic tool and those of the other objects in three-dimensional physical space, based on the result of collision detection. This problem is further modeled as a quadratic programming optimization and solved by the classic active-set method. Our algorithm has been implemented and interfaced with a 6-DOF Phantom Premium 3.0. We demonstrate its performance in several benchmarks involving complex, multiregion contacts. The experimental results show both the high efficiency and stability of haptic rendering by our method for complex scenarios. Nonpenetration between the graphic tool and the object is maintained under frequent contact switches. The update rate of the simulation loop, including optimization and constraint identification, is maintained at about 1 kHz.
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-11-25
In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.
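The bisection step the paper relies on can be sketched for a simplified version of the problem: finding the smallest transmit power that meets a rate constraint under a generic achievable-rate model. The log-rate model and all numbers below are illustrative stand-ins for the paper's LPI formulation.

```python
import math

def min_power_bisection(rate_req, gain, noise, p_max=100.0, tol=1e-9):
    """Bisection for the smallest power P satisfying the constraint
    log2(1 + gain * P / noise) >= rate_req. Works because the achievable
    rate is monotonically increasing in P."""
    rate = lambda p: math.log2(1.0 + gain * p / noise)
    lo, hi = 0.0, p_max
    if rate(hi) < rate_req:
        raise ValueError("constraint infeasible within p_max")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rate(mid) >= rate_req:     # feasible: tighten from above
            hi = mid
        else:                         # infeasible: raise the floor
            lo = mid
    return hi

p = min_power_bisection(rate_req=2.0, gain=0.5, noise=1.0)
print(p)   # ~6.0, since log2(1 + 0.5 * 6) = log2(4) = 2
```

The same monotonicity argument is what makes bisection suitable for the paper's constrained power-minimization problem.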
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
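The underlying sequential probability ratio test can be illustrated in its classic scalar form: Wald's SPRT between two Gaussian means, where the false alarm and missed detection risks set the decision boundaries directly. This is a textbook sketch, not the paper's filter-bank formulation.

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT between Gaussian hypotheses N(mu0, sigma) and
    N(mu1, sigma). alpha = false-alarm risk, beta = missed-detection risk.
    Returns 'H0', 'H1', or 'undecided' if the data run out first."""
    a = math.log(beta / (1.0 - alpha))        # accept-H0 boundary
    b = math.log((1.0 - beta) / alpha)        # accept-H1 boundary
    llr = 0.0
    for x in samples:
        # log-likelihood ratio increment log f1(x) - log f0(x)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr <= a:
            return 'H0'
        if llr >= b:
            return 'H1'
    return 'undecided'

# Deterministic toy measurements scattered around 1, the H1 mean.
data = [0.8, 1.2, 1.0, 0.9, 1.3, 1.1, 0.7, 1.0, 1.2, 0.9, 1.1, 1.0]
decision = sprt(data, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision)   # → 'H1'
```

The appeal for maneuver decisions is visible even here: the test stops as soon as the evidence crosses a risk-derived boundary rather than after a fixed sample size.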
Workshop on Computational Optimization
2016-01-01
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real-world problems such as parameter settings for controlling processes in bioreactors and other processes, resource-constrained project scheduling, infection distribution, molecule distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, and localization of an abrupt atmospheric contamination source. It shows how to develop algorithms for them based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.
Structural Optimization of Slender Robot Arm Based on Sensitivity Analysis
Directory of Open Access Journals (Sweden)
Zhong Luo
2012-01-01
Full Text Available An effective structural optimization method based on sensitivity analysis is proposed to optimize the variable section of a slender robot arm. The structure mechanism and operating principle of a polishing robot are introduced first, and its stiffness model is established. Then, a design sensitivity analysis method and a sequential linear programming (SLP) strategy are developed. At the beginning of the optimization, the design sensitivity analysis method is applied to select the sensitive design variables, which makes the optimized results more efficient and accurate. In addition, it can also be used to determine the scale of the moving step, which improves convergence during the optimization process. The design sensitivities are calculated using the finite difference method. The search for the final optimal structure is performed using the SLP method. Simulation results show that the proposed structure optimization method is effective in enhancing the stiffness of the robot arm regardless of whether the arm is subject to a constant force or variable forces.
Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.
Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao
2015-04-01
Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of gene data. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation abilities. The resulting discrete biogeography based optimization, termed DBBO, integrates the discrete migration and mutation models. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm, and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained.
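A binary BBO for feature selection, as described above, can be sketched with habitats as feature masks: better habitats emigrate bits to worse ones, followed by rare mutation. The migration/mutation details and the toy fitness (which rewards a known "informative" set) are illustrative assumptions, not the DBBO operators of the paper.

```python
import random

def dbbo(fitness, n_feat, pop=20, gens=60, pmut=0.02, seed=3):
    """Binary biogeography-based optimization sketch: each habitat is a
    0/1 feature mask; bits immigrate from fitter habitats, then mutate."""
    rng = random.Random(seed)
    H = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        H.sort(key=fitness, reverse=True)                 # best first
        mu = [(pop - i) / (pop + 1) for i in range(pop)]  # emigration rates
        lam = [1.0 - m for m in mu]                       # immigration rates
        new = [H[0][:]]                                   # elitism
        for i in range(1, pop):
            h = H[i][:]
            for d in range(n_feat):
                if rng.random() < lam[i]:                 # immigrate this bit
                    r = rng.uniform(0, sum(mu))           # roulette by mu
                    acc, src = 0.0, 0
                    for j, m in enumerate(mu):
                        acc += m
                        if acc >= r:
                            src = j
                            break
                    h[d] = H[src][d]
                if rng.random() < pmut:                   # mutation
                    h[d] = 1 - h[d]
            new.append(h)
        H = new
    return max(H, key=fitness)

# Toy fitness: features 0-4 are informative, extra features are penalized.
informative = {0, 1, 2, 3, 4}
def fit(mask):
    good = sum(1 for i, b in enumerate(mask) if b and i in informative)
    noise = sum(1 for i, b in enumerate(mask) if b and i not in informative)
    return good - 0.5 * noise

best = dbbo(fit, n_feat=15)
print(fit(best))   # climbs toward 5, the all-informative optimum
```

In the paper's pipeline, the fitness would instead be a classifier's cross-validated accuracy on the selected genes.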
Fatigue reliability based optimal design of planar compliant micropositioning stages
Wang, Qiliang; Zhang, Xianmin
2015-10-01
Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.
Directory of Open Access Journals (Sweden)
Xiangyun Liao
Full Text Available To overcome the severe intensity inhomogeneity and blurry boundaries in HIFU (high-intensity focused ultrasound) images, an accurate and efficient multi-scale and shape constrained localized region-based active contour model (MSLCV) was developed to accurately and efficiently segment the target region in HIFU ultrasound images of uterine fibroids. We incorporated a new shape constraint into the localized region-based active contour, which constrained the active contour to obtain the desired, accurate segmentation, avoiding boundary leakage and excessive contraction. Localized region-based active contour modeling is suitable for ultrasound images, but it still cannot acquire satisfactory segmentation for HIFU ultrasound images of uterine fibroids. We improved the localized region-based active contour model by incorporating a shape constraint into the region-based level set framework to increase segmentation accuracy. Some improvement measures were proposed to overcome the sensitivity of initialization, and a multi-scale segmentation method was proposed to improve segmentation efficiency. We also designed an adaptive localizing radius size selection function to acquire better segmentation results. Experimental results demonstrated that the MSLCV model was significantly more accurate and efficient than conventional methods. The MSLCV model has been quantitatively validated via experiments, obtaining an average of 0.94 for the DSC (Dice similarity coefficient) and 25.16 for the MSSD (mean sum of square distance). Moreover, by using the multi-scale segmentation method, the MSLCV model's average segmentation time was decreased to approximately 1/8 that of the localized region-based active contour model (the LCV model). An accurate and efficient multi-scale and shape constrained localized region-based active contour model was designed for the semi-automatic segmentation of uterine fibroid ultrasound (UFUS) images in HIFU therapy. Compared with other
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Directory of Open Access Journals (Sweden)
Mohammed Hasan Abdulameer
2014-01-01
Full Text Available Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented.
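The core AAPSO idea, acceleration coefficients derived from each particle's fitness rather than fixed constants, can be sketched on a toy objective. The specific adaptation formula below is an illustrative guess, not the paper's exact rule, and the sphere function stands in for the SVM cross-validation fitness.

```python
import random

def aapso(f, dim=2, n=15, iters=100, seed=5):
    """PSO whose acceleration coefficients come from each particle's
    relative fitness (adaptive acceleration) instead of fixed constants."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=f)[:]                       # global best
    for _ in range(iters):
        fits = [f(x) for x in X]
        worst, best = max(fits), min(fits)
        span = (worst - best) or 1.0
        for i in range(n):
            q = (worst - fits[i]) / span       # 1 = best particle, 0 = worst
            c1 = 0.5 + 1.5 * q                 # good particles trust themselves
            c2 = 0.5 + 1.5 * (1.0 - q)         # poor particles follow the best
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        g = min(P, key=f)[:]
    return g

sphere = lambda x: sum(v * v for v in x)
result = aapso(sphere)
print(sphere(result))   # small value near 0
```

In the paper's setting, each particle position would encode SVM hyperparameters (e.g. C and a kernel width), with recognition accuracy as the fitness.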
Optimizing medical data quality based on multiagent web service framework.
Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant
2012-07-01
One of the most important issues in e-healthcare information systems is to optimize the quality of medical data extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of medical data quality is proposed. The framework consists of two main components: first, an EA is used to dynamically optimize the composition of medical processes into an optimal task sequence according to specific quality attributes; second, a multiagent framework is proposed to discover, monitor, and report any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature is presented.
Reliability-Based Multidisciplinary Design Optimization under Correlated Uncertainties
Directory of Open Access Journals (Sweden)
Huanwei Xu
2017-01-01
Full Text Available A complex mechanical system is usually composed of several subsystems, which are often coupled with each other. Reliability-based multidisciplinary design optimization (RBMDO) is an efficient method to design such complex systems under uncertainties. However, present RBMDO methods ignore the correlations between uncertainties. In this paper, by combining the ellipsoidal set theory and the first-order reliability method (FORM) for multidisciplinary design optimization (MDO), characteristics of correlated uncertainties are investigated. Furthermore, to improve computational efficiency, the sequential optimization and reliability assessment (SORA) strategy is utilized to obtain the optimization result. Both a mathematical example and a case study of an engineering system are provided to illustrate the feasibility and validity of the proposed method.
A topology optimization based model of bone adaptation.
Rossi, Jean-Marie; Wendling-Mansuy, Sylvie
2007-12-01
A novel topology optimization model based on homogenization methods was developed for predicting bone density distribution and anisotropy, assuming the bone structure to be a self-optimizing biological material which maximizes its own structural stiffness. The feasibility and efficiency of this method were tested on a 2D model for a proximal femur under single and multiple loading conditions. The main aim was to compute homogenized optimal designs using an optimal laminated microstructure. The computational results showed that high bone density levels are distributed along the diaphysis and form arching struts within the femoral head. The pattern of bone density distribution and the anisotropic bone behavior predicted by the model in the multiple load case were both in good agreement with the structural architecture and bone density distribution occurring in natural femora. This approach provides a novel means of understanding the remodeling processes involved in fracture repair and the treatment of bone diseases.
Optimization and Design of Wideband Antenna Based on Q Factor
Directory of Open Access Journals (Sweden)
Han Liu
2015-01-01
Full Text Available A wideband antenna is designed based on the Q factor in this paper. First, the volume-surface integral equations (VSIEs) and a self-adaptive differential evolution algorithm (DEA) are introduced as the basic tools for optimizing antennas. Second, we study the computation of the Q of arbitrarily shaped structures, aiming at designing an antenna with maximum bandwidth by minimizing its Q. This method is much more efficient because only the Q values at specific frequency points are computed, which avoids optimizing the bandwidth directly. Third, an integrated method combining the above approach with the VSIEs and self-adaptive DEA is employed to optimize the wideband antenna, extending its bandwidth from 11.5~16.5 GHz to 7~20 GHz. Lastly, the optimized antenna is fabricated and measured. The measured results are consistent with the simulated results, demonstrating the feasibility and effectiveness of the proposed method.
Techniques for Component-Based Software Architecture Optimization
Directory of Open Access Journals (Sweden)
Adil Ali Abdelaziz
2012-06-01
Full Text Available Although component-based systems (CBS) increase the efficiency of development and reduce the need for maintenance, even good-quality components can fail to compose a good product if the composition is not managed appropriately. In real-world domains such as industrial automation, this probability is unacceptable because additional measures, time, effort, and cost are required to minimize its impact. Many general optimization approaches have been proposed in the literature to manage the composition of a system at an early stage of development. This paper investigates recent approaches used to optimize software architecture. The results of this study are important since they will be used to develop an efficient optimization framework for optimizing software architecture in the next step of our ongoing research.
Optimizing Orthogonal Multiple Access based on Quantized Channel State Information
Marques, Antonio G; Ramos, Javier
2009-01-01
The performance of systems where multiple users communicate over wireless fading links benefits from channel-adaptive allocation of the available resources. Different from most existing approaches that allocate resources based on perfect channel state information, this work optimizes channel scheduling along with per user rate and power loadings over orthogonal fading channels, when both terminals and scheduler rely on quantized channel state information. Channel-adaptive policies are designed to optimize an average transmit-performance criterion subject to average quality of service requirements. While the resultant optimal policy per fading realization shows that the individual rate and power loadings can be obtained separately for each user, the optimal scheduling is slightly more complicated. Specifically, per fading realization each channel is allocated either to a single (winner) user, or, to a small group of winner users whose percentage of shared resources is found by solving a linear program. A singl...
Teaching learning based optimization algorithm and its engineering applications
Rao, R Venkata
2016-01-01
Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
A systematic optimization for graphene-based supercapacitors
Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho
2017-08-01
Increasing the energy-storage density of supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing the mixing ratio of the components to maximize the performance of each candidate material have been insufficient, which hinders progress in the technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of the three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO: AB: PVDF = 0.95: 0.00: 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.
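The extreme-vertices (constrained mixture) design described above can be sketched as a grid search over the ternary composition simplex, keeping only points inside the component bounds. The bounds and the response surrogate below are illustrative placeholders, not the fitted model from the study:

```python
from itertools import product

def simplex_candidates(step=0.05, bounds=((0.5, 1.0), (0.0, 0.3), (0.05, 0.2))):
    """Lattice points on the 3-component mixture simplex (fractions sum to 1),
    filtered by per-component lower/upper bounds."""
    n = round(1 / step)
    pts = []
    for i, j in product(range(n + 1), repeat=2):
        if i + j <= n:
            x = (i * step, j * step, 1 - (i + j) * step)
            if all(lo - 1e-9 <= xi <= hi + 1e-9 for xi, (lo, hi) in zip(x, bounds)):
                pts.append(x)
    return pts

def response(x):
    """Hypothetical capacitance surrogate (NOT the study's fitted model)."""
    rgo, ab, pvdf = x
    return 200 * rgo + 80 * ab - 300 * pvdf * pvdf

# Pick the candidate composition with the best surrogate response.
best = max(simplex_candidates(), key=response)
```

Under this contrived surrogate the argmax lands at (rGO, AB, PVDF) = (0.95, 0.00, 0.05); with a real fitted model the same search structure applies.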
EUD-based biological optimization for carbon ion therapy
Energy Technology Data Exchange (ETDEWEB)
Brüningk, Sarah C., E-mail: sarah.brueningk@icr.ac.uk; Kamp, Florian; Wilkens, Jan J. [Department of Radiation Oncology, Technische Universität München, Klinikum rechts der Isar, Ismaninger Str. 22, München 81675, Germany and Physik-Department, Technische Universität München, James-Franck-Str. 1, Garching 85748 (Germany)
2015-11-15
Purpose: Treatment planning for carbon ion therapy requires accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable for carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and implement it as an example basis for biological treatment plan optimization in carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon
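The classical EUD concept mentioned above, a generalized mean over the dose (or effect) distribution, takes only a few lines; the voxel doses and the volume-effect parameter `a` below are illustrative:

```python
def geud(doses, volumes, a):
    """Generalized mean (gEUD) over a per-voxel dose/effect distribution.

    doses:   RBE-weighted dose (or effect) per voxel
    volumes: fractional volume per voxel (must sum to 1)
    a:       volume-effect parameter (a = 1 gives the mean dose;
             large a approaches the maximum dose, i.e. serial organs)
    """
    assert abs(sum(volumes) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# A uniform distribution reproduces its own dose; a hot spot raises gEUD
# above the mean for large a.
uniform = geud([2.0, 2.0, 2.0], [1/3, 1/3, 1/3], a=8)   # -> 2.0
hot_spot = geud([1.0, 1.0, 4.0], [1/3, 1/3, 1/3], a=8)  # > mean dose of 2.0
```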
Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization
Directory of Open Access Journals (Sweden)
Na Tian
2015-01-01
Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to store the nondominated solutions, from which the global best position is chosen. A comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
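The external repository of nondominated solutions described above rests on the standard Pareto-dominance test. A minimal sketch (minimization; the objective vectors are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Maintain an external repository of mutually nondominated solutions."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated; archive unchanged
    # drop members the candidate dominates, then admit it
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(3, 4), (2, 5), (1, 6), (2, 4), (5, 1)]:
    archive = update_archive(archive, point)
# (3, 4) and (2, 5) are dominated by (2, 4) and removed along the way.
```

In a full QPSO/MOPSO loop the global best position would then be drawn from this archive by an elitist strategy such as sigma value or preference order.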
White, Christopher J.; Stone, James M.; Gammie, Charles F.
2016-08-01
We present a new general relativistic magnetohydrodynamics (GRMHD) code integrated into the Athena++ framework. Improving upon the techniques used in most GRMHD codes, ours allows the use of advanced, less diffusive Riemann solvers, in particular HLLC and HLLD. We also employ a staggered-mesh constrained transport algorithm suited for curvilinear coordinate systems in order to maintain the divergence-free constraint of the magnetic field. Our code is designed to work with arbitrary stationary spacetimes in one, two, or three dimensions, and we demonstrate its reliability through a number of tests. We also report on its promising performance and scalability.
Adjoint-based airfoil shape optimization in transonic flow
Gramanzini, Joe-Ray
The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a Modified-NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using a RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study the effect of the parameterization and of the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total number of drag counts from 473.91 to 130.59. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease using BandAids.
Directory of Open Access Journals (Sweden)
Hediyeh Karimi
2013-01-01
Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for postsilicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption in order to examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
Sizing optimization of skeletal structures using teaching-learning based optimization
Directory of Open Access Journals (Sweden)
Vedat Toğan
2017-03-01
Full Text Available Teaching-Learning Based Optimization (TLBO) is one of the non-traditional techniques that simulate natural phenomena in a numerical algorithm. TLBO mimics the teaching-learning process occurring between a teacher and students in a classroom. A parameter named the teaching factor, TF, appears to be the only tuning parameter in TLBO. Although the value of the teaching factor TF is determined by an equation, researchers have typically used the value 1 or 2 for TF. This study explores the effect of varying the teaching factor TF on the performance of TLBO. This effect is demonstrated by solving structural optimization problems, including truss and frame structures, under stress and displacement constraints. The results indicate that varying TF in the TLBO process does not change the results obtained at the end of the optimization procedure when the computational cost of TLBO is ignored.
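A sketch of the TLBO teacher phase with an explicit teaching factor TF. This is a simplified illustration (one random scalar per learner rather than per dimension), with an assumed sphere test function, greedy replacement, and illustrative bounds and population size:

```python
import random

def teacher_phase(population, fitness, tf=2, lo=-5.0, hi=5.0):
    """One TLBO teacher phase (minimization). TF is the teaching factor
    studied above; the usual choices are 1 or 2."""
    dim = len(population[0])
    teacher = min(population, key=fitness)  # best learner acts as teacher
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for x in population:
        r = random.random()
        # move toward the teacher, away from TF times the class mean
        cand = [min(hi, max(lo, x[d] + r * (teacher[d] - tf * mean[d])))
                for d in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)  # greedy keep
    return new_pop

random.seed(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
init_best = min(map(sphere, pop))
for _ in range(50):
    pop = teacher_phase(pop, sphere)
best = min(map(sphere, pop))  # never worse than init_best (greedy replacement)
```

A full TLBO iteration would follow this with a learner phase, where students also learn from randomly paired classmates.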
Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis
Directory of Open Access Journals (Sweden)
Sen Zhang
2015-01-01
Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, called PGWO. PGWO significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique, so PGWO can be applied to clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions and then used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm.
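The GWO position update that PGWO builds on can be sketched as follows. This is a generic textbook-style implementation, not the authors' PGWO code (which additionally applies Powell's local search); all parameters are illustrative:

```python
import random

def gwo(fitness, dim=2, n_wolves=12, iters=80, lo=-5.0, hi=5.0, seed=1):
    """Minimal grey wolf optimizer sketch (minimization): each wolf moves
    toward an average of positions dictated by the three best wolves
    (alpha, beta, delta), with control parameter `a` shrinking from 2 to 0."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best_seen = min(wolves, key=fitness)
    for t in range(iters):
        ranked = sorted(wolves, key=fitness)
        if fitness(ranked[0]) < fitness(best_seen):
            best_seen = list(ranked[0])
        leaders = ranked[:3]                  # alpha, beta, delta
        a = 2.0 * (1 - t / iters)             # exploration -> exploitation
        for i in range(n_wolves):
            new = []
            for d in range(dim):
                est = 0.0
                for leader in leaders:
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[d] - wolves[i][d])  # distance to leader
                    est += leader[d] - A * D
                new.append(min(hi, max(lo, est / 3)))      # average of 3 pulls
            wolves[i] = new
    return best_seen

sphere = lambda x: sum(v * v for v in x)
best = gwo(sphere)
```

PGWO would periodically refine the best wolf with Powell's derivative-free line searches; the hybrid is omitted here.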
Zafar, Ammar
2013-12-01
This paper investigates the energy-efficiency enhancement of a variable-gain dual-hop amplify-and-forward (AF) relay network utilizing selective relaying. The objective is to minimize the total consumed power while keeping the end-to-end signal-to-noise-ratio (SNR) above a certain peak value and satisfying the peak power constraints at the source and relay nodes. To achieve this objective, an optimal relay selection and power allocation strategy is derived by solving the power minimization problem. Numerical results show that the derived optimal strategy enhances the energy-efficiency as compared to a benchmark scheme in which both the source and the selected relay transmit at peak power. © 2013 IEEE.
Structural eigenfrequency optimization based on local sub-domain "frequencies"
DEFF Research Database (Denmark)
Pedersen, Pauli; Pedersen, Niels Leergaard
2013-01-01
The engineering approach of fully stressed design is a practical tool with a theoretical foundation. The analog approach to structural eigenfrequency optimization is presented here with its theoretical foundation. A numerical redesign procedure is proposed and illustrated with examples. For the ideal case, an optimality criterion is fulfilled if the design has the same sub-domain "frequency" (local Rayleigh quotient). Sensitivity analysis shows an important relation between the squared system eigenfrequency and the squared local sub-domain frequency for a given eigenmode. Higher order eigenfrequencies may also be controlled in this manner. The presented examples are based on 2D finite element models with the use of subspace iteration for analysis and a recursive design procedure based on the derived optimality condition. The design that maximizes a frequency depends on the total amount...
Structural robust optimization design based on convex model
Directory of Open Access Journals (Sweden)
Xuyong Chen
Full Text Available A great number of uncertain factors exist in actual engineering. To include these factors in the analytical model, they are expressed as convex variables, and the convex model is further classified into the hyper-ellipsoidal model and the interval model. After pointing out the intrinsic difference between these two kinds of models, the principle for choosing between them in an analysis is indicated according to the available testing points. After standardizing the convex variables, the differences and relations between the two models in the optimization and solution process are presented. Drawing on the analysis approach of the hyper-ellipsoidal model, the basic method of robust optimization for the interval model is emphasized. After classifying the interval variables within the optimization process, the characteristics of the robust optimization are highlighted under different constraint conditions. Using a target-performance-based analytical scheme, the algorithm, the solution steps, and the convergence criteria for the robust optimization are presented with a single reliability index. Numerical examples and engineering problems are used to demonstrate the effectiveness and correctness of the proposed approach. Keywords: Robust optimization, Non-probabilistic reliability, Interval model, Hyper-ellipsoidal model, Probabilistic index
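For the interval model, robust feasibility is often checked against the worst case over the uncertainty box. A minimal sketch, assuming the constraint is monotone in each interval variable so that vertex enumeration suffices (the constraint function and intervals below are illustrative):

```python
from itertools import product

def worst_case(g, intervals):
    """Worst-case (maximum) value of constraint g over interval variables,
    evaluated at all vertices of the uncertainty box. Valid when g is
    monotone in each variable, a common assumption in interval-model design."""
    return max(g(v) for v in product(*intervals))

# Illustrative constraint g(x) = x1 + 2*x2 with x1 in [1, 2], x2 in [0.5, 1].
g = lambda v: v[0] + 2 * v[1]
wc = worst_case(g, [(1.0, 2.0), (0.5, 1.0)])  # attained at vertex (2, 1)
feasible = wc <= 5.0  # robust feasibility against an allowable limit of 5.0
```

A robust optimization loop would evaluate this worst case inside each constraint instead of the nominal value.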
ENERGY OPTIMIZATION IN CLUSTER BASED WIRELESS SENSOR NETWORKS
Directory of Open Access Journals (Sweden)
T. SHANKAR
2014-04-01
Full Text Available Wireless sensor networks (WSNs) are made up of sensor nodes that are usually battery-operated devices, so energy saving at the sensor nodes is a major design issue. To prolong the network lifetime, minimization of energy consumption should be implemented at all layers of the network protocol stack, from the physical to the application layer, including cross-layer optimization. Optimizing energy consumption is the main concern in designing and planning the operation of a WSN. Clustering is one of the techniques used to extend the lifetime of the network by applying data aggregation and balancing energy consumption among the sensor nodes. This paper proposes new versions of the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, called Advanced Optimized Low Energy Adaptive Clustering Hierarchy (AOLEACH), Optimal Deterministic Low Energy Adaptive Clustering Hierarchy (ODLEACH), and Varying Probability Distance Low Energy Adaptive Clustering Hierarchy (VPDL), combined with the Shuffled Frog Leap Algorithm (SFLA). These protocols select the best adaptive cluster heads using an improved threshold energy distribution compared to the LEACH protocol and rotate the cluster-head position for uniform energy dissipation based on energy levels. The proposed algorithms optimize the network lifetime by increasing the first node death (FND) time and the number of alive nodes.
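The threshold-based cluster-head election that the LEACH variants above modify follows the classic LEACH rule: a node that has not served as cluster head in the last 1/p rounds becomes CH when a uniform random draw falls below T(n). A sketch, with an illustrative election probability p:

```python
def leach_threshold(p, r, was_ch_recently):
    """Classic LEACH cluster-head threshold T(n).

    p: desired fraction of cluster heads per round
    r: current round number
    was_ch_recently: node already served as CH in the last 1/p rounds
    """
    if was_ch_recently:
        return 0.0  # ineligible until the epoch of 1/p rounds completes
    return p / (1 - p * (r % round(1 / p)))

# With p = 0.1, the threshold rises each round so that every eligible
# node is elected by round 9, when T(n) reaches 1.
t0 = leach_threshold(0.1, 0, False)   # -> 0.1
t9 = leach_threshold(0.1, 9, False)   # -> 1.0
```

The variants in the paper replace this energy-agnostic threshold with energy-level-aware distributions.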
Optimal Test Design with Rule-Based Item Generation
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
Optimal test design with rule-based item generation
Geerlings, Hanneke; van der Linden, Willem J.; Glas, Cornelis A.W.
2013-01-01
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the
Optimization based tuning approach for offset free MPC
DEFF Research Database (Denmark)
Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp
2012-01-01
We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system ide...
Economics-based optimal control of greenhouse tomato crop production
Tap, F.
2000-01-01
The design and testing of an optimal control algorithm, based on scientific models of greenhouse and tomato crop and an economic criterion (goal function), to control greenhouse climate, is described. An important characteristic of this control is that it aims at maximising an economic
Optimal Model-Based Control in HVAC Systems
DEFF Research Database (Denmark)
Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik
2008-01-01
This paper presents optimal model-based control of a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger (a rotary wheel heat recovery) and a water-to- air heat exchanger. First dynamic model of the HVAC system...
A Dynamic Programming Approach to Constrained Portfolios
DEFF Research Database (Denmark)
Kraft, Holger; Steffensen, Mogens
2013-01-01
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or consumption.
Directory of Open Access Journals (Sweden)
Yongpeng Shen
2016-02-01
Full Text Available Auxiliary power units (APUs) are widely used for electric power generation in various types of electric vehicles; improvements in the fuel economy and emissions of these vehicles depend directly on the operating point of the APU. To balance the conflicting goals of fuel consumption and emissions reduction in the choice of operating point, the APU operating point optimization problem is first formulated as a constrained multi-objective optimization problem (CMOP). The four competing objectives of this CMOP are fuel-electricity conversion cost, hydrocarbon (HC) emissions, carbon monoxide (CO) emissions, and nitric oxide (NOx) emissions. Then, the multi-objective particle swarm optimization (MOPSO) algorithm and the weighted metric decision-making method are employed to solve the APU operating point multi-objective optimization model. Finally, bench experiments under the New European Driving Cycle (NEDC), Federal Test Procedure (FTP), and Highway Fuel Economy Test (HWFET) driving cycles show that, compared with the results of the traditional single-objective fuel consumption optimization approach, the proposed multi-objective approach delivers significant improvements in emissions performance at the expense of a slight drop in fuel efficiency.
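The weighted metric decision-making step, which picks a single operating point from the Pareto front returned by MOPSO, can be sketched as a weighted Lp distance to the ideal point after min-max normalization. The trade-off points and weights below are illustrative, not the paper's data:

```python
def weighted_metric_choice(front, weights, p=2):
    """Select one solution from a Pareto front (all objectives minimized)
    by weighted Lp distance to the ideal point, after min-max normalization."""
    k = len(front[0])
    lo = [min(s[i] for s in front) for i in range(k)]
    hi = [max(s[i] for s in front) for i in range(k)]
    def dist(s):
        return sum(weights[i] * ((s[i] - lo[i]) / (hi[i] - lo[i] or 1.0)) ** p
                   for i in range(k)) ** (1.0 / p)
    return min(front, key=dist)

# Hypothetical (cost, NOx) trade-off points; equal weights favor the knee.
front = [(1.0, 9.0), (3.0, 3.0), (9.0, 1.0)]
choice = weighted_metric_choice(front, [0.5, 0.5])  # -> (3.0, 3.0)
```

Shifting the weights toward one objective moves the choice toward that objective's extreme point of the front.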
Pixel-based ant colony algorithm for source mask optimization
Kuo, Hung-Fei; Wu, Wei-Chen; Li, Frederick
2015-03-01
Source mask optimization (SMO) was considered one of the key resolution enhancement techniques for node technology below 20 nm prior to the availability of extreme-ultraviolet tools. SMO has been shown to enlarge the process margins for the critical layer in SRAM and memory cells. In this study, a new illumination shape optimization approach was developed on the basis of the ant colony optimization (ACO) principle. Using this heuristic pixel-based ACO method in the SMO process provides an advantage over the existing gradient-based SMO methods because of the rapid and stable searching capability of the proposed method, which does not rely on the gradient of the cost function. This study was conducted to provide lithographic engineers with references for the quick determination of the optimal illumination shape for complex mask patterns. The test pattern used in this study was a contact layer for an SRAM design, with a critical dimension and a minimum pitch of 55 and 110 nm, respectively. The optimized freeform source shape obtained using the ACO method was numerically verified by an aerial image investigation; the result showed that the optimized freeform source shape generated an aerial image profile that differed from the nominal image profile with an overall error rate of 9.64%. Furthermore, the overall average critical shape difference was determined to be 1.41, which was lower than that for the other off-axis illumination exposures. The process window results showed an improvement in exposure latitude (EL) and depth of focus (DOF) for the ACO-based freeform source shape compared with those of the Quasar source shape. The maximum EL of the ACO-based freeform source shape reached 7.4%, and the DOF was 56 nm at an EL of 5%.
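A generic pixel-based ACO loop of the kind described can be sketched with pheromone interpreted as a per-pixel on-probability. The toy cost function stands in for the aerial-image error model of the paper and is purely illustrative:

```python
import random

def pixel_aco(cost, n_pixels, n_ants=20, iters=40, rho=0.3, seed=7):
    """Sketch of pixel-based ACO for binary source maps: each pixel is on/off,
    ants sample maps from per-pixel pheromone, and the best map found so far
    reinforces its pixels. `cost` is supplied by the caller (in the paper it
    would be a lithographic aerial-image error)."""
    rng = random.Random(seed)
    tau = [0.5] * n_pixels                  # pheromone = P(pixel on)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        ants = [[1 if rng.random() < t else 0 for t in tau]
                for _ in range(n_ants)]
        leader = min(ants, key=cost)
        if cost(leader) < best_cost:
            best, best_cost = leader, cost(leader)
        # evaporation plus deposit toward the best-so-far map
        tau = [(1 - rho) * t + rho * b for t, b in zip(tau, best)]
    return best

# Toy cost: Hamming distance to a fixed target map.
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda m: sum(a != b for a, b in zip(m, target))
found = pixel_aco(cost, len(target))
```

Because the best-so-far map only improves, longer runs are never worse than shorter ones under the same seed.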
A constrained backpropagation approach for the adaptive solution of partial differential equations.
Rudd, Keith; Di Muro, Gianluca; Ferrari, Silvia
2014-03-01
This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.