Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The strategic bidding procedure has generally been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem becomes more complex with the inclusion of transmission constraints. This paper simplifies the profit maximisation problem into a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Sameer Suresh Nanivadekar
2018-03-01
Allocation of channel resources in a cognitive radio system to achieve minimized transmission energy at an increased transmission rate is a challenging research problem. This paper proposes a resource allocation algorithm based on the meta-heuristic search principle. The proposed algorithm is an improved version of the Group Search Optimizer (GSO), a recently developed optimization algorithm that works by imitating the searching behaviour of animals. The improvement is accomplished by introducing dynamics into the maximum pursuit angle of the GSO members. A cognitive radio system relying on Orthogonal Frequency Division Multiplexing (OFDM) for its operation is simulated, and the experiments are carried out for sub-channel allocation. The proposed algorithm is experimentally compared with five renowned optimization algorithms, namely conventional GSO, Particle Swarm Optimization, Genetic Algorithm, Firefly Algorithm and the Artificial Bee Colony algorithm. The obtained results assert the competing performance of the proposed algorithm over the other algorithms. Keywords: Cognitive radio, OFDM, Resource, Allocation, Optimization, GSO
Marolt, Klemen
2013-01-01
Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolve dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...
Optimization of partial search
Korepin, Vladimir E
2005-01-01
A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm. (letter to the editor)
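The full Grover search that this partial-search work builds on can be simulated classically for small databases, which makes the query count concrete. The sketch below is illustrative only (it is the standard algorithm, not the Grover-Radhakrishnan partial-search variant discussed in the paper); the database size and target index are arbitrary choices.

```python
import math

def grover_success_prob(n_items: int, target: int, n_queries: int) -> float:
    """Classically simulate Grover's algorithm on a database of n_items entries
    and return the probability of measuring the target after n_queries oracle calls."""
    # Start in the uniform superposition over all items.
    state = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(n_queries):
        state[target] = -state[target]           # oracle: flip the target amplitude's sign
        mean = sum(state) / n_items
        state = [2.0 * mean - a for a in state]  # diffusion: inversion about the mean
    return state[target] ** 2

# The optimal number of oracle queries is roughly (pi/4) * sqrt(N).
N = 1024
k = round(math.pi / 4 * math.sqrt(N))  # about 25 queries, versus ~N/2 classical lookups
print(k, grover_success_prob(N, target=7, n_queries=k))
```

Running more than the optimal number of queries overshoots and the success probability falls again, which is why the query count is the natural efficiency measure used in the abstract.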
Optimal intermittent search strategies
Rojo, F; Budde, C E; Wio, H S
2009-01-01
We study the search kinetics of a single fixed target by a set of searchers performing an intermittent random walk, jumping between different internal states. Exploiting concepts of multi-state and continuous-time random walks we have calculated the survival probability of a target up to time t, and have 'optimized' (minimized) it with regard to the transition probability among internal states. Our model shows that intermittent strategies always improve target detection, even for simple diffusion states of motion.
Davis, Harold
2006-01-01
SEO--short for Search Engine Optimization--is the art, craft, and science of driving web traffic to web sites. Web traffic is food, drink, and oxygen--in short, life itself--to any web-based business. Whether your web site depends on broad, general traffic, or high-quality, targeted traffic, this PDF has the tools and information you need to draw more traffic to your site. You'll learn how to effectively use PageRank (and Google itself); how to get listed, get links, and get syndicated; and much more. The field of SEO is expanding into all the possible ways of promoting web traffic. This
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.
Switching strategies to optimize search
Shlesinger, Michael F
2016-01-01
Search strategies are explored when the search time is fixed, success is probabilistic and the estimate for success can diminish with time if there is not a successful result. Under the time constraint the problem is to find the optimal time to switch a search strategy or search location. Several variables are taken into account, including cost, gain, rate of success if a target is present and the probability that a target is present. (paper: interdisciplinary statistical mechanics)
Optimal Labour Taxation and Search
Boone, J.; Bovenberg, A.L.
2000-01-01
This paper explores the optimal role of the tax system in alleviating labour-market imperfections, raising revenue, and correcting the income distribution. For this purpose, the standard search model of the labour market is extended by introducing non-linear vacancy costs due to scarce
Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms
2017-09-01
Naval Postgraduate School, Monterey, California. Thesis, reporting period to 09-22-2017. ...simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and
Automatic Planning of External Search Engine Optimization
Vita Jasevičiūtė
2015-07-01
This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
Group Counseling Optimization: A Novel Approach
Eita, M. A.; Fahmy, M. M.
A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
ROLE AND IMPORTANCE OF SEARCH ENGINE OPTIMIZATION
Gurneet Kaur
2017-01-01
Search Engines are an indispensable platform for users all over the globe to search for relevant information online. Search Engine Optimization (SEO) is the exercise of improving the position of a website in search engine rankings, for a chosen set of keywords. SEO is divided into two parts: On-Page and Off-Page SEO. In order to be successful, both areas require equal attention. This paper aims to explain the functioning of search engines along with the role and importance of search e...
I-SG : Interactive Search Grouping - Search result grouping using Independent Component Analysis
Lauritsen, Thomas; Kolenda, Thomas
2002-01-01
We present a computationally simple and efficient approach to unsupervised grouping of the search results from any search engine. Along with each group, a set of keywords is found to annotate the contents. This approach leads to an interactive search through a hierarchical structure that is built online....... It is the user's task to improve the search, through expanding the search query using the topic keywords representing the desired groups. In doing so the search engine limits the space of possible search results, virtually moving down in the search hierarchy, and so refines the search....
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
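The uninformed-versus-informed contrast (DFS versus A*) in the abstract can be reproduced on a toy grid maze. This is a minimal sketch, not the fungal algorithm or the micro-confined mazes of the paper; the 8x8 layout, wall placement, and the cells-expanded metric are illustrative assumptions.

```python
import heapq

def neighbors(cell, walls, n):
    r, c = cell
    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in walls:
            yield (nr, nc)

def dfs_expanded(start, goal, walls, n):
    """Uninformed depth-first search; returns cells expanded before reaching goal."""
    stack, seen, expanded = [start], {start}, 0
    while stack:
        cell = stack.pop()
        expanded += 1
        if cell == goal:
            return expanded
        for nb in neighbors(cell, walls, n):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return None  # goal unreachable

def astar_expanded(start, goal, walls, n):
    """Informed A* search with a Manhattan-distance heuristic; returns cells expanded."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    expanded = 0
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == goal:
            return expanded
        for nb in neighbors(cell, walls, n):
            if g + 1 < best.get(nb, float("inf")):
                best[nb] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nb), g + 1, nb))
    return None  # goal unreachable

# A small 8x8 grid with one wall segment (gaps at both ends keep it connected).
walls = {(3, c) for c in range(1, 7)}
print(dfs_expanded((0, 0), (7, 7), walls, 8), astar_expanded((0, 0), (7, 7), walls, 8))
```

On larger mazes the gap between uninformed and heuristic-guided expansion counts is what the abstract's DFS/A* comparison measures.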
Optimal Aide Security Information Search (OASIS)
Kapadia, Chetna
2005-01-01
The purpose of the Optimal AIDE Security Information Search (OASIS) effort was to investigate and prototype a tool that can assist the network security analyst in collecting useful information to defend the networks they manage...
An introduction to harmony search optimization method
Wang, Xiaolei; Zenger, Kai
2014-01-01
This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants and also their drawbacks. The authors show how weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche
Competing intelligent search agents in global optimization
Streltsov, S.; Vakili, P. [Boston Univ., MA (United States); Muchnik, I. [Rutgers Univ., Piscataway, NJ (United States)
1996-12-31
In this paper we present a new search methodology that we view as a development of the intelligent agent approach to the analysis of complex systems. The main idea is to treat the search process as a competition mechanism between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple and efficient on-average index allocation policies. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.
Optimal search behavior and classic foraging theory
Bartumeus, F; Catalan, J
2009-01-01
Random walk methods and diffusion theory pervaded the ecological sciences as methods to analyze and describe animal movement. Consequently, statistical physics was mostly seen as a toolbox rather than as a conceptual framework that could contribute to theory on evolutionary biology and ecology. However, the existence of mechanistic relationships and feedbacks between behavioral processes and statistical patterns of movement suggests that, beyond movement quantification, statistical physics may prove to be an adequate framework to understand animal behavior across scales from an ecological and evolutionary perspective. Recently developed random search theory has served to critically re-evaluate classic ecological questions on animal foraging. For instance, during the last few years, there has been a growing debate on whether search behavior can include traits that improve success by optimizing random (stochastic) searches. Here, we stress the need to bring together the general encounter problem within foraging theory, as a means of making progress in the biological understanding of random searching. By sketching the assumptions of optimal foraging theory (OFT) and by summarizing recent results on random search strategies, we pinpoint ways to extend classic OFT, and integrate the study of search strategies and its main results into the more general theory of optimal foraging.
Towards improving searches for optimal phylogenies.
Ford, Eric; St John, Katherine; Wheeler, Ward C
2015-01-01
Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree T is at least the parsimony score of the anchor trees plus the number of inferred changes between T and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion.
Efficient search by optimized intermittent random walks
Oshanin, Gleb; Lindenberg, Katja; Wio, Horacio S; Burlatsky, Sergei
2009-01-01
We study the kinetics for the search of an immobile target by randomly moving searchers that detect it only upon encounter. The searchers perform intermittent random walks on a one-dimensional lattice. Each searcher can step on a nearest neighbor site with probability α or go off lattice with probability 1 - α to move in a random direction until it lands back on the lattice at a fixed distance L away from the departure point. Considering α and L as optimization parameters, we seek to enhance the chances of successful detection by minimizing the probability P_N that the target remains undetected up to the maximal search time N. We show that even in this simple model, a number of very efficient search strategies can lead to a decrease of P_N by orders of magnitude upon appropriate choices of α and L. We demonstrate that, in general, such optimal intermittent strategies are much more efficient than Brownian searches and are as efficient as search algorithms based on random walks with heavy-tailed Cauchy jump-length distributions. In addition, such intermittent strategies appear to be more advantageous than Lévy-based ones in that they lead to more thorough exploration of visited regions in space and thus lend themselves to parallelization of the search processes.
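The model as described is simple enough to sketch as a Monte Carlo estimate of P_N. This is an illustrative reconstruction under stated assumptions (detection only upon landing on the target site, relocation direction chosen at random), not the authors' analytical treatment; the target position, trial count, and parameter values are arbitrary.

```python
import random

def undetected_prob(alpha, L, N, target=25, start=0, trials=2000, seed=1):
    """Monte Carlo estimate of P_N: the probability that the target is
    still undetected after N steps of the intermittent walk."""
    rng = random.Random(seed)
    missed = 0
    for _ in range(trials):
        x = start
        found = False
        for _ in range(N):
            if rng.random() < alpha:
                x += rng.choice((-1, 1))   # nearest-neighbour step on the lattice
            else:
                x += rng.choice((-L, L))   # off-lattice relocation by fixed distance L
            if x == target:                # detection only upon encounter
                found = True
                break
        if not found:
            missed += 1
    return missed / trials

# Pure Brownian search (alpha = 1) versus one intermittent strategy:
print(undetected_prob(1.0, 5, 500), undetected_prob(0.5, 5, 500))
```

Sweeping alpha and L in such a simulation is the numerical analogue of the optimization the abstract describes: the pair minimizing the estimated P_N is the optimal intermittent strategy for that search horizon.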
Optimal neighborhood indexing for protein similarity search.
Peterlongo, Pierre; Noé, Laurent; Lavenier, Dominique; Nguyen, Van Hoa; Kucherov, Gregory; Giraud, Mathieu
2008-12-16
Similarity inference, one of the main bioinformatics tasks, has to face an exponential growth of the biological data. A classical approach used to cope with this data flow involves heuristics with large seed indexes. In order to speed up this technique, the index can be enhanced by storing additional information to limit the number of random memory accesses. However, this improvement leads to a larger index that may become a bottleneck. In the case of protein similarity search, we propose to decrease the index size by reducing the amino acid alphabet. The paper presents two main contributions. First, we show that an optimal neighborhood indexing combining an alphabet reduction and a longer neighborhood leads to a reduction of 35% of the memory involved in the process, without sacrificing the quality of the results or the computational time. Second, our approach led us to develop a new kind of substitution score matrices and their associated e-value parameters. In contrast to usual matrices, these matrices are rectangular since they compare amino acid groups from different alphabets. We describe the method used for computing those matrices and we provide some typical examples that can be used in such comparisons. Supplementary data can be found on the website http://bioinfo.lifl.fr/reblosum. We propose a practical index size reduction of the neighborhood data that does not negatively affect the performance of large-scale search in protein sequences. Such an index can be used in any study involving large protein data. Moreover, rectangular substitution score matrices and their associated statistical parameters can have applications in any study involving an alphabet reduction.
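The core idea of combining alphabet reduction with k-mer indexing can be illustrated independently of the paper's actual matrices. The 20-to-8 letter grouping below is entirely hypothetical (chosen for illustration only, not the reduction used in the paper); it shows why a reduced alphabet shrinks any table keyed by k-mers.

```python
# A hypothetical 20 -> 8 letter reduction; each group key stands for its members.
GROUPS = {
    "A": "AGST", "C": "C", "D": "DENQ", "F": "FWY",
    "H": "H", "I": "ILMV", "K": "KR", "P": "P",
}
REDUCE = {aa: g for g, members in GROUPS.items() for aa in members}

def reduce_seq(seq: str) -> str:
    """Project a protein sequence onto the reduced alphabet."""
    return "".join(REDUCE[aa] for aa in seq)

def kmer_index(seq: str, k: int) -> dict:
    """Map each k-mer of the (reduced) sequence to the positions where it occurs."""
    index = {}
    for i in range(len(seq) - k + 1):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

seq = "MKVLADEQRILMW"
reduced = reduce_seq(seq)
idx = kmer_index(reduced, 3)
# A table keyed by all possible k-mers has 8**k rows instead of 20**k,
# e.g. 512 versus 8000 for k = 3, at the cost of a coarser match signal.
print(reduced, len(idx))
```

The coarser matches are exactly why the paper pairs the reduction with longer neighborhoods and rectangular substitution matrices that score a reduced letter against a full-alphabet letter.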
PWR loading pattern optimization using Harmony Search algorithm
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.
2013-01-01
Highlights: ► Numerical results reveal that the HS method is reliable. ► A great advantage of HS is a significant saving in computational cost. ► On average, the final band width of search fitness values is narrow. ► Our experiments show that the search approaches the optimal value quickly. - Abstract: In this paper a core reloading technique using Harmony Search (HS) is presented in the context of finding an optimal configuration of fuel assemblies (FA) in pressurized water reactors. To implement and evaluate the proposed technique, a Harmony Search along Nodal Expansion Code for 2-D geometry, HSNEC2D, is developed to obtain a nearly optimal arrangement of fuel assemblies in PWR cores. This code consists of two sections, comprising the Harmony Search algorithm and Nodal Expansion modules using fourth-degree flux expansion, which solve two-dimensional multi-group diffusion equations with one node per fuel assembly. Two optimization test problems are investigated to demonstrate the HS algorithm's capability of converging to a near-optimal loading pattern in the fuel management field and other subjects. The results, convergence rate and reliability of the method are quite promising and show that the HS algorithm performs very well and is comparable to other competitive algorithms such as the Genetic Algorithm and Particle Swarm Intelligence. Furthermore, implementation of the nodal expansion technique along with HS considerably reduces the computational time needed to process and analyse optimization in core fuel management problems.
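The classical Harmony Search loop named above is well documented and can be sketched on a toy objective. This is a generic HS sketch on the sphere function, not the reactor loading-pattern code; the parameter values (harmony memory size, HMCR, PAR, bandwidth) are illustrative assumptions.

```python
import random

def harmony_search(f, dim, bounds, iters=5000, hms=20, hmcr=0.9, par=0.3, bw=0.05, seed=0):
    """Minimal classical Harmony Search: keep a memory of candidate solutions
    ('harmonies') and improvise new ones by memory recall, pitch adjustment,
    or random re-initialisation; better improvisations replace the worst harmony."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # recall this variable from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment within the bandwidth
                    x += rng.uniform(-bw, bw)
            else:                              # play a completely random note
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        score = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:              # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best_i = min(range(hms), key=scores.__getitem__)
    return memory[best_i], scores[best_i]

sphere = lambda x: sum(v * v for v in x)       # toy objective standing in for core fitness
best, val = harmony_search(sphere, dim=5, bounds=(-5.0, 5.0))
print(val)
```

In a loading-pattern setting the continuous vector would be replaced by a discrete assembly arrangement and the objective by the coupled nodal-diffusion evaluation, but the memory-recall/replace-worst loop is the same.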
Optimizing literature search in systematic reviews
Aagaard, Thomas; Lund, Hans; Juhl, Carsten Bogh
2016-01-01
BACKGROUND: When conducting systematic reviews, it is essential to perform a comprehensive literature search to identify all published studies relevant to the specific research question. The Cochrane Collaboration's Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidelines...... of musculoskeletal disorders. METHODS: Data sources were systematic reviews published by the Cochrane Musculoskeletal Review Group, including at least five RCTs, reporting a search history, searching MEDLINE, EMBASE, CENTRAL, and adding reference- and hand-searching. Additional databases were deemed eligible...... if they indexed RCTs, were in English, and were used in more than three of the systematic reviews. Relative recall was calculated as the number of studies identified by the literature search divided by the number of eligible studies, i.e. the studies included in the individual systematic reviews. Finally, cumulative median......
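The relative-recall definition quoted in the methods amounts to a one-line computation; the numbers below are hypothetical, purely to make the ratio concrete.

```python
def relative_recall(identified_by_search: int, eligible_included: int) -> float:
    """Relative recall as defined above: studies retrieved by the literature
    search divided by the studies actually included in the review."""
    return identified_by_search / eligible_included

# Hypothetical review: the database search retrieves 18 of the 20 included RCTs;
# the remaining 2 were found only by reference- and hand-searching.
print(relative_recall(18, 20))  # 0.9
```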
Ant colony search algorithm for optimal reactive power optimization
Lenin K.
2006-01-01
The paper presents an Ant Colony Search Algorithm (ACSA) for optimal reactive power optimization and voltage control of power systems. ACSA is a new co-operative agents approach, inspired by observation of the behavior of real ant colonies, in particular ant trail formation and foraging methods. Hence, in the ACSA a set of co-operative agents called "ants" co-operates to find good solutions to the reactive power optimization problem. The ACSA applied to optimal reactive power optimization is evaluated on the standard IEEE 30, 57 and 191 (practical) test bus systems. The proposed approach is tested and compared to a genetic algorithm (GA) and an Adaptive Genetic Algorithm (AGA).
Differential harmony search algorithm to optimize PWRs loading pattern
Poursalehi, N., E-mail: npsalehi@yahoo.com [Engineering Department, Shahid Beheshti University, G.C, P.O.Box: 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A. [Engineering Department, Shahid Beheshti University, G.C, P.O.Box: 1983963113, Tehran (Iran, Islamic Republic of)
2013-04-15
Highlights: ► Exploiting the DHS algorithm in LP optimization reveals its flexibility, robustness and reliability. ► The upshot of our experiments with DHS is that the search approaches the optimal LP quickly. ► On average, the final band width of DHS fitness values is narrow relative to HS and GHS. -- Abstract: The objective of this work is to develop a core loading optimization technique using the differential harmony search algorithm, in the context of obtaining an optimal configuration of fuel assemblies in pressurized water reactors. To implement and evaluate the proposed technique, a differential harmony search nodal expansion package for 2-D geometry, DHSNEP-2D, is developed. The package includes two modules: in the first module differential harmony search (DHS) is implemented, and the nodal expansion code, which solves two-dimensional multi-group neutron diffusion equations using fourth-degree flux expansion with one node per fuel assembly, is in the second module. For evaluation of the DHS algorithm, classical harmony search (HS) and global-best harmony search (GHS) algorithms are also included in DHSNEP-2D in order to compare the outcomes of the techniques. For this purpose, two PWR test cases have been investigated to demonstrate the DHS algorithm's capability of obtaining a near-optimal loading pattern. Results show that the convergence rate and execution times of DHS are quite promising and that it is reliable for the fuel management operation. Moreover, numerical results show the good performance of DHS relative to other competitive algorithms such as the genetic algorithm (GA), classical harmony search (HS) and global-best harmony search (GHS) algorithms.
Greedy search for radial fuel optimization
Ortiz, J. J.; Castillo, J. A.; Pelta, D. A.
2008-01-01
In this work a greedy search algorithm is presented for the optimization of fuel cells in BWR reactors. As a first phase, a sensitivity study was made of the Local Power Peaking Factor (LPPF) of the cell as a function of the exchange of the contents of two fuel rods. In this way it could be established that when the rods to be exchanged do not contain gadolinium, only small changes take place in the value of the LPPF of the cell. This knowledge was later applied in the greedy search to optimize the fuel cell: exchanges of rods with gadolinium are taken as the global search mechanism, and exchanges of rods without gadolinium as the local search method. The work used a cell of 10x10 rods with 2 circular water channels at its center. From a given inventory of uranium enrichments and gadolinium concentrations, and a known enrichment distribution, the technique finds good solutions that minimize the LPPF while keeping the neutron multiplication factor within an appropriate range of values. The cells were placed in the lower part of the assemblies of a reload batch for an 18-month cycle. The LPPF values of the resulting cells are similar to or smaller than those of the original cell, with core behaviors also comparable to those obtained with the original cell. The evaluation of the cells was made with the transport code CASMO-IV, and the evaluation of the core was made by means of the core simulator SIMULATE-3. (Author)
A Direct Search Algorithm for Global Optimization
Enrique Baeyens
2016-06-01
A direct search algorithm is proposed for minimizing an arbitrary real-valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm's performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.
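As context for the simplex-based operations mentioned above, the classical simplex direct search (the Nelder-Mead method) can be sketched in a few lines. This is not the paper's algorithm, which adds a function transformation for global exploration on top of such operations; the coefficients below follow the standard reflection/expansion/contraction/shrink scheme, and the test function is an arbitrary choice.

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex direct search; returns the best vertex found."""
    n = len(x0)
    # Initial simplex: x0 plus n points offset along each coordinate axis.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Centroid of all vertices except the worst.
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        reflect = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(reflect) < f(best):
            # Reflection found a new best point: try expanding further out.
            expand = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        elif f(reflect) < f(simplex[-2]):
            simplex[-1] = reflect
        else:
            # Reflection failed: contract the worst vertex towards the centroid.
            contract = [(centroid[j] + worst[j]) / 2 for j in range(n)]
            if f(contract) < f(worst):
                simplex[-1] = contract
            else:  # shrink the whole simplex towards the best vertex
                simplex = [best] + [
                    [(v[j] + best[j]) / 2 for j in range(n)] for v in simplex[1:]
                ]
    return min(simplex, key=f)

rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
opt = nelder_mead(rosenbrock, [-1.0, 1.0], iters=500)
print(opt)
```

Plain Nelder-Mead only converges locally; the paper's transformation is what supplies the global exploration the abstract claims.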
Optimal intermittent search strategies: smelling the prey
Revelli, J A; Wio, H S; Rojo, F; Budde, C E
2010-01-01
We study the kinetics of the search of a single fixed target by a searcher/walker that performs an intermittent random walk, characterized by different states of motion. In addition, we assume that the walker has the ability to detect the scent left by the prey/target in its surroundings. Our results, in agreement with intuition, indicate that the prey's survival probability could be strongly reduced (increased) if the predator is attracted (or repelled) by the trace left by the prey. We have also found that, for a positive trace (the predator is guided towards the prey), increasing the inhomogeneity's size reduces the prey's survival probability, while the optimal value of α (the parameter that regulates intermittency) ceases to exist. The agreement between theory and numerical simulations is excellent.
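A minimal simulation of an intermittent searcher with a "scent" zone around the target can illustrate the setup; the ring geometry, switching rate, and bias value below are illustrative assumptions, not the paper's model:

```python
import random

def intermittent_search(L=100, p_switch=0.1, scent_radius=5, bias=0.8, seed=3):
    """Walker on a ring of L sites looks for a target at site 0.  It mixes a
    slow diffusive phase with fast ballistic relocations; inside the target's
    scent zone it steps toward the target with probability `bias`
    (an attractive trace, which should shorten the search)."""
    rng = random.Random(seed)
    pos, t = L // 2, 0
    ballistic, direction = False, 1
    while pos % L != 0:
        t += 1
        dist = min(pos % L, L - pos % L)
        if dist <= scent_radius:
            # attracted by the trace: biased step toward the target
            toward = -1 if (pos % L) <= L // 2 else 1
            pos += toward if rng.random() < bias else -toward
        elif ballistic:
            pos += direction                 # fast relocation phase
        else:
            pos += rng.choice((-1, 1))       # diffusive scanning phase
        if rng.random() < p_switch:          # intermittency: switch phases
            ballistic = not ballistic
            direction = rng.choice((-1, 1))
        if t > 10 ** 6:
            break
    return t

t_hit = intermittent_search()
```

Averaging `t_hit` over many seeds, and flipping the bias below 0.5 to model a repulsive trace, reproduces the qualitative effect the abstract describes: attraction to the trace reduces the target's survival time, repulsion increases it.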
Using heuristic search for optimizing maintenance plans
Mutanen, Teemu
2012-01-01
This work addresses the maintenance action selection process. Maintenance personnel need to evaluate maintenance actions and costs to keep machines in working condition. Groups of actions are evaluated together as maintenance plans. The maintenance plans, as output, tell the user which actions to take, if any, and which future actions should be prepared for. The heuristic search method is implemented as part of a general-use toolbox for the analysis of measurements from movable work machines. Impacts from a machine's usage restrictions and maintenance activities are analysed. The results show that, once put in a temporal perspective, the prioritized order of the actions is different and provides additional information to the user.
Optimal Path Determination for Flying Vehicle to Search an Object
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is controlling an air vehicle to search for an object, and optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object, focusing on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is central to optimal control design, and here it is chosen so that the air vehicle reaches the object as quickly as possible. The vehicle's axis reference uses the N-E-D (North-East-Down) coordinate system. The results of the paper are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; convexity guarantees the existence of the optimal control. Some simulations illustrate an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
Design search and optimization in aerospace engineering.
Keane, A J; Scanlan, J P
2007-10-15
In this paper, we take a design-led perspective on the use of computational tools in the aerospace sector. We briefly review the current state-of-the-art in design search and optimization (DSO) as applied to problems from aerospace engineering, focusing on those problems that make heavy use of computational fluid dynamics (CFD). This ranges over issues of representation, optimization problem formulation and computational modelling. We then follow this with a multi-objective, multi-disciplinary example of DSO applied to civil aircraft wing design, an area where this kind of approach is becoming essential for companies to maintain their competitive edge. Our example considers the structure and weight of a transonic civil transport wing, its aerodynamic performance at cruise speed and its manufacturing costs. The goals are low drag and cost while holding weight and structural performance at acceptable levels. The constraints and performance metrics are modelled by a linked series of analysis codes, the most expensive of which is a CFD analysis of the aerodynamics using an Euler code with coupled boundary layer model. Structural strength and weight are assessed using semi-empirical schemes based on typical airframe company practice. Costing is carried out using a newly developed generative approach based on a hierarchical decomposition of the key structural elements of a typical machined and bolted wing-box assembly. To carry out the DSO process in the face of multiple competing goals, a recently developed multi-objective probability of improvement formulation is invoked along with stochastic process response surface models (Krigs). This approach both mitigates the significant run times involved in CFD computation and also provides an elegant way of balancing competing goals while still allowing the deployment of the whole range of single objective optimizers commonly available to design teams.
Optimal taxation and welfare benefits with monitoring of job search
Boone, J.; Bovenberg, A.L.
2013-01-01
In order to investigate the interaction between tax policy, welfare benefits, the government technology for monitoring and sanctioning inadequate search, workfare, and externalities from work, we incorporate endogenous job search and involuntary unemployment into a model of optimal nonlinear income
Jianwen Guo
2016-01-01
Full Text Available All equipment must be maintained during its lifetime to ensure normal operation. Maintenance plays one of the critical roles in the success of manufacturing enterprises. This paper proposes a preventive maintenance period optimization model (PMPOM) to find an optimal preventive maintenance period. By making use of the advantages of particle swarm optimization (PSO) and the cuckoo search (CS) algorithm, a hybrid optimization algorithm combining PSO and CS is proposed to solve the PMPOM problem. Results on test functions show that the proposed algorithm exhibits more outstanding performance than particle swarm optimization and cuckoo search alone. Experiment results show that the proposed algorithm has the advantages of strong optimization ability and fast convergence speed for solving the PMPOM problem.
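A toy version of such a PSO/CS hybrid might look as follows. The update constants, and the Cauchy-tailed jump used as a stand-in for a cuckoo-search Levy flight, are illustrative choices, not the authors' formulation:

```python
import random, math

def hybrid_pso_cs(f, dim=2, n=20, iters=200, seed=7):
    """Toy hybrid: a standard PSO velocity/position update, plus an
    occasional heavy-tailed long jump from the global best (the
    cuckoo-search ingredient, approximated by a Cauchy draw)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                        # personal bests
    pbest = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest[i])    # global best index
    G, gbest = P[g][:], pbest[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
        # cuckoo-style heavy-tailed jump from the global best
        trial = [G[d] + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
                 for d in range(dim)]
        ft = f(trial)
        if ft < gbest:
            G, gbest = trial, ft
    return G, gbest

best, val = hybrid_pso_cs(lambda v: sum(x * x for x in v))
```

On the sphere function the swarm converges to the origin; the heavy-tailed jumps only help escape local minima on multimodal objectives, which is the motivation for the hybrid.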
Intelligent Search Optimization using Artificial Fuzzy Logics
Manral, Jai
2015-01-01
Information on the web is prodigious; searching for relevant information is difficult, making web users rely on search engines to find relevant information on the web. Search engines index and categorize web pages according to their contents using crawlers, and rank them accordingly. For a given user query they retrieve millions of web pages and display them to users according to web-page rank. Every search engine has its own algorithms, based on certain parameters, for ranking web pages. Searc...
[AWAKE CRANIOTOMY: IN SEARCH FOR OPTIMAL SEDATION].
Kulikova, A S; Sel'kov, D A; Kobyakov, G L; Shmigel'skiy, A V; Lubnin, A Yu
2015-01-01
Awake craniotomy is the "gold standard" for intraoperative brain language mapping. One of the main anesthetic challenges of awake craniotomy is providing optimal sedation for the initial stages of the intervention. The goal of this study was to compare different anesthesia techniques for awake craniotomy. Materials and methods: 162 operations were divided into 4 groups: 76 cases with propofol sedation (2-4 mg/kg/h) without airway protection; 11 cases with propofol sedation (4-5 mg/kg/h) with MV via LMA; 36 cases of xenon anesthesia; and 39 cases with dexmedetomidine sedation without airway protection. Results and discussion: brain language mapping was successful in 90% of cases, with no difference between the groups in the success of brain mapping. However, respiratory complications were more frequent in the first group; the three other techniques were safer. Xenon anesthesia was associated with ultrafast awakening for mapping (5±1 min). Dexmedetomidine sedation provided high hemodynamic and respiratory stability during the procedure.
An Innovative Approach for online Meta Search Engine Optimization
Manral, Jai; Hossain, Mohammed Alamgir
2015-01-01
This paper presents an approach to identify efficient techniques used in Web Search Engine Optimization (SEO). Understanding the SEO factors which can influence page ranking in a search engine is significant for webmasters who wish to attract a large number of users to their website. Different from previous relevant research, in this study we developed an intelligent Meta search engine which aggregates results from various search engines and ranks them based on several important SEO parameters. The r...
Optimal Taxation with On-the-Job Search
Bagger, Jesper; Moen, Espen R.; Vejlin, Rune Majlund
We study the optimal taxation of labor income in the presence of search frictions. Heterogeneous workers undertake costly search off- and on-the-job in order to locate more productive jobs that pay higher wages. More productive workers search harder, resulting in equilibrium sorting where low-type workers are overrepresented in low-wage jobs while high-type workers are overrepresented in high-wage jobs. Absent taxes, worker search effort is efficient, because the social and private gains from search coincide. The optimal tax system balances efficiency and equity concerns at the margin. Equity concerns make it desirable to levy low taxes on (or indeed, subsidize) low-wage jobs including unemployment, and levy high taxes on high-wage jobs. Efficiency concerns limit how much tax an optimal tax system levies on high-paid jobs, as high taxes distort the workers' incentives to search. The model...
Age grouping to optimize augmentation success.
Gordon, Robert W
2010-05-01
This article has described the different age groups that present for noninvasive injectable lip and perioral augmentation, as well as the breakdown of 3 subgroups that present within the 4 general age groups. With the fundamental understanding of these presenting groups and subgroups, the practicing augmenter will be able to better treatment plan and educate the patient on realistic and optimal aesthetic outcomes.
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
Abramson, Mark A; Audet, Charles; Dennis, Jr, J. E
2004-01-01
.... This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints...
Optimizing Event Selection with the Random Grid Search
Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge
2017-06-29
The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
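The core RGS idea, drawing candidate cut points from the data themselves and scoring each cut in a single pass, can be sketched on hypothetical one-dimensional samples; the significance proxy s/sqrt(s+b) and the Gaussian toy data are illustrative choices, not the Fermilab implementation:

```python
import random

def random_grid_search(signal, background, n_cuts=200, seed=5):
    """Random grid search in miniature: candidate cut points are sampled
    from the signal events themselves, and each cut (keep events with
    x > cut) is scored by the significance proxy s / sqrt(s + b)."""
    rng = random.Random(seed)
    best_cut, best_score = None, -1.0
    for cut in rng.sample(signal, min(n_cuts, len(signal))):
        s = sum(1 for x in signal if x > cut)
        b = sum(1 for x in background if x > cut)
        score = s / (s + b) ** 0.5 if s + b > 0 else 0.0
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut, best_score

# hypothetical samples: signal shifted to higher x than background
rng = random.Random(0)
signal = [rng.gauss(2.0, 1.0) for _ in range(1000)]
background = [rng.gauss(0.0, 1.0) for _ in range(1000)]
cut, score = random_grid_search(signal, background)
```

Sampling cut values from the signal events concentrates the candidate grid where signal actually lives, which is what makes RGS efficient compared with a uniform grid scan.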
Optimal Semi-Adaptive Search With False Targets
2017-12-01
...constraints on employment of physical search assets will involve discrete approximations to the continuous solutions given by these techniques... We optimize in the continuous case, to be able then to make the best possible discrete approximations if needed, given the constraints of a
Optimal Target Stars in the Search for Life
Lingam, Manasvi; Loeb, Abraham
2018-04-01
The selection of optimal targets in the search for life represents a highly important strategic issue. In this Letter, we evaluate the benefits of searching for life around a potentially habitable planet orbiting a star of arbitrary mass relative to a similar planet around a Sun-like star. If recent physical arguments implying that the habitability of planets orbiting low-mass stars is selectively suppressed are correct, we find that planets around solar-type stars may represent the optimal targets.
Searching for Cost-Optimized Interstellar Beacons
Benford, Gregory; Benford, James; Benford, Dominic
2010-06-01
What would SETI beacon transmitters be like if built by civilizations that had a variety of motives but cared about cost? In a companion paper, we presented how, for fixed power density in the far field, a cost-optimum interstellar beacon system could be built. Here, we consider how we should search for a beacon if it were produced by a civilization similar to ours. High-power transmitters could be built for a wide variety of motives other than the need for two-way communication; this would include beacons built to be seen over thousands of light-years. Extraterrestrial beacon builders would likely have to contend with economic pressures just as their terrestrial counterparts do. Cost, spectral lines near 1 GHz, and interstellar scintillation favor radiating frequencies substantially above the classic "water hole." Therefore, the transmission strategy for a distant, cost-conscious beacon would be a rapid scan of the galactic plane with the intent to cover the angular space. Such pulses would be infrequent events for the receiver. Such beacons built by distant, advanced, wealthy societies would have very different characteristics from what SETI researchers seek. Future searches should pay special attention to areas along the galactic disk where SETI searches have seen coherent signals that have not recurred on the limited listening time intervals we have used. We will need to wait for recurring events that may arrive in intermittent bursts. Several new SETI search strategies have emerged from these ideas. We propose a new test for beacons that is based on the Life Plane hypotheses.
Report of the 1997 LEP2 working group on 'searches'
Allanach, B.C.; Blair, G.A.; Diaz, M.A.
1997-08-01
A number of research program reports are presented from the LEP2 positron-electron collider in the area of searches for Higgs bosons, supersymmetry and supergravity. Working groups' reports cover prospective sensitivity of Higgs boson searches, radiative corrections to chargino production, charge and colour breaking minima in the Minimal Supersymmetric Standard Model, R-parity violation effects upon unification predictions, searches for new pair-produced particles, single sneutrino production and searches related to effects similar to HERA experiments. The final section of the report summarizes the LEP 2 searches, concentrating on gains from running at 200 GeV and alternative paradigms for supersymmetric phenomenology. (UK)
LETTER TO THE EDITOR: Optimization of partial search
Korepin, Vladimir E.
2005-11-01
A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
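The full (non-partial) Grover iteration that the partial-search algorithm builds on is easy to simulate classically, since the state stays in a real two-dimensional subspace; this sketch tracks the amplitude vector directly:

```python
import math

def grover_search(n_items, target, iterations=None):
    """Statevector simulation of Grover's algorithm on an unstructured
    database of n_items entries (no partial-search block structure here).
    Returns the probability of measuring the target item."""
    if iterations is None:
        # optimal iteration count ~ (pi/4) * sqrt(N)
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[target] = -amp[target]             # oracle query: flip target phase
        mean = sum(amp) / n_items              # diffusion: inversion about mean
        amp = [2 * mean - a for a in amp]
    return amp[target] ** 2

p = grover_search(64, target=17)   # ~6 queries for N = 64
```

The quadratic speedup is visible in the query count: 6 oracle calls suffice for a 64-item database, versus 32 expected classically; partial search trades some of this success probability for even fewer queries.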
The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...
Software for the grouped optimal aggregation technique
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix, based on historical acreages, provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
Tailoring group velocity by topology optimization
Stainko, Roman; Sigmund, Ole
2007-01-01
The paper describes a systematic method for the tailoring of dispersion properties of slab-based photonic crystal waveguides. The method is based on the topology optimization method, which consists in repeated finite element frequency domain analyses. The goal of the optimization process is to come up with slow-light, zero group velocity dispersion photonic waveguides, or photonic waveguides with tailored dispersion properties for dispersion compensation purposes. An example concerning the design of a wide bandwidth, constant low group velocity waveguide demonstrates the efficiency of the method.
Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History
Danping Wang
2017-01-01
Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays a competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
Optimizing the search for transiting planets in long time series
Ofir, Aviv
2014-01-01
Context. Transit surveys, both ground- and space-based, have already accumulated a large number of light curves that span several years. Aims: The search for transiting planets in these long time series is computationally intensive. We wish to optimize the search for both detection and computational efficiencies. Methods: We assume that the searched systems can be described well by Keplerian orbits. We then propagate the effects of different system parameters to the detection parameters. Results: We show that the frequency information content of the light curve is primarily determined by the duty cycle of the transit signal, and thus the optimal frequency sampling is found to be cubic and not linear. Further optimization is achieved by considering duty-cycle dependent binning of the phased light curve. By using the (standard) BLS, one is either fairly insensitive to long-period planets or less sensitive to short-period planets and computationally slower by a significant factor of ~330 (for a 3 yr long dataset). We also show how the physical system parameters, such as the host star's size and mass, directly affect transit detection. This understanding can then be used to optimize the search for every star individually. Conclusions: By considering Keplerian dynamics explicitly rather than implicitly one can optimally search the BLS parameter space. The presented Optimal BLS enhances the detectability of both very short and very long period planets, while allowing such searches to be done with much reduced resources and time. The Matlab/Octave source code for Optimal BLS is made available. The MATLAB code is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/561/A138
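One way to realize the "cubic" frequency sampling argued for above, assuming it means a grid uniform in the cube root of frequency (so the frequency step scales as f^(2/3), tracking the duty cycle of a Keplerian transit), is the following sketch; the band edges and point count are illustrative:

```python
def cubic_frequency_grid(f_min, f_max, n_points):
    """Frequency grid uniform in f**(1/3): the local step grows as f**(2/3),
    so long periods (low frequencies), whose transits have narrow duty
    cycles, get the finest sampling."""
    lo, hi = f_min ** (1 / 3), f_max ** (1 / 3)
    step = (hi - lo) / (n_points - 1)
    return [(lo + i * step) ** 3 for i in range(n_points)]

# e.g. periods from 1 to 1000 days, 500 trial frequencies
grid = cubic_frequency_grid(1 / 1000.0, 1.0, 500)
```

Compared with a linear grid over the same band, this concentrates trial frequencies at long periods without inflating the total grid size, which is the computational saving the abstract describes.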
Local beam angle optimization with linear programming and gradient search
Craft, David
2007-01-01
The optimization of beam angles in IMRT planning is still an open problem, with literature focusing on heuristic strategies and exhaustive searches on discrete angle grids. We show how a beam angle set can be locally refined in a continuous manner using gradient-based optimization in the beam angle space. The gradient is derived using linear programming duality theory. Applying this local search to 100 random initial angle sets of a phantom pancreatic case demonstrates the method, and highlights the many-local-minima aspect of the BAO problem. Due to this function structure, we recommend a search strategy of a thorough global search followed by local refinement at promising beam angle sets. Extensions to nonlinear IMRT formulations are discussed. (note)
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
Search optimization of named entities from twitter streams
Fazeel, K. Mohammed; Hassan Mottur, Simama; Norman, Jasmine; Mangayarkarasi, R.
2017-11-01
With an enormous number of tweets, people often face difficulty getting exact information about them. One approach to getting information about tweets is via Google, but no accurate tool has been developed for search optimization over tweets or for retrieving information about them. This system therefore provides search optimization and functionality for getting information about tweets. Another problem faced here is that tweets contain grammatical errors, misspellings, non-standard abbreviations, and meaningless capitalization; these problems can be eliminated by the use of this tool. Considerable time can be saved, and with efficient search optimization, information about particular tweets can be obtained.
Decoherence in optimized quantum random-walk search algorithm
Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun
2015-01-01
This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When the hypercube occurs with random broken links, the optimized quantum random-walk search algorithm with decoherence is depicted through defining the shift operator which includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulations and analysis when the algorithm is in the presence of decoherence. Then the computational complexity of the algorithm with decoherence is obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. (paper)
A Fuzzy Gravitational Search Algorithm to Design Optimal IIR Filters
Danilo Pelusi
2018-03-01
Full Text Available The goodness of Infinite Impulse Response (IIR) digital filter design depends on pass band ripple, stop band ripple and transition band values. The main problem is defining a suitable error fitness function that depends on these parameters. This fitness function can be optimized by search algorithms such as evolutionary algorithms. This paper proposes an intelligent algorithm for the design of optimal 8th order IIR filters. The main contribution is the design of Fuzzy Inference Systems able to tune key parameters of a revisited version of the Gravitational Search Algorithm (GSA). In this way, a Fuzzy Gravitational Search Algorithm (FGSA) is designed. The optimization performances of FGSA are compared with those of Differential Evolution (DE) and GSA. The results show that FGSA is the algorithm that gives the best compromise between goodness, robustness and convergence rate for the design of 8th order IIR filters. Moreover, FGSA assures a good stability of the designed filters.
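An error fitness of the kind described above (pass band deviation plus stop band leakage) can be evaluated directly from filter coefficients; the band edges, weighting, and the first-order low-pass used as a sanity check below are illustrative choices, not the paper's 8th-order designs:

```python
import cmath, math

def freq_response(b, a, w):
    """|H(e^{jw})| for an IIR filter with numerator b and denominator a,
    using the convention H = sum b_k e^{-jkw} / sum a_k e^{-jkw}."""
    z = cmath.exp(-1j * w)
    num = sum(bk * z ** k for k, bk in enumerate(b))
    den = sum(ak * z ** k for k, ak in enumerate(a))
    return abs(num / den)

def ripple_fitness(b, a, pass_edge, stop_edge, n=200):
    """Toy error fitness: mean deviation from unit gain in the pass band
    plus mean leakage above zero gain in the stop band."""
    err = 0.0
    for i in range(n):
        w = math.pi * i / (n - 1)
        h = freq_response(b, a, w)
        if w <= pass_edge:
            err += abs(h - 1.0)
        elif w >= stop_edge:
            err += h
    return err / n

# first-order low-pass (one pole), only a sanity check of the fitness
alpha = 0.2
b, a = [alpha], [1.0, -(1 - alpha)]
fit = ripple_fitness(b, a, pass_edge=0.2 * math.pi, stop_edge=0.6 * math.pi)
```

A search algorithm such as GSA would then propose coefficient vectors (b, a) and minimize this scalar, with stability of the denominator handled separately.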
Topology optimization based on the harmony search method
Lee, Seung-Min; Han, Seog-Young
2017-01-01
A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
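The HS loop itself is compact; a bare-bones sketch with the HMCR, PAR and BW parameters named above follows (the objective function and parameter values are illustrative, not the compliance objective of the paper):

```python
import random

def harmony_search(f, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, lo=-5.0, hi=5.0, seed=11):
    """Bare-bones harmony search: each new harmony draws every variable
    either from memory (rate HMCR, pitch-adjusted with rate PAR and
    bandwidth BW) or uniformly at random, and replaces the worst stored
    harmony whenever it improves on it."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = mem[rng.randrange(hms)][d]     # memory consideration
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)      # pitch adjustment
            else:
                v = rng.uniform(lo, hi)            # random selection
            new.append(v)
        fn = f(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if fn < cost[worst]:
            mem[worst], cost[worst] = new, fn
    best = min(range(hms), key=lambda i: cost[i])
    return mem[best], cost[best]

best, val = harmony_search(lambda v: sum(x * x for x in v))
```

In the topology-optimization setting the "harmonies" would be density vectors and f a compliance evaluation, but the memory/pitch mechanics are exactly these.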
Budget constraints and optimization in sponsored search auctions
Yang, Yanwu
2013-01-01
The Intelligent Systems Series publishes reference works and handbooks in three core sub-topic areas: Intelligent Automation, Intelligent Transportation Systems, and Intelligent Computing. They include theoretical studies, design methods, and real-world implementations and applications. The series' readership is broad, but focuses on engineering, electronics, and computer science. Budget constraints and optimization in sponsored search auctions considers the entire life cycle of campaigns, for researchers and developers working on search systems and ROI maximization.
Stochastic search in structural optimization - Genetic algorithms and simulated annealing
Hajela, Prabhat
1993-01-01
An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
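Of the two stochastic searches surveyed, simulated annealing is the quicker to sketch; the step size, geometric cooling schedule, and quadratic objective below are illustrative choices:

```python
import random, math

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, iters=3000, seed=13):
    """Textbook simulated annealing: Gaussian perturbations, always accept
    improvements, accept uphill moves with Boltzmann probability
    exp(-(delta)/T), and cool T geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, 0.1) for xi in x]
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                 # accept the move
            if fx < fbest:
                best, fbest = x[:], fx    # track the best point seen
        t *= cooling
    return best, fbest

best, val = simulated_annealing(lambda v: sum(x * x for x in v), [3.0, -4.0])
```

The uphill-acceptance probability is what gives the method the global-search character the abstract contrasts with mathematical programming: early on (high T) the walk can leave local basins, while late cooling freezes it into a good one.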
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Younes Saadi
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups' behavior to find the best lair and provide a new algorithm to be
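The normal/urgent state switch can be illustrated with a minimal sketch. The step sizes, the noise probability, and the use of Mantegna's algorithm to generate the heavy-tailed Lévy steps are assumptions for illustration, not details taken from the paper:

```python
import math, random

def levy_step(rng, alpha=1.5):
    """Heavy-tailed step via Mantegna's algorithm (approximate Lévy flight)."""
    sigma = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2) /
             (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    u = rng.gauss(0, sigma)
    v = abs(rng.gauss(0, 1)) or 1e-12
    return u / v ** (1 / alpha)

def ringed_seal_search(objective, bounds, pups=20, iterations=500,
                       noise_prob=0.25, seed=1):
    rng = random.Random(seed)
    lairs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pups)]
    best = min(lairs, key=objective)
    for _ in range(iterations):
        urgent = rng.random() < noise_prob   # predator noise triggers the switch
        for i, lair in enumerate(lairs):
            if urgent:   # urgent state: extensive search via a Lévy walk
                cand = [lair[d] + 0.1 * (hi - lo) * levy_step(rng)
                        for d, (lo, hi) in enumerate(bounds)]
            else:        # normal state: intensive search via small Brownian steps
                cand = [lair[d] + 0.01 * (hi - lo) * rng.gauss(0, 1)
                        for d, (lo, hi) in enumerate(bounds)]
            cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
            if objective(cand) < objective(lair):   # keep the better lair
                lairs[i] = cand
        best = min(lairs + [best], key=objective)
    return best

obj = lambda x: sum(v * v for v in x)
best = ringed_seal_search(obj, [(-5, 5)] * 2)
```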
Wolf Search Algorithm for Solving Optimal Reactive Power Dispatch Problem
Kanagasabai Lenin
2015-03-01
This paper presents a new bio-inspired heuristic optimization algorithm called the Wolf Search Algorithm (WSA) for solving the multi-objective reactive power dispatch problem. The Wolf Search Algorithm is a new bio-inspired heuristic algorithm based on wolf preying behaviour. The way wolves search for food and survive by avoiding their enemies has been imitated to formulate the algorithm for solving the reactive power dispatch problem. The speciality of the wolf is that it possesses both individual local searching ability and autonomous flocking movement, and this special property has been utilized to formulate the search algorithm. The proposed WSA has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the good performance of the proposed algorithm.
Optimization by GRASP greedy randomized adaptive search procedures
Resende, Mauricio G C
2016-01-01
This is the first book to cover GRASP (Greedy Randomized Adaptive Search Procedures), a metaheuristic that has enjoyed wide success in practice with a broad range of applications to real-world combinatorial optimization problems. The state-of-the-art coverage and carefully crafted pedagogical style make this book highly accessible as an introductory text not only to GRASP, but also to combinatorial optimization, greedy algorithms, local search, and path-relinking, as well as to heuristics and metaheuristics in general. The focus is on algorithmic and computational aspects of applied optimization with GRASP with emphasis given to the end-user, providing sufficient information on the broad spectrum of advances in applied optimization with GRASP. For the more advanced reader, chapters on hybridization with path-relinking and parallel and continuous GRASP present these topics in a clear and concise fashion. Additionally, the book offers a very complete annotated bibliography of GRASP and combinatorial optimizat...
Complicated problem solution techniques in optimal parameter searching
Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.
1992-01-01
An algorithm is presented for a global search for the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Boundedness of object characteristic changes is assumed for restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code. The programme was used in practice to solve various applied optimization problems. 10 refs.; 3 figs
Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li
2006-02-24
Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective is to test the ability of an Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in the review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched through a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000-6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with just the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large results from Boolean searches. A statistical search engine does not appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.
PR Students' Perceptions and Readiness for Using Search Engine Optimization
Moody, Mia; Bates, Elizabeth
2013-01-01
Enough evidence is available to support the idea that public relations professionals must possess search engine optimization (SEO) skills to assist clients in a full-service capacity; however, little research exists on how much college students know about the tactic and best practices for incorporating SEO into course curriculum. Furthermore, much…
A Competitive and Experiential Assignment in Search Engine Optimization Strategy
Clarke, Theresa B.; Clarke, Irvine, III
2014-01-01
Despite an increase in ad spending and demand for employees with expertise in search engine optimization (SEO), methods for teaching this important marketing strategy have received little coverage in the literature. Using Bloom's cognitive goals hierarchy as a framework, this experiential assignment provides a process for educators who may be new…
ARSTEC, Nonlinear Optimization Program Using Random Search Method
Rasmuson, D. M.; Marshall, N. H.
1979-01-01
1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
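The adaptive random search method described in step 2 (recentre the search region on every point that improves the objective) can be sketched roughly as follows; this is not the ARSTEC code, and the expansion/contraction factors and the sphere test function are illustrative assumptions:

```python
import random

def adaptive_random_search(objective, bounds, iterations=3000, seed=0):
    """Adaptive random search: sample around the incumbent point and
    recentre the search region on any point that improves the objective."""
    rng = random.Random(seed)
    centre = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_f = objective(centre)
    radius = [0.5 * (hi - lo) for lo, hi in bounds]
    for _ in range(iterations):
        cand = [min(max(c + rng.uniform(-r, r), lo), hi)
                for c, r, (lo, hi) in zip(centre, radius, bounds)]
        f = objective(cand)
        if f < best_f:                       # recentre on the improving point
            centre, best_f = cand, f
            radius = [r * 1.1 for r in radius]              # expand on success
        else:
            radius = [max(r * 0.99, 1e-6) for r in radius]  # contract slowly
    return centre, best_f

centre, best_f = adaptive_random_search(lambda x: sum(v * v for v in x),
                                        [(-5, 5)] * 3)
```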
Theory of Randomized Search Heuristics in Combinatorial Optimization
The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics includes well-known approaches such as Randomized Local Search (RLS), the Metr… … analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient. The tutorial will give an overview on the analysis of RSHs for solving…
A Hybrid Backtracking Search Optimization Algorithm with Differential Evolution
Lijin Wang
2015-01-01
The backtracking search optimization algorithm (BSA) is a new nature-inspired method which possesses a memory to take advantage of experiences gained from previous generations to guide the population to the global optimum. BSA is capable of solving multimodal problems, but it converges slowly and exploits solutions poorly. The differential evolution (DE) algorithm is a robust evolutionary algorithm with a fast convergence speed in the case of exploitive mutation strategies that utilize the information of the best solution found so far. In this paper, we propose a hybrid backtracking search optimization algorithm with differential evolution, called HBD. In HBD, DE with an exploitive strategy is used to accelerate convergence by optimizing one worse individual, according to its probability, at each iteration. A suite of 28 benchmark functions is employed to verify the performance of HBD, and the results show the improvement in effectiveness and efficiency of the hybridization of BSA and DE.
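The exploitive DE step described above (refining one worse individual using the best solution found so far, keeping the result only if it improves) might look roughly like this DE/best/1-style sketch; the crossover rate, scale factor, and the choice of the worst individual as the refinement target are assumptions, not the paper's exact HBD rule:

```python
import random

def de_best_1_step(population, fitness, objective, f_scale=0.5, cr=0.9, rng=None):
    """One exploitive refinement step: mutate the worst individual around the
    current best with a DE/best/1 strategy and apply greedy selection."""
    rng = rng or random.Random()
    n, dim = len(population), len(population[0])
    best = population[min(range(n), key=lambda i: fitness[i])]
    worst = max(range(n), key=lambda i: fitness[i])     # individual to refine
    r1, r2 = rng.sample([i for i in range(n) if i != worst], 2)
    j_rand = rng.randrange(dim)                         # force one mutated gene
    trial = []
    for j in range(dim):
        if rng.random() < cr or j == j_rand:            # binomial crossover
            trial.append(best[j] + f_scale * (population[r1][j] - population[r2][j]))
        else:
            trial.append(population[worst][j])
    f_trial = objective(trial)
    if f_trial < fitness[worst]:                        # greedy selection
        population[worst], fitness[worst] = trial, f_trial
    return population, fitness

rng = random.Random(3)
sphere = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
fit = [sphere(p) for p in pop]
start_min, start_max = min(fit), max(fit)
for _ in range(500):
    de_best_1_step(pop, fit, sphere, rng=rng)
```

Because the trial vector only ever replaces the worst individual when strictly better, both the best and the worst fitness in the population are non-increasing.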
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.
2013-01-01
Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • The SGHS optimization algorithm obtained, on average, better fitness than the basic HS and GHS algorithms. • The upshot of the SGHS implementation in the LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm has some modifications in comparison with the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as the dynamic change of parameters. To demonstrate the ability of SGHS to find an optimal configuration of fuel assemblies, basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms have also been developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average-current nodal expansion code which solves the multigroup diffusion equation by employing first and second orders of the Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per FA. Loading pattern optimization was performed using the SGHSNE package for some test cases to demonstrate the SGHS algorithm's capability of converging to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and that, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems
Optimal Quantum Spatial Search on Random Temporal Networks.
Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser
2017-12-01
To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdös-Rényi random graphs G(n,p), where p is the probability that any two given nodes are connected: After every time interval τ, a new graph G(n,p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(sqrt[n]), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs are sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.
Optimal Quantum Spatial Search on Random Temporal Networks
Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser
2017-12-01
To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdös-Rényi random graphs G(n,p), where p is the probability that any two given nodes are connected: After every time interval τ, a new graph G(n,p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs are sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.
A Cooperative Harmony Search Algorithm for Function Optimization
Gang Li
2014-01-01
Harmony search algorithm (HS) is a new metaheuristic algorithm which is inspired by a process involving musical improvisation. HS is a stochastic optimization technique that is similar to genetic algorithms (GAs) and particle swarm optimizers (PSOs). It has been widely applied to solve many complex optimization problems, including continuous and discrete problems, such as structure design and function optimization. A cooperative harmony search algorithm (CHS) is developed in this paper, with cooperative behavior being employed as a significant improvement to the performance of the original algorithm. Standard HS uses just one harmony memory, and all the variables of the objective function are improvised within that harmony memory, while the proposed CHS uses multiple harmony memories, so that each harmony memory can optimize different components of the solution vector. CHS was then applied to function optimization problems. The results of the experiment show that CHS is capable of finding better solutions when compared to HS and a number of other algorithms, especially in high-dimensional problems.
Artificial intelligence search techniques for optimization of the cold source geometry
Azmy, Y.Y.
1988-01-01
Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness which produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometrical shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape which is the unknown in such a study. We draw an analogy between this problem and a state space search, then we use a simple Artificial Intelligence (AI) search technique to determine the optimum cold source shape based on a two-group, r-z diffusion model. We implemented this AI design concept in the computer program AID which consists of two modules, a physical model module and a search module, which can be independently modified, improved, or made more sophisticated. 7 refs., 1 fig
Artificial intelligence search techniques for the optimization of cold source geometry
Azmy, Y.Y.
1988-01-01
Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness that produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometric shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape, which is the unknown in such a study. An analogy is drawn between this problem and a state space search, then a simple artificial intelligence (AI) search technique is used to determine the optimum cold source shape based on a two-group, r-z diffusion model. This AI design concept was implemented in the computer program AID, which consists of two modules, a physical model module, and a search module, which can be independently modified, improved, or made more sophisticated
Chan, Apple L.S.; Hanby, Vic I.; Chow, T.T.
2007-01-01
A district cooling system is a sustainable means of distribution of cooling energy through mass production. A cooling medium like chilled water is generated at a central refrigeration plant and supplied to serve a group of consumer buildings through a piping network. Because of the substantial capital investment involved, an optimal design of the distribution piping configuration is one of the crucial factors for successful implementation of the district cooling scheme. In the present study, genetic algorithm (GA) incorporated with local search techniques was developed to find the optimal/near optimal configuration of the piping network in a hypothetical site. The effect of local search, mutation rate and frequency of local search on the performance of the GA in terms of both solution quality and computation time were investigated and presented in this paper
Parallel Harmony Search Based Distributed Energy Resource Optimization
Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)
2015-01-01
This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
An Elite Decision Making Harmony Search Algorithm for Optimization Problem
Lipu Zhang
2012-01-01
This paper describes a new variant of the harmony search algorithm which is inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second best solutions is utilized to generate new solutions, following some probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until the near-optimal solution vector is obtained. Extensive computational comparisons are carried out using various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive in finding solutions with the state-of-the-art harmony search variants.
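The update rule sketched in the abstract (improvise each variable from the global best or second best solution, then replace the worst member only if the new vector is fitter) can be illustrated as follows; the selection probability and the small perturbation term are illustrative assumptions:

```python
import random

def elite_hs_step(solutions, fitness, objective, bounds, p_best=0.7, rng=None):
    """One 'elite decision making' improvisation: draw each variable from the
    best or second-best solution, then greedily replace the worst member."""
    rng = rng or random.Random()
    order = sorted(range(len(solutions)), key=lambda i: fitness[i])
    best, second = solutions[order[0]], solutions[order[1]]
    new = []
    for d, (lo, hi) in enumerate(bounds):
        value = best[d] if rng.random() < p_best else second[d]
        value += rng.uniform(-0.01, 0.01) * (hi - lo)   # small perturbation
        new.append(min(max(value, lo), hi))
    f = objective(new)
    worst = order[-1]
    if f < fitness[worst]:            # replace worst only if strictly fitter
        solutions[worst], fitness[worst] = new, f
    return solutions, fitness

rng = random.Random(7)
sphere = lambda x: sum(v * v for v in x)
bounds = [(-5, 5)] * 3
sols = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(8)]
fit = [sphere(s) for s in sols]
start_min, start_max = min(fit), max(fit)
for _ in range(300):
    elite_hs_step(sols, fit, sphere, bounds, rng=rng)
```

Since only the worst member is ever replaced, and only by a fitter vector, both the best and worst fitness values are non-increasing across iterations.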
Multilevel Thresholding Segmentation Based on Harmony Search Optimization
Diego Oliva
2013-01-01
In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Different from other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whereas their quality is evaluated considering the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
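The Otsu objective that such a thresholding metaheuristic maximises can be written down directly. This is the standard between-class variance criterion evaluated on a grey-level histogram for a vector of candidate thresholds, not code from the paper:

```python
def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for multilevel thresholding (Otsu's criterion):
    the objective a metaheuristic such as HSA maximises over threshold vectors."""
    total = sum(hist)
    probs = [h / total for h in hist]
    global_mean = sum(i * p for i, p in enumerate(probs))
    cuts = [0] + sorted(thresholds) + [len(hist)]
    variance = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = sum(probs[lo:hi])                 # class probability
        if w == 0:
            continue
        mu = sum(i * p for i, p in enumerate(probs[lo:hi], start=lo)) / w
        variance += w * (mu - global_mean) ** 2
    return variance

# A bimodal toy histogram: intensity clusters at 0-4 and 10-14.
hist = [10] * 5 + [0] * 5 + [10] * 5
good = otsu_between_class_variance(hist, [7])   # cut between the two modes
bad = otsu_between_class_variance(hist, [2])    # cut inside the first mode
```

For this histogram the well-placed threshold scores 25.0 (class means 2 and 12 around a global mean of 7), clearly above the poorly placed one.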
Tabu search, a versatile technique for the functions optimization
Castillo M, J.A.
2003-01-01
The basic elements of the Tabu search technique are presented, with emphasis on its advantages over traditional descent-based optimization methods. Some modifications that have been implemented in the technique over time to make it more robust are then sketched. Finally, some areas where this technique has been applied with successful results are described. (Author)
Optimal Search for an Astrophysical Gravitational-Wave Background
Rory Smith; Eric Thrane
2018-01-01
Roughly every 2–10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using M...
Optimal income taxation with endogenous participation and search unemployment
Lehmann, Etienne; Parmentier, Alexis; van der Linden, Bruno
2011-01-01
This paper characterizes the optimal redistributive taxation when individuals are heterogeneous in two exogenous dimensions: their skills and their values of non-market activities. Search-matching frictions on the labor markets create unemployment. Wages, labor demand and participation are endogenous. The government only observes wage levels. Under a Maximin objective, if the elasticity of participation decreases along the distribution of skills, at the optimum, the average tax rate is in...
ON range searching in the group model and combinatorial discrepancy
Larsen, Kasper Green
2014-01-01
In this paper we establish an intimate connection between dynamic range searching in the group model and combinatorial discrepancy. Our result states that, for a broad class of range searching data structures (including all known upper bounds), it must hold that $t_u t_q=\Omega(\mbox{disc}^2)$, where $t_u$ is the worst case update time, $t_q$ is the worst case query time, and disc is the combinatorial discrepancy of the range searching problem in question. This relation immediately implies a whole range of exceptionally high and near-tight lower bounds for all of the basic range searching problems. We list a few of them in the following: (1) For $d$-dimensional halfspace range searching, we get a lower bound of $t_u t_q=\Omega(n^{1-1/d})$. This comes within an lg lg $n$ factor of the best known upper bound. (2) For orthogonal range searching, we get a lower bound of $t_u t...
Optimal Search for an Astrophysical Gravitational-Wave Background
Smith, Rory; Thrane, Eric
2018-04-01
Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.
Optimal Search for an Astrophysical Gravitational-Wave Background
Rory Smith
2018-04-01
Roughly every 2–10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.
Gravitation search algorithm: Application to the optimal IIR filter design
Suman Kumar Saha
2014-01-01
This paper presents a global heuristic search optimization technique known as the Gravitation Search Algorithm (GSA) for the design of 8th order Infinite Impulse Response (IIR) low pass (LP), high pass (HP), band pass (BP) and band stop (BS) filters, considering various non-linear characteristics of the filter design problems. This paper also adopts a novel fitness function in order to improve the stop band attenuation to a great extent. In GSA, the law of gravity and mass interactions among different particles are adopted for handling the non-linear IIR filter design optimization problem. In this optimization technique, searcher agents are a collection of masses, and interactions among them are governed by Newtonian gravity and the laws of motion. The performance of the GSA-based IIR filter designs has proven to be superior to those obtained by the real coded genetic algorithm (RGA) and standard Particle Swarm Optimization (PSO). Extensive simulation results affirm that the proposed approach using GSA outperforms its counterparts not only in terms of quality output, i.e., sharpness at cut-off, smaller pass band ripple and higher stop band attenuation, but also in the fastest convergence speed with assured stability.
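The mass-and-gravity update at the heart of GSA can be sketched as follows; this is a compact, generic GSA on a toy function, with the gravity schedule and all parameter values as illustrative assumptions, not the paper's filter-design setup:

```python
import math, random

def gsa(objective, bounds, agents=20, iterations=300, g0=100.0, seed=0):
    """Minimal Gravitational Search Algorithm: fitness determines agent
    masses; pairwise Newtonian-style attractions give the accelerations."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best_pos, best_f = None, float("inf")
    for t in range(iterations):
        fit = [objective(p) for p in pos]
        for p, f in zip(pos, fit):
            if f < best_f:                           # track best-ever solution
                best_pos, best_f = list(p), f
        lo_f, hi_f = min(fit), max(fit)
        # Smaller objective value -> larger (normalised) mass.
        m = [(hi_f - f) / (hi_f - lo_f + 1e-12) for f in fit]
        total = sum(m) + 1e-12
        mass = [mi / total for mi in m]
        g = g0 * math.exp(-20.0 * t / iterations)    # decaying gravity constant
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    return best_pos, best_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = gsa(sphere, [(-5, 5)] * 2)
```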
A Hybrid Harmony Search Algorithm Approach for Optimal Power Flow
Mimoun YOUNES
2012-08-01
Optimal Power Flow (OPF) is one of the main functions of power system operation. It determines the optimal settings of generating units, bus voltages, transformer taps and shunt elements in the power system, with the objective of minimizing total production costs or losses while the system is operating within its security limits. The aim of this paper is to propose a novel methodology (BCGAs-HSA) that solves the OPF, including both active and reactive power dispatch. It is based on combining the binary-coded genetic algorithm (BCGAs) and the harmony search algorithm (HSA) to determine the optimal global solution. This method was tested on the modified IEEE 30-bus test system. The results obtained by this method are compared with those obtained with BCGAs or HSA separately. The results show that the BCGAs-HSA approach can converge to the optimum solution with accuracy compared to those reported recently in the literature.
Yunker, James
2003-01-01
In this report, a relatively new simulation optimization technique, the genetic search, is compared to two more established simulation techniques: the pattern search and the response surface methodology search...
Sampling optimization for printer characterization by direct search.
Bianco, Simone; Schettini, Raimondo
2012-12-01
Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
Genetic evolutionary taboo search for optimal marker placement in infrared patient setup
Riboldi, M; Baroni, G; Spadea, M F; Tagaste, B; Garibaldi, C; Cambria, R; Orecchia, R; Pedotti, A
2007-01-01
In infrared patient setup, adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients and compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration when TRE is minimized for organs at risk (OARs) were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2014-01-01
A new local search technique is proposed and used to improve the performance of particle swarm optimization (PSO) algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best position. After constructing the potential particle position, a local search is performed in its neighbourhood in comparison with the current swarm global best position. It then replaces the global best particle position if it is found to be better; otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants; two of the variants implement different local search techniques while the other two do not. Results show that the improved algorithms could obtain better quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants. PMID:24723827
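The candidate construction described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' exact scheme: the starting point for the candidate (a copy of the global best) and the single-probe neighbourhood search are assumptions, since the abstract does not fully specify them.

```python
import random

def construct_candidate(pbests, gbest, n_selections, rng=random.Random(0)):
    """Collectively construct a candidate position: each randomly selected
    particle donates its personal-best value at a random dimension."""
    dim = len(gbest)
    candidate = list(gbest)                  # assumed starting point
    for _ in range(n_selections):            # count varies with dimension
        donor = rng.choice(pbests)           # randomly selected particle
        d = rng.randrange(dim)               # its randomly selected dimension
        candidate[d] = donor[d]              # donate pbest value at that dim
    return candidate

def local_search(candidate, gbest, objective, step=0.1, rng=random.Random(1)):
    """Probe the candidate's neighbourhood; replace gbest only if better."""
    probe = [x + rng.uniform(-step, step) for x in candidate]
    best = min((candidate, probe), key=objective)
    return best if objective(best) < objective(gbest) else gbest

sphere = lambda x: sum(v * v for v in x)
pbests = [[1.0, 2.0], [0.5, -0.5], [0.1, 0.2]]
gbest = [0.4, 0.6]
new_gbest = local_search(construct_candidate(pbests, gbest, 3), gbest, sphere)
print(sphere(new_gbest) <= sphere(gbest))  # → True (never worse than old gbest)
```

The greedy replacement at the end guarantees the swarm's global best is never degraded by the local search.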
Optimizing searches for electromagnetic counterparts of gravitational wave triggers
Coughlin, Michael W.; Tao, Duo; Chan, Man Leong; Chatterjee, Deep; Christensen, Nelson; Ghosh, Shaon; Greco, Giuseppe; Hu, Yiming; Kapadia, Shasvath; Rana, Javed; Salafia, Om Sharan; Stubbs, Christopher
2018-04-01
With the detection of a binary neutron star system and its corresponding electromagnetic counterparts, a new window of transient astronomy has opened. Due to the size of the sky localization regions, which can span hundreds to thousands of square degrees, there are significant benefits to optimizing tilings for these large sky areas. The rich science promised by gravitational-wave astronomy has led to the proposal of a variety of tiling and time allocation schemes, and for the first time, we make a systematic comparison of some of these methods. We find that differences of a factor of 2 or more in efficiency are possible, depending on the algorithm employed. For this reason, with future surveys searching for electromagnetic counterparts, care should be taken when selecting tiling, time allocation, and scheduling algorithms to optimize counterpart detection.
Novel Back Propagation Optimization by Cuckoo Search Algorithm
Jiao-hong Yi
2014-01-01
The traditional Back Propagation (BP) algorithm has some significant disadvantages, such as slow training, a tendency to fall into local minima, and sensitivity to the initial weights and biases. In order to overcome these shortcomings, an improved BP network that is optimized by Cuckoo Search (CS), called CSBP, is proposed in this paper. In CSBP, CS is used to simultaneously optimize the initial weights and biases of the BP network. The wine dataset is used to study the prediction performance of CSBP, and the proposed method is compared with the basic BP and the General Regression Neural Network (GRNN). Moreover, a parameter study of CSBP is conducted in order to determine the best-performing configuration.
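A minimal cuckoo search loop of the kind CSBP builds on might look like the sketch below. This is standard CS in simplified form, not necessarily the paper's exact variant; the Lévy flight is approximated by a Gaussian move biased toward the best nest, and the quadratic `loss` is a stand-in for the BP network's training error over its flattened initial weights and biases.

```python
import random

def cuckoo_search(loss, dim, n_nests=15, iters=300, pa=0.25, seed=0):
    """Minimal cuckoo search sketch. For CSBP, `loss` would be the BP
    network's training error as a function of its initial weights/biases."""
    rng = random.Random(seed)
    rand_nest = lambda: [rng.uniform(-1, 1) for _ in range(dim)]
    nests = [rand_nest() for _ in range(n_nests)]
    best = min(nests, key=loss)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # simplified Levy-like move biased toward the current best nest
            new = [x + 0.05 * rng.gauss(0, 1) + 0.5 * rng.random() * (b - x)
                   for x, b in zip(nest, best)]
            if loss(new) < loss(nest):          # greedy replacement
                nests[i] = new
        nests.sort(key=loss)                    # abandon worst pa fraction
        for i in range(int(pa * n_nests)):
            nests[-(i + 1)] = rand_nest()
        best = min(nests + [best], key=loss)
    return best

# stand-in objective: distance of the "weights" from a target vector
loss = lambda w: sum((x - 0.3) ** 2 for x in w)
w0 = cuckoo_search(loss, dim=4)
print("optimized loss:", loss(w0))
```

Because the best nest is tracked separately and only ever replaced by a better solution, the returned loss is monotonically non-increasing over iterations.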
Adaptive symbiotic organisms search (SOS) algorithm for structural design optimization
Ghanshyam G. Tejani
2016-07-01
The symbiotic organisms search (SOS) algorithm is an effective metaheuristic developed in 2014, which mimics the symbiotic relationships among living beings, such as mutualism, commensalism, and parasitism, that help them survive in an ecosystem. In this study, three modified versions of the SOS algorithm are proposed by introducing adaptive benefit factors into the basic SOS algorithm to improve its efficiency. The basic SOS algorithm considers only fixed benefit factors, whereas the proposed variants consider effective combinations of adaptive and fixed benefit factors to study their ability to strike a good balance between exploration and exploitation of the search space. The proposed algorithms are tested on engineering structures subjected to dynamic excitation, which may lead to undesirable vibrations. Structure optimization problems become more challenging if the shape and size variables are taken into account along with the frequency. To check the feasibility and effectiveness of the proposed algorithms, six different planar and space trusses are analyzed. The results obtained using the proposed methods are compared with those obtained using other optimization methods well established in the literature. The results reveal that the adaptive SOS algorithm is more reliable and efficient than the basic SOS algorithm and other state-of-the-art algorithms.
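The mutualism phase, where the benefit factors enter, can be sketched as below. The adaptive rule shown (scaling the benefit factor by relative fitness so organisms already near the best make gentler moves) is an illustrative assumption; the paper proposes its own adaptive-benefit-factor variants.

```python
import random

def mutualism(x_i, x_j, x_best, f, adaptive=True, rng=random.Random(0)):
    """One SOS mutualism interaction (sketch, minimization)."""
    fi, fj, fb = f(x_i), f(x_j), f(x_best)
    if adaptive:
        # adaptive benefit factors: an illustrative rule, not the paper's exact one
        bf1 = 1 + min(1.0, abs(fb) / (abs(fi) + 1e-12))
        bf2 = 1 + min(1.0, abs(fb) / (abs(fj) + 1e-12))
    else:
        bf1, bf2 = rng.choice((1, 2)), rng.choice((1, 2))  # basic SOS
    mutual = [(a + b) / 2 for a, b in zip(x_i, x_j)]       # mutual vector
    new_i = [a + rng.random() * (g - m * bf1)
             for a, m, g in zip(x_i, mutual, x_best)]
    new_j = [b + rng.random() * (g - m * bf2)
             for b, m, g in zip(x_j, mutual, x_best)]
    # greedy selection: keep the better of old and new organisms
    return (new_i if f(new_i) < fi else x_i,
            new_j if f(new_j) < fj else x_j)

sphere = lambda x: sum(v * v for v in x)
a, b, best = [2.0, -1.0], [1.5, 0.5], [0.2, 0.1]
na, nb = mutualism(a, b, best, sphere)
print(sphere(na) <= sphere(a), sphere(nb) <= sphere(b))  # → True True
```

The greedy selection step guarantees neither organism is ever made worse, so the phase is safe to apply repeatedly.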
Optimizing EDELWEISS detectors for low-mass WIMP searches
Arnaud, Q.; Armengaud, E.; Augier, C.; Benoît, A.; Bergé, L.; Billard, J.; Broniatowski, A.; Camus, P.; Cazes, A.; Chapellier, M.; Charlieux, F.; de Jésus, M.; Dumoulin, L.; Eitel, K.; Foerster, N.; Gascon, J.; Giuliani, A.; Gros, M.; Hehn, L.; Jin, Y.; Juillard, A.; Kleifges, M.; Kozlov, V.; Kraus, H.; Kudryavtsev, V. A.; Le-Sueur, H.; Maisonobe, R.; Marnieros, S.; Navick, X.-F.; Nones, C.; Olivieri, E.; Pari, P.; Paul, B.; Poda, D.; Queguiner, E.; Rozov, S.; Sanglard, V.; Scorza, S.; Siebenborn, B.; Vagneron, L.; Weber, M.; Yakushev, E.; EDELWEISS Collaboration
2018-01-01
The physics potential of EDELWEISS detectors for the search of low-mass weakly interacting massive particles (WIMPs) is studied. Using a data-driven background model, projected exclusion limits are computed using frequentist and multivariate analysis approaches, namely profile likelihood and boosted decision trees. Both current and achievable experimental performances are considered. The optimal strategy for detector optimization depends critically on whether the emphasis is put on WIMP masses below or above ~5 GeV/c². The projected sensitivity for the next phase of the EDELWEISS-III experiment at the Modane Underground Laboratory (LSM) for low-mass WIMP search is presented. By 2018 an upper limit on the spin-independent WIMP-nucleon cross section of σ_SI = 7 × 10⁻⁴² cm² is expected for a WIMP mass in the range 2-5 GeV/c². The requirements for a future hundred-kilogram-scale experiment designed to reach the bounds imposed by the coherent scattering of solar neutrinos are also described. By improving the ionization resolution down to 50 eVee, we show that such an experiment installed in an even lower background environment (e.g., at SNOLAB), together with an exposure of 1,000 kg·yr, should allow us to observe about 80 ⁸B solar neutrino events after discrimination.
A DE-Based Scatter Search for Global Optimization Problems
Kun Li
2015-01-01
This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems by incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature and have shown different search abilities on different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator that is most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive or superior to some state-of-the-art algorithms in the literature.
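The four traditional DE mutation operators and an adaptive selection mechanism can be sketched as follows. The success-proportional roulette used for credit assignment is a common choice in adaptive operator selection and is an assumption here; the paper's exact mechanism may differ.

```python
import random

rng = random.Random(0)
sphere = lambda v: sum(x * x for x in v)

def mutate(name, x, pop, best, F=0.5):
    """The four classic DE mutation operators adopted in SSDE."""
    r = lambda: rng.choice(pop)
    a, b, c, d, e = r(), r(), r(), r(), r()
    ops = {
        "rand/1": [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)],
        "best/1": [gi + F * (ai - bi) for gi, ai, bi in zip(best, a, b)],
        "cur-to-best/1": [xi + F * (gi - xi) + F * (ai - bi)
                          for xi, gi, ai, bi in zip(x, best, a, b)],
        "rand/2": [ai + F * (bi - ci + di - ei)
                   for ai, bi, ci, di, ei in zip(a, b, c, d, e)],
    }
    return ops[name]

# adaptive operator selection via success-proportional roulette (assumed rule)
success = {op: 1 for op in ("rand/1", "best/1", "cur-to-best/1", "rand/2")}
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
best = min(pop, key=sphere)
for _ in range(200):
    weights = [success[op] for op in success]
    op = rng.choices(list(success), weights)[0]
    i = rng.randrange(len(pop))
    trial = mutate(op, pop[i], pop, best)
    if sphere(trial) < sphere(pop[i]):   # greedy replacement into the pool
        pop[i] = trial
        success[op] += 1                 # credit the successful operator
        best = min(best, trial, key=sphere)
print(sphere(best) <= min(map(sphere, pop)))  # → True
```

Operators that generate improving trials accumulate credit and are sampled more often, which is the essence of the adaptive mechanism described in the abstract.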
A search for southern ultracool dwarfs in young moving groups
Deacon N.R.
2011-07-01
We have constructed an 800-strong red object catalogue by cross-referencing optical and infrared catalogues with an extensive proper motion catalogue compiled for red objects in the southern sky to obtain proper motions. We have applied astrometric and photometric constraints to the catalogue in order to select ultracool dwarf moving group candidates; 132 objects were found to be candidates of a moving group. From this candidate list we present initial results. Using spectroscopy we have obtained reliable spectral types and space motions, and by association with moving groups we can infer an age and composition. The further study of the remainder of our candidates will provide a large sample of young brown dwarfs, and confirmed members will provide benchmark ultracool dwarfs. These will make suitable targets for AO planet searches.
A. P. Karpenko
2014-01-01
We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of one such algorithm alone may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite various algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, a software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present results of computing experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization
Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.
2016-01-01
Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing Keff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been applied to optimize the reactor core loading pattern. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions. It uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, as a first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm is applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (Keff), decreasing the power peaking factor (PPF), and flattening the power density. Notably, for the neutronic calculation, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
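One GSA iteration in textbook form is sketched below: masses are assigned from fitness, pairwise gravitational attractions are summed, and agents move accordingly. This is a simplified sketch (the velocity memory term of full GSA is folded into the displacement for brevity), and the sphere objective is a stand-in for the core-simulator objectives described in the abstract.

```python
import math, random

def gsa_step(pop, fit, G, rng):
    """One simplified Gravitational Search Algorithm iteration (minimization)."""
    fits = [fit(x) for x in pop]
    best, worst = min(fits), max(fits)
    m = [(worst - f) / (worst - best + 1e-12) for f in fits]   # raw masses
    M = [mi / (sum(m) + 1e-12) for mi in m]                    # normalized
    new = []
    for i, xi in enumerate(pop):
        acc = [0.0] * len(xi)
        for j, xj in enumerate(pop):
            if i != j:
                dist = math.dist(xi, xj) + 1e-12
                for d in range(len(xi)):
                    # stochastically weighted attraction toward mass j
                    acc[d] += rng.random() * G * M[j] * (xj[d] - xi[d]) / dist
        new.append([x + a for x, a in zip(xi, acc)])
    return new

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
init_f = min(map(sphere, pop))
best_f = init_f
for t in range(100):
    pop = gsa_step(pop, sphere, G=math.exp(-0.02 * t), rng=rng)  # decaying G
    best_f = min(best_f, min(map(sphere, pop)))
print(best_f <= init_f)  # → True (best-so-far can only improve)
```

The decaying gravitational constant G plays the same exploration-to-exploitation role that temperature plays in simulated annealing.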
Search for Dark Matter Annihilation in Galaxy Groups.
Lisanti, Mariangela; Mishra-Sharma, Siddharth; Rodd, Nicholas L; Safdi, Benjamin R
2018-03-09
We use 413 weeks of publicly available Fermi Pass 8 gamma-ray data combined with recently developed galaxy group catalogs to search for evidence of dark matter annihilation in extragalactic halos. In our study, we use luminosity-based mass estimates and mass-to-concentration relations to infer the J factors and associated uncertainties for hundreds of galaxy groups within a redshift range z≲0.03. We employ a conservative substructure boost factor model, which only enhances the sensitivity by an O(1) factor. No significant evidence for dark matter annihilation is found, and we exclude thermal relic cross sections for dark matter masses below ∼30 GeV to 95% confidence in the bb[over ¯] annihilation channel. These bounds are comparable to those from Milky Way dwarf spheroidal satellite galaxies. The results of our analysis increase the tension but do not rule out the dark matter interpretation of the Galactic Center excess. We provide a catalog of the galaxy groups used in this study and their inferred properties, which can be broadly applied to searches for extragalactic dark matter.
Search for evidence of source event grouping among ureilites
Beard, S. P.; Swindle, T. D.
2017-11-01
We use cosmic-ray exposure (CRE) ages of ureilites, combined with magnesium numbers of olivine, and oxygen isotopes, to search for evidence of specific source events initiating exposure for groups of ureilites. This technique can also be used to investigate the heterogeneity of the body from which the samples were derived. There are a total of 39 ureilites included in our work, which represents the largest collection of ureilite CRE age data used to date. Although we find some evidence of possible clusters, it is clear that most ureilites did not originate in one or two events on a homogeneous parent body.
Wu, Xia; Wu, Genhua
2014-01-01
Highlights: • A highly efficient method for the optimization of atomic clusters is developed. • Its performance is studied by optimizing Lennard-Jones clusters and Ag clusters. • The method is proved to be quite efficient. • A new Ag61 cluster with a stacking-fault face-centered cubic motif is found. - Abstract: Geometrical optimization of atomic clusters is performed by a development of the adaptive immune optimization algorithm (AIOA) with a dynamic lattice searching (DLS) operation (the AIOA-DLS method). By a cycle of construction and searching of the dynamic lattice (DL), the DLS algorithm rapidly makes the clusters more regular and greatly reduces the potential energy. DLS can thus be used as an operation acting on the new individuals after the mutation operation in AIOA to improve its performance. The AIOA-DLS method combines the merits of evolutionary algorithms and the idea of the dynamic lattice. The performance of the proposed method is investigated in the optimization of Lennard-Jones clusters within 250 atoms and silver clusters described by the many-body Gupta potential within 150 atoms. Results reported in the literature are reproduced, and the motif of the Ag61 cluster is found to be stacking-fault face-centered cubic, whose energy is lower than that of the previously obtained icosahedron.
Wastewater Treatment Optimization for Fish Migration Using Harmony Search
Zong Woo Geem
2014-01-01
Certain types of fish migrate between the sea and fresh water to spawn. In order for them to swim without any breathing problem, the river should contain enough oxygen. If fish pass along a river in a municipal area, they need a sufficient dissolved oxygen level, which is influenced by the amount of wastewater dumped into the river. If existing treatment methods such as settling and biological oxidation are not enough, we have to consider additional treatment methods such as microscreening filtration and nitrification. This study constructed a wastewater treatment optimization model for migratory fish, which considers three costs (filtration cost, nitrification cost, and irrigation cost) and two environmental constraints (a minimal dissolved oxygen level and a maximal nitrate-nitrogen concentration). Results show that a metaheuristic technique such as harmony search could find good solutions robustly, while a calculus-based technique such as the generalized reduced gradient method was trapped in local optima or even diverged.
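A plain harmony search of the kind applied here can be sketched as follows. The two-variable cost function with a penalty term is a toy stand-in; in the study, the three treatment costs and the dissolved-oxygen and nitrate constraints would enter through the objective (e.g. as penalty terms), which is an assumption about the formulation.

```python
import random

def harmony_search(cost, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=500, seed=0):
    """Plain harmony search minimizer (generic sketch)."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    rand_sol = lambda: [rng.uniform(l, h) for l, h in zip(lo, hi)]
    memory = sorted((rand_sol() for _ in range(hms)), key=cost)
    for _ in range(iters):
        new = []
        for d in range(len(bounds)):
            if rng.random() < hmcr:                  # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi[d] - lo[d])
            else:                                    # random selection
                v = rng.uniform(lo[d], hi[d])
            new.append(min(max(v, lo[d]), hi[d]))
        if cost(new) < cost(memory[-1]):             # replace worst harmony
            memory[-1] = new
            memory.sort(key=cost)
    return memory[0]

# toy "treatment cost" with a penalty for violating a toy constraint
cost = lambda x: x[0] ** 2 + 2 * x[1] ** 2 + 100 * max(0.0, 0.5 - x[0] - x[1])
best = harmony_search(cost, [(0, 2), (0, 2)])
print("best cost:", cost(best))
```

Because a new harmony only ever replaces the worst one in memory when it is strictly better, the best harmony's cost is monotonically non-increasing.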
System identification using Nuclear Norm & Tabu Search optimization
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
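A generic tabu search minimizer of the kind proposed can be sketched as below. The objective and neighborhood here are toy placeholders: in the paper, the cost would be the nuclear-norm-based criterion built from the Hankel and observability matrices, and the neighborhood would perturb the decision variables of that problem.

```python
import random

def tabu_search(cost, start, neighbors, iters=200, tenure=10, seed=0):
    """Generic tabu search: move to the best admissible neighbor (even if
    worse, to escape local minima); recently visited points are tabu."""
    rng = random.Random(seed)
    current = best = start
    tabu = []                                    # short-term memory
    for _ in range(iters):
        cands = [n for n in neighbors(current, rng) if n not in tabu]
        if not cands:
            continue
        current = min(cands, key=cost)           # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                          # expire oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# toy discrete problem: minimize sum of squares over integer vectors
cost = lambda v: sum(x * x for x in v)
def neighbors(v, _rng):
    return [tuple(x + dx if i == j else x for j, x in enumerate(v))
            for i in range(len(v)) for dx in (-1, 1)]

best = tabu_search(cost, (7, -6), neighbors)
print(best)  # → (0, 0)
```

The tabu list is what distinguishes this from plain hill climbing: forbidding recently visited points forces the search to move through (and past) shallow local minima.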
Use of search engine optimization factors for Google page rank prediction
Tvrdi, Barbara
2012-01-01
Over the years, search engines have become an important tool for finding information. It is known that users select the link on the first page of search results in 62% of the cases. Search engine optimization techniques enable website improvement and therefore a better ranking in search engines. The exact specification of the factors that affect website ranking is not disclosed by search engine owners. In this thesis we tried to choose some most frequently mentioned search engine optimizatio...
Stochastic search, optimization and regression with energy applications
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage version. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression
Retrospective group fusion similarity search based on eROCE evaluation metric.
Avram, Sorin I; Crisan, Luminita; Bora, Alina; Pacureanu, Liliana M; Avram, Stefana; Kurunczi, Ludovic
2013-03-01
In this study, a simple evaluation metric, denoted eROCE, was proposed to measure the early enrichment of predictive methods. We demonstrated the superior robustness of eROCE compared to other known metrics across several active-to-inactive ratios ranging from 1:10 to 1:1000. Group fusion similarity search was investigated by varying 16 similarity coefficients, five molecular representations (binary and non-binary) and two group fusion rules, using two reference structure set sizes. We used a dataset of 3478 active and 43,938 inactive molecules, and the enrichment was analyzed by means of eROCE. This retrospective study provides optimal similarity search parameters in the case of ALDH1A1 inhibitors. Copyright © 2013 Elsevier Ltd. All rights reserved.
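Group fusion similarity search itself is simple to sketch: each database molecule is scored against every reference active, and the scores are fused by a rule such as MAX or mean. The Tanimoto coefficient and the tiny bit-set fingerprints below are illustrative stand-ins for the 16 coefficients and five representations compared in the study.

```python
def tanimoto(a, b):
    """Tanimoto coefficient on binary fingerprints stored as sets of on-bits."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if a or b else 0.0

def group_fusion_score(query, references, rule="max"):
    """Fuse the query's similarities to the whole reference set.
    MAX and mean are the two common group fusion rules."""
    sims = [tanimoto(query, ref) for ref in references]
    return max(sims) if rule == "max" else sum(sims) / len(sims)

refs = [{1, 2, 3, 8}, {2, 3, 4}, {1, 5, 9}]          # known actives (toy)
database = {"mol_a": {1, 2, 3}, "mol_b": {6, 7}, "mol_c": {2, 4, 5}}
ranked = sorted(database,
                key=lambda m: group_fusion_score(database[m], refs),
                reverse=True)
print(ranked[0])  # → mol_a
```

Ranking the whole database by the fused score and inspecting the top fraction is exactly the setting in which an early-enrichment metric such as eROCE is computed.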
Search of amino group in the Universe: 2-aminopyridine
Sharma M.K.
2015-01-01
In the search for life in the Universe, scientists are interested in the identification of molecules having the amino (-NH2) group in interstellar space. Aminoacetonitrile (NH2CH2CN), which is a precursor of the simplest amino acid glycine (NH2CH2COOH), has been identified near the galactic center. 2-Aminopyridine (H2NC5H4N) is of interest to scientists as it has a close association with life on Earth. Based on spectroscopic studies, we have calculated intensities of 2-aminopyridine lines due to transitions between the rotational levels up to 47 cm−1 and have found a number of lines which may help in its identification in the interstellar medium. Frequencies of some of these transitions are found close to those detected in the envelope of IRC +10216 that are not assigned to any of the known species.
Portfolio Optimization for Multiple Group Credit Unions
Willis, John
1999-01-01
...) to diversify, credit unions now have the opportunity to market their services to specific employee groups or industries which can reduce the overall risk to the credit unions' health or solvency...
Optimal Control of Sensor Threshold for Autonomous Wide Area Search Munitions
Kish, Brian A; Jacques, David R; Pachter, Meir
2005-01-01
The optimal employment of autonomous wide area search munitions is addressed. The scenario considered involves an airborne munition searching a battle space for stationary targets in the presence of false targets...
Optimal correction and design parameter search by modern methods of rigorous global optimization
Makino, K.; Berz, M.
2011-01-01
Frequently the design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for correction, usually appearing in disconnected regions of parameter space which cannot be directly characterized by analytical means. In such cases, an abundance of optimization runs is frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
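The branch-and-bound idea described above can be illustrated on a toy one-dimensional problem. Here a naive interval extension plays the role of the rigorous underestimator (the paper uses Differential Algebraic / Taylor model methods, which are far tighter); boxes whose rigorous lower bound exceeds the best known upper bound are eliminated, and the survivors enclose all global minimizers.

```python
def f_interval(lo, hi):
    """Rigorous bounds of f(x) = x**2 - 2*x on [lo, hi] via naive interval
    arithmetic (a toy stand-in for a rigorous optics objective bound)."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo   # [x^2] + [-2x], endpoint bounds

def branch_and_bound(lo, hi, tol=1e-4):
    """Discard boxes whose rigorous lower bound exceeds the best known upper
    bound; subdivide the rest until narrower than tol."""
    boxes, best_ub, minima = [(lo, hi)], float("inf"), []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = f_interval(a, b)
        if flo > best_ub:          # rigorously cannot contain the minimum
            continue
        best_ub = min(best_ub, fhi)
        if b - a < tol:
            minima.append((a, b))  # surviving enclosure of a minimizer
        else:
            m = 0.5 * (a + b)
            boxes += [(a, m), (m, b)]
    return minima, best_ub

minima, ub = branch_and_bound(-4.0, 4.0)
# the true minimizer x = 1 (f = -1) must survive in some retained box
print(any(a <= 1.0 <= b for a, b in minima), round(ub, 3))  # → True -1.0
```

Because pruning is only ever done against rigorous bounds, no global minimizer can be lost; this is what distinguishes the approach from the repeated local-optimization runs described in the abstract.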
Optimizing Search and Ranking in Folksonomy Systems by Exploiting Context Information
Abel, Fabian; Henze, Nicola; Krause, Daniel
Tagging systems enable users to annotate resources with freely chosen keywords. The evolving bunch of tag assignments is called folksonomy and there exist already some approaches that exploit folksonomies to improve resource retrieval. In this paper, we analyze and compare graph-based ranking algorithms: FolkRank and SocialPageRank. We enhance these algorithms by exploiting the context of tags, and evaluate the results on the GroupMe! dataset. In GroupMe!, users can organize and maintain arbitrary Web resources in self-defined groups. When users annotate resources in GroupMe!, this can be interpreted in context of a certain group. The grouping activity itself is easy for users to perform. However, it delivers valuable semantic information about resources and their context. We present GRank that uses the context information to improve and optimize the detection of relevant search results, and compare different strategies for ranking result lists in folksonomy systems.
Optimization of renormalization group transformations in lattice gauge theory
Lang, C.B.; Salmhofer, M.
1988-01-01
We discuss the dependence of the renormalization group flow on the choice of the renormalization group transformation (RGT). An optimal choice of the transformation's parameters should lead to a renormalized trajectory close to a few-parameter action. We apply a recently developed method to determine an optimal RGT to SU(2) lattice gauge theory and discuss the achieved improvement. (orig.)
Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping
McDougall, Sine; Tyrer, Victoria; Folkard, Simon
2006-01-01
Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…
Taking It to the Top: A Lesson in Search Engine Optimization
Frydenberg, Mark; Miko, John S.
2011-01-01
Search engine optimization (SEO), the promoting of a Web site so it achieves optimal position with a search engine's rankings, is an important strategy for organizations and individuals in order to promote their brands online. Techniques for achieving SEO are relevant to students of marketing, computing, media arts, and other disciplines, and many…
Bramer, W. M.; Rethlefsen, Melissa L.; Kleijnen, Jos; Franco, Oscar H.
2017-01-01
Background: Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases
Search and optimization by metaheuristics techniques and algorithms inspired by nature
Du, Ke-Lin
2016-01-01
This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...
Piehowski, Paul D; Petyuk, Vladislav A; Sandoval, John D; Burnum, Kristin E; Kiebel, Gary R; Monroe, Matthew E; Anderson, Gordon A; Camp, David G; Smith, Richard D
2013-03-01
For bottom-up proteomics, there is a wide variety of database-searching algorithms in use for matching peptide sequences to tandem MS spectra. Likewise, there are numerous strategies being employed to produce a confident list of peptide identifications from the different search algorithm outputs. Here we introduce a grid-search approach for determining optimal database filtering criteria in shotgun proteomics data analyses that is easily adaptable to any search. Systematic Trial and Error Parameter Selection--referred to as STEPS--utilizes user-defined parameter ranges to test a wide array of parameter combinations to arrive at an optimal "parameter set" for data filtering, thus maximizing confident identifications. The benefits of this approach in terms of numbers of true-positive identifications are demonstrated using datasets derived from immunoaffinity-depleted blood serum and a bacterial cell lysate, two common proteomics sample types. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Binary cuckoo search based optimal PMU placement scheme for ...
R. A. Swief; T. S. Abdel-Salam; Noha H. El-Amary
2018-01-01
This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Various reliability indices, such as Energy Not Supplied, System Average Interruption Frequency Index, and System Average Interruption Duration Index, are the main indicators of reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and to determine the size of ...
Improved quantum-behaved particle swarm optimization with local search strategy
Maolong Xi
2017-03-01
Quantum-behaved particle swarm optimization, which was motivated by analysis of particle swarm optimization and quantum systems, has shown performance comparable to other evolutionary algorithms in finding the optimal solutions for many optimization problems. To address the problem of premature convergence, a local search strategy is proposed to improve the performance of quantum-behaved particle swarm optimization. In the proposed local search strategy, a super particle is presented, which is a collection of dimension information from randomly selected particles in the swarm. The selection probability of each particle in the swarm differs and is determined by its fitness value. For minimization problems, the smaller a particle's fitness value, the higher its selection probability and the more information it contributes to constructing the super particle. In addition, in order to investigate the influence of different local search spaces on algorithm performance, four methods of computing the local search radius are applied in the local search strategy, yielding four variants of local search quantum-behaved particle swarm optimization. Empirical studies on a suite of well-known benchmark functions are undertaken in order to make an overall performance comparison among the proposed methods and other quantum-behaved particle swarm optimization algorithms. The simulation results show that the proposed quantum-behaved particle swarm optimization variants have advantages over the original quantum-behaved particle swarm optimization.
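The super-particle construction described in this abstract can be sketched as follows. This is a minimal illustration, not code from the paper; the function name, the fitness-to-weight mapping, and the seeding are assumptions. Each dimension of the super particle is copied from a donor particle chosen with a probability that grows as its fitness (for minimization) shrinks.

```python
import random

def build_super_particle(swarm, fitness, rng=random.Random(0)):
    """Assemble a 'super particle' dimension by dimension.

    Each dimension is copied from a swarm member chosen with probability
    inversely related to its fitness (minimization: smaller is better).
    The weighting scheme below is one plausible choice, not the paper's.
    """
    # Smaller fitness -> larger selection weight (assumed mapping).
    max_f = max(fitness)
    weights = [max_f - f + 1e-12 for f in fitness]
    dim = len(swarm[0])
    super_particle = []
    for d in range(dim):
        donor = rng.choices(swarm, weights=weights, k=1)[0]
        super_particle.append(donor[d])
    return super_particle

swarm = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
fitness = [0.1, 0.5, 2.0]  # particle 0 is the best (smallest fitness)
sp = build_super_particle(swarm, fitness)
```

The resulting `sp` mixes coordinates from several particles, biased toward the fitter ones, and can then seed a local search around the current best solution.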
Energy group structure determination using particle swarm optimization
Yi, Ce; Sjoden, Glenn
2013-01-01
Highlights: ► Particle swarm optimization is applied to determine the broad group structure. ► A graph representation of the broad group structure problem is introduced. ► The approach is tested on a fuel-pin model. - Abstract: Multi-group theory is widely applied for energy domain discretization when solving the Linear Boltzmann Equation. To reduce the computational cost, fine group cross section libraries are often down-sampled into broad group cross section libraries. Cross section data collapsing generally involves two steps: first, the broad group structure has to be determined; second, a weighting scheme is used to evaluate the broad group cross section library based on the fine group cross section data and the broad group structure. A common scheme is to average the fine group cross sections weighted by the fine group flux. Cross section collapsing techniques have been intensively researched. However, most studies use a pre-determined group structure, often based on experience, to divide the neutron energy spectrum into thermal, epi-thermal, fast, etc. energy ranges. In this paper, a swarm intelligence algorithm, particle swarm optimization (PSO), is applied to optimize the broad group structure. A graph representation of the broad group structure determination problem is introduced, and the swarm intelligence algorithm is used to solve the graph model. The effectiveness of the approach is demonstrated using a fuel-pin model
Optimal search filters for renal information in EMBASE.
Iansavichus, Arthur V; Haynes, R Brian; Shariff, Salimah Z; Weir, Matthew; Wilczynski, Nancy L; McKibbon, Ann; Rehman, Faisal; Garg, Amit X
2010-07-01
EMBASE is a popular database used to retrieve biomedical information. Our objective was to develop and test search filters to help clinicians and researchers efficiently retrieve articles with renal information in EMBASE. We used a diagnostic test assessment framework because filters operate similarly to screening tests. We divided a sample of 5,302 articles from 39 journals into development and validation sets of articles. Information retrieval properties were assessed by treating each search filter as a "diagnostic test" or screening procedure for the detection of relevant articles. We tested the performance of 1,936,799 search filters made of unique renal terms and their combinations. REFERENCE STANDARD & OUTCOME: The reference standard was manual review of each article. We calculated the sensitivity and specificity of each filter to identify articles with renal information. The best renal filters consisted of multiple search terms, such as "renal replacement therapy," "renal," "kidney disease," and "proteinuria," and the truncated terms "kidney," "dialy," "neph," "glomerul," and "hemodial." These filters achieved peak sensitivities of 98.7% (95% CI, 97.9-99.6) and specificities of 98.5% (95% CI, 98.0-99.0). The retrieval performance of these filters remained excellent in the validation set of independent articles. The retrieval performance of any search will vary depending on the quality of all search concepts used, not just renal terms. We empirically developed and validated high-performance renal search filters for EMBASE. These filters can be programmed into the search engine or used on their own to improve the efficiency of searching.
Simulation to Support Local Search in Trajectory Optimization Planning
Morris, Robert A.; Venable, K. Brent; Lindsey, James
2012-01-01
NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is by exploiting powerful search techniques coming from artificial intelligence coupled with simulation and field tests to design low-noise flight profiles which can be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints directly into the problem formulation that addresses passenger safety and comfort.
A modified harmony search based method for optimal rural radial ...
International Journal of Engineering, Science and Technology, Vol. 2, No. 3 (2010).
Hailong Wang
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, while its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F is adaptively decreased as the number of iterations increases, and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.
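The paper's exact redesign of F is not reproduced in this abstract. As a rough illustration of the general idea only (a Metropolis acceptance probability whose effective temperature falls with the iteration count, so the amplitude shrinks over time without any extra parameters), one might write:

```python
import math

def metropolis_amplitude(delta_fitness, iteration):
    """Sketch of an amplitude control factor F that shrinks with iterations.

    A worsening move (delta_fitness > 0) is weighted by the Metropolis
    probability exp(-delta / T), where the 'temperature' T = 1 / iteration
    decays as the search proceeds, so F is adaptively decreased over time.
    This mapping is an assumption for illustration, not the BSAISA formula.
    """
    temperature = 1.0 / iteration      # cools as iterations increase
    if delta_fitness <= 0:             # improving move: full amplitude
        return 1.0
    return math.exp(-delta_fitness / temperature)

early = metropolis_amplitude(0.5, iteration=1)
late = metropolis_amplitude(0.5, iteration=100)
assert late < early  # amplitude decays as the iteration count grows
```

The key design property is that the schedule is driven entirely by the iteration counter and the fitness change, so no additional tunable parameters are introduced.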
Ambush frequency should increase over time during optimal predator search for prey.
Alpern, Steve; Fokkink, Robbert; Timmer, Marco; Casas, Jérôme
2011-11-07
We advance and apply the mathematical theory of search games to model the problem faced by a predator searching for prey. Two search modes are available: ambush and cruising search. Some species can adopt either mode, with their choice at a given time traditionally explained in terms of varying habitat and physiological conditions. We present an additional explanation of the observed predator alternation between these search modes, which is based on the dynamical nature of the search game they are playing: the possibility of ambush decreases the propensity of the prey to frequently change locations and thereby renders it more susceptible to the systematic cruising search portion of the strategy. This heuristic explanation is supported by showing that in a new idealized search game where the predator is allowed to ambush or search at any time, and the prey can change locations at intermittent times, optimal predator play requires an alternation (or mixture) over time of ambush and cruise search. Thus, our game is an extension of the well-studied 'Princess and Monster' search game. Search games are zero sum games, where the pay-off is the capture time and neither the Searcher nor the Hider knows the location of the other. We are able to determine the optimal mixture of the search modes when the predator uses a mixture which is constant over time, and also to determine how the mode mixture changes over time when dynamic strategies are allowed (the ambush probability increases over time). In particular, we establish the 'square root law of search predation': the optimal proportion of active search equals the square root of the fraction of the region that has not yet been explored.
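The "square root law of search predation" quoted at the end can be stated as a one-liner (illustrative only; the function and variable names are not from the paper):

```python
import math

def active_search_share(unexplored_fraction):
    """Square root law of search predation: the optimal proportion of time
    spent in active (cruise) search equals the square root of the fraction
    of the region not yet explored; the remainder is spent in ambush."""
    if not 0.0 <= unexplored_fraction <= 1.0:
        raise ValueError("fraction must lie in [0, 1]")
    return math.sqrt(unexplored_fraction)

# Early in the hunt (nothing explored yet) the predator cruises full-time;
# as the unexplored fraction shrinks, ambush takes over an increasing share.
assert active_search_share(1.0) == 1.0
assert active_search_share(0.25) == 0.5
```

Since the square root of a fraction in (0, 1) exceeds the fraction itself, cruising remains over-weighted relative to the unexplored area until the very end of the search, which matches the paper's finding that the ambush probability increases over time.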
The topography of the environment alters the optimal search strategy for active particles
Volpe, Giorgio; Volpe, Giovanni
2017-10-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
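A common way to draw Lévy-distributed step lengths with tail exponent α is inverse-transform sampling from a Pareto-type density p(l) ~ l^-(α+1). This is a generic sketch, not the authors' code; the minimum step length `l_min` and the seeding are assumptions.

```python
import random

def levy_step(alpha, l_min=1.0, rng=random.Random(42)):
    """Draw one step length from a power-law tail p(l) ~ l^-(alpha+1),
    l >= l_min, via inverse-transform sampling. Smaller alpha produces
    more ballistic (long) steps; alpha -> 2 approaches Brownian-like motion."""
    u = rng.random()                      # uniform in [0, 1)
    return l_min * (1.0 - u) ** (-1.0 / alpha)

rng = random.Random(0)
ballistic = [levy_step(alpha=1.0, rng=rng) for _ in range(1000)]
brownian_like = [levy_step(alpha=2.0, rng=rng) for _ in range(1000)]
```

Sweeping α between 1 and 2 with a generator like this is how one would reproduce, under stated assumptions, the kind of search-efficiency comparison across topographies that the paper reports.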
TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION) 2.0
Charles-Victor Boutet
2012-12-01
In the age of the writable web, new skills and new practices are appearing. In an environment that allows everyone to communicate information globally, internet referencing (or SEO) is a strategic discipline that aims to generate visibility, internet traffic and maximum exploitation of a site's publications. Often misperceived as a fraud, SEO has evolved to become a facilitating tool for anyone who wishes to reference their website with search engines. In this article we show that it is possible to achieve the first rank in search results for keywords that are very competitive. We show methods that are quick, sustainable and legal, while applying the principles of active SEO 2.0. This article also clarifies some working functions of search engines and some advanced referencing techniques (that are completely ethical and legal), and we lay the foundations for an in-depth reflection on the qualities and advantages of these techniques.
ERRATUM: TOWARDS ACTIVE SEO (SEARCH ENGINE OPTIMIZATION) 2.0
Charles-Victor Boutet
2013-04-01
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
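For a discrete target distribution, the square-root rule stated above amounts to normalizing the element-wise square roots of the target probabilities (an illustrative sketch; the function name is not from the paper):

```python
import math

def optimal_search_distribution(target_probs):
    """When the searcher must land very close to a hidden target drawn from
    target_probs, the claimed optimal search distribution is proportional
    to the square root of the target distribution, in any dimension."""
    roots = [math.sqrt(p) for p in target_probs]
    total = sum(roots)
    return [r / total for r in roots]

# sqrt of [0.81, 0.09, 0.09, 0.01] is [0.9, 0.3, 0.3, 0.1]; normalizer 1.6.
q = optimal_search_distribution([0.81, 0.09, 0.09, 0.01])
assert abs(sum(q) - 1.0) < 1e-12
assert abs(q[0] - 0.9 / 1.6) < 1e-12
```

Note how the square root flattens the search relative to the target distribution: the most likely location is sampled less often than its raw probability would suggest, spreading trials toward unlikely locations.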
Search Trees with Relaxed Balance and Near-Optimal Height
Fagerberg, Rolf; Jensen, Rune E.; Larsen, Kim Skak
2001-01-01
We introduce a relaxed k-tree, a search tree with relaxed balance and a height bound, when in balance, of (1+epsilon)log_2 n + 1, for any epsilon > 0. The number of nodes involved in rebalancing is O(1/epsilon) per update in the amortized sense, and O(log n/epsilon) in the worst case sense. This ...... constant rebalancing, which is an improvement over the current definition. World Wide Web search engines are possible applications for this line of work....
Optimal Finger Search Trees in the Pointer Machine
Brodal, Gerth Stølting; Lagogiannis, George; Makris, Christos
2003-01-01
We develop a new finger search tree with worst-case constant update time in the Pointer Machine (PM) model of computation. This was a major problem in the field of Data Structures and was tantalizingly open for over twenty years while many attempts by researchers were made to solve it. The result...
Optimization of Transformation Coefficients Using Direct Search and Swarm Intelligence
Manusov V.Z.
2017-04-01
This research considers optimization of the tap positions of transformers in power systems to reduce power losses. Currently, methods based on heuristic rules and fuzzy logic, or methods that optimize parts of the whole system separately, are applied to this problem. The first approach requires expert knowledge about processes in the network. The second class of methods is not able to consider all the interrelations of the system's parts, even though a change in one segment affects the entire system. Both approaches are difficult to implement and require adjustment to the tasks being solved. What is needed are algorithms that can take into account the complex interrelations of the optimized variables and adapt themselves to the optimization task. Such algorithms include Swarm Intelligence algorithms. Their main features are self-organization, which allows them to automatically adapt to the conditions of a task, and the ability to efficiently escape from local extrema. Thus, they do not require specialized knowledge of the system, in contrast to fuzzy logic. In addition, they can efficiently find quasi-optimal solutions converging to the global optimum. This research applies the Particle Swarm Optimization algorithm (PSO). A model of the Tajik power system was used in the experiments. It was found that PSO is much more efficient than greedy heuristics and more flexible and easier to use than fuzzy logic. PSO allows reducing active power losses from 48.01 to 45.83 MW (4.5%). By comparison, the effect of using greedy heuristics or fuzzy logic is two times smaller (2.3%).
DETERMINATION OF BRAKING OPTIMAL MODE OF CONTROLLED CUT OF DESIGN GROUP
A. S. Dorosh
2015-06-01
Purpose. The purpose of applying automation systems to the breaking-up process on the gravity hump is to improve the efficiency of their operation, to fully meet the safety demands of train breaking-up, and to improve the working conditions of hump staff. One of the main tasks of such systems is to ensure reliable separation of cuts at all elements of their rolling route to the classification track. This task is a sophisticated optimization problem and has not received a final solution. Therefore, the task of determining the cut braking mode is quite relevant. The purpose of this research is to find the optimal braking mode of the control cut of a design group. Methodology. To achieve this purpose, direct search methods are used, namely the Box complex method. This method does not require smoothness of the objective function, takes its constraints into account, and does not require calculation of the function's derivatives, using only its values. Findings. Using the Box method, an iterative procedure was developed for determining the optimal braking mode of the control cut of a design group. The procedure maximizes the smallest controlled time interval in the group. To evaluate the effectiveness of the designed procedure, a series of simulation experiments on determining the control cut braking mode of a design group was performed. The results confirmed the efficiency of the developed optimization procedure. Originality. The author formalized the task of optimizing the control cut braking mode of a design group, taking into account the separation of the cuts of a design group at all elements (switches, retarders) during rolling to the classification track. The problem of determining the optimal control cut braking mode of a design group was solved. The developed braking mode ensures reliable separation of the cuts of the group not only at the switches but also at the retarders of the brake position. Practical value. The developed procedure can be
Search Engine Optimization for Flash Best Practices for Using Flash on the Web
Perkins, Todd
2009-01-01
Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.
Optimization of boiling water reactor control rod patterns using linear search
Kiguchi, T.; Doi, K.; Fikuzaki, T.; Frogner, B.; Lin, C.; Long, A.B.
1984-01-01
A computer program for searching for the optimal control rod pattern has been developed. The program is able to find a control rod pattern for which the resulting power distribution is optimal in the sense that it is the closest to the desired power distribution while satisfying all operational constraints. The search procedure consists of iterative use of two steps: sensitivity analyses of local power and thermal margins using a three-dimensional reactor simulator for a simplified prediction model, and a linear search for the optimal control rod pattern with the simplified model. The optimal control rod pattern is found along the direction where the performance index gradient is steepest. This program has been verified to find the optimal control rod pattern through simulations using operational data from the Oyster Creek Reactor
Optimal swimming strategies in mate searching pelagic copepods
Kiørboe, Thomas
2008-01-01
Male copepods must swim to find females, but swimming increases the risk of meeting predators and is expensive in terms of energy expenditure. Here I address the trade-offs between gains and risks and the question of how much and how fast to swim using simple models that optimise the number of lifetime mate encounters. Radically different swimming strategies are predicted for different feeding behaviours, and these predictions are tested experimentally using representative species. In general, male swimming speeds and the difference in swimming speeds between the genders are predicted and observed to increase with increasing conflict between mate searching and feeding. It is high in ambush feeders, where searching (swimming) and feeding are mutually exclusive, and low in species where the matured males do not feed at all. Ambush feeding males alternate between stationary ambush feeding...
Searching for Intertextual Connections in Small Group Text Discussion
Chi, Feng-ming
2012-01-01
This paper reports the sources for and intentions of intertextuality made by 10 groups of Taiwanese university students in the process of discussing two American stories. Two types of data, small group text discussions and oral interviews, were gathered. The results indicated that participants used diverse sources of intertextual links, and with…
Search Frictions, Job Flows and Optimal Monetary Policy
Shoujian Zhang
2014-01-01
Job creation and job destruction are investigated in an economy featured by search frictions in both labour and goods markets. We show that both the unemployment rate and the endogenous job destruction rate increase when the inflation rate rises, because the demand declines due to the increase in the cost of holding money. Our numerical exercises suggest that the destruction of lower productivity jobs and the creation of higher productivity jobs may be inefficiently low under the zero nominal...
Group Elevator Peak Scheduling Based on Robust Optimization Model
ZHANG, J.
2013-08-01
Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows, and optimal scheduling is realized without knowing the exact number of waiting passengers at each calling floor. Specifically, an energy-saving oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model by means of elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the RO certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and the group elevators are scheduled according to the solutions. Simulation results show that the method can improve scheduling performance effectively in the peak pattern. Efficient operation of the group elevators is realized by the RO scheduling method.
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize chaotic maps such as the Logistic map to generate pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that a COA can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs, from the new perspective of both the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of a COA. To achieve high efficiency, it is recommended to adopt an appropriate chaotic map that generates chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
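The two quantities the abstract compares (the chaotic sequence itself and its Lyapunov exponent) can be sketched for the Logistic map as follows. This is a generic illustration, not the paper's code; the initial value and orbit length are arbitrary choices.

```python
import math

def logistic_sequence(x0, n, r=4.0):
    """Iterate the Logistic map x_{k+1} = r x_k (1 - x_k). At r = 4 the
    orbit is chaotic on (0, 1) and is a common pseudo-random source in
    chaos optimization algorithms."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def lyapunov_estimate(xs, r=4.0):
    """Estimate the Lyapunov exponent as the mean of log |f'(x)| along the
    orbit, with f'(x) = r (1 - 2x). For r = 4 the exact value is ln 2."""
    return sum(math.log(abs(r * (1.0 - 2.0 * x))) for x in xs) / len(xs)

xs = logistic_sequence(0.123, 10000)
lam = lyapunov_estimate(xs)  # should be near ln 2 ~ 0.693 for r = 4
```

A positive Lyapunov exponent is what gives the sequence its fast, non-repeating traversal of the search interval; comparing `lam` across different maps is one way to reproduce the kind of comparison the paper performs.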
Proximity search heuristics for wind farm optimal layout
Fischetti, Martina; Monaci, Michele
2016-01-01
A heuristic framework for turbine layout optimization in a wind farm is proposed that combines ad-hoc heuristics and mixed-integer linear programming. In our framework, large-scale mixed-integer programming models are used to iteratively refine the current best solution according to the recently...
The Optimal Taxation of Unskilled Labor with Job Search and Social Assistance
Boone, J.; Bovenberg, A.L.
2002-01-01
In order to explore the optimal taxation of low-skilled labor, we extend the standard model of optimal non-linear income taxation in the presence of quasi-linear preferences in leisure by allowing for involuntary unemployment, job search, an exogenous welfare benefit, and a non-utilitarian social
Rony Baskoro Lukito
2014-12-01
The purpose of this research is to determine how to optimize a web design so that it increases the number of visitors. The number of Internet users in the world continues to grow in line with advances in information technology. Marketing of products and services no longer relies only on printed and electronic media. Moreover, the cost of using the Internet as a marketing medium is relatively inexpensive compared to the use of television, and the Internet reaches audiences 24 hours a day in different parts of the world. But to turn an internet site into one that is visited by many internet users, it is not enough for the site to look good on the outside. Web sites that serve as a marketing medium must be built according to the correct rules, so that they become an optimal marketing medium. One of the good rules in building an internet site as a marketing medium is ensuring that the content of the web site is indexed well in search engines like Google. Indexing optimization here focuses on the Google search engine, since 83% of internet users across the world use Google as their search engine. Search engine optimization, commonly known as SEO (Search Engine Optimization), is an important set of rules that makes an internet site easier for a user to find with the desired keywords.
Optimization of Particle Search Algorithm for CFD-DEM Simulations
G. Baryshev
2013-09-01
Discrete element method has numerous applications in particle physics. However, simulating particles as discrete entities can become costly for large systems. In time-driven DEM simulation, most of the computation time is taken by the contact search stage. We propose an efficient collision detection method which is based on sorting particles by their coordinates. Using multiple sorting criteria allows minimizing the number of potential neighbours and makes this approach well suited to the simulation of massive systems in 3D. This method is compared to a common approach that consists of placing particles onto a grid of cells. An advantage of the new approach is that the simulation parameters are independent of the particle radius and domain size.
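The sorting idea can be illustrated with a single-axis sweep (a simplified sketch, not the authors' implementation, which sorts on multiple criteria): after sorting by x, each particle is compared only against neighbours whose x separation is within one contact diameter, instead of against all N-1 others.

```python
def find_contacts(centers, radius):
    """Sort-based contact search (sweep along x) for equal-radius discs.

    Particles are sorted by x coordinate, so the inner loop can stop as
    soon as the x gap alone exceeds the contact distance 2*radius."""
    order = sorted(range(len(centers)), key=lambda i: centers[i][0])
    contacts = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if centers[j][0] - centers[i][0] > 2 * radius:
                break  # sorted by x: no later particle can be in contact
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:
                contacts.append(tuple(sorted((i, j))))
    return contacts

pts = [(0.0, 0.0), (1.5, 0.0), (10.0, 0.0), (1.5, 1.9)]
hits = find_contacts(pts, radius=1.0)  # pairs within distance 2
```

Because the break condition depends only on positions, not on a cell size, the sweep needs no tuning for particle radius or domain size, which is the property the abstract highlights over grid-of-cells methods.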
Search algorithms as a framework for the optimization of drug combinations.
Diego Calzolari
2008-12-01
Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but at present they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms -- originally developed for digital communication -- modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6-9 interventions in 80-90% of tests, compared with 15-30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution.
Optimizing urology group partnerships: collaboration strategies and compensation best practices.
Jacoby, Dana L; Maller, Bruce S; Peltier, Lisa R
2014-10-01
Market forces in health care have created substantial regulatory, legislative, and reimbursement changes that have had a significant impact on urology group practices. To maintain viability, many urology groups have merged into larger integrated entities. Although group operations vary considerably, the majority of groups have struggled with the development of a strong culture, effective decision-making, and consensus-building around shared resources, income, and expense. Creating a sustainable business model requires urology group leaders to allocate appropriate time and resources to address these issues in a proactive manner. This article outlines collaboration strategies for creating an effective culture, governance, and leadership, and provides practical suggestions for optimizing the performance of the urology group practice.
Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach
Dongshuang Li
2018-05-01
The process of searching for a dynamic constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic service. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, the algorithm accurately supports the discovery of optimal paths as constraints on numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China's road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use the constraints to speed up route filtering.
Rocha, Humberto; Dias, Joana M; Ferreira, Brígida C; Lopes, Maria C
2013-01-01
Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, beam directions most of the time continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam’s-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam’s-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework to furnish a priori knowledge of the problem, so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
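The poll step this abstract describes (a local search over a mesh neighborhood, with the mesh refined when no neighbour improves) can be sketched as a basic compass search. This is a generic illustration, not the authors' BAO framework, and the parameter defaults are invented.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal compass (pattern) search for unconstrained minimization.

    Poll step: evaluate f at the incumbent shifted by +/- step along each
    coordinate axis; move to the first improving neighbour. If no neighbour
    improves, shrink the mesh (halve the step) and poll again.
    """
    x = list(x0)
    fx = f(x)
    d = len(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(d):                  # poll the 2*d mesh neighbours
            for s in (+step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # refine the mesh
        it += 1
    return x, fx
```

In the full framework, a search step (e.g. ordering directions by beam's-eye-view dose scores) would run before each poll to allow moves away from the current neighborhood.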
Tools for searching resonant moving groups in Galactic disc simulations
Roca, S.; Romero-Gomez, M.; Antoja Castelltort, Teresa; Valenzuela, O.; Figuera, F.; Monguio, M.
One of the most plausible explanations for the origin of the moving groups is the orbital and resonant regions related to the large scale structure (bar and spiral arms) of the Milky Way (Antoja 2010). This study has been up to now restricted to the solar radius. Here we propose to investigate the
Multiobjective Optimization of Water Distribution Networks Using Fuzzy Theory and Harmony Search
Zong Woo Geem
2015-07-01
Thus far, various phenomenon-mimicking algorithms, such as the genetic algorithm, simulated annealing, tabu search, shuffled frog-leaping, ant colony optimization, harmony search, cross entropy, scatter search, and honey-bee mating, have been proposed to optimally design water distribution networks with respect to design cost. However, the flow velocity constraint, which is critical for structural robustness against water hammer or for flow circulation against substance sedimentation, has seldom been considered in the optimization formulation because of computational complexity. Thus, this study proposes a novel fuzzy-based velocity reliability index, which is to be maximized while the design cost is simultaneously minimized. The velocity reliability index is included in the existing cost optimization formulation, and this extended multiobjective formulation is applied to two benchmark problems. Results show that the model successfully found a Pareto set of multiobjective design solutions in terms of cost minimization and reliability maximization.
Savsani, Vimal; Patel, Vivek; Gadhvi, Bhargav; Tawhid, Mohamed
2017-01-01
Most modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named the multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs elitist nondominated sorting and crowding distance...
A Harmony Search Algorithm approach for optimizing traffic signal timings
Mauro Dell'Orco
2013-07-01
In this study, a bi-level formulation is presented for solving the Equilibrium Network Design Problem (ENDP). The optimisation of the signal timing has been carried out at the upper level using the Harmony Search Algorithm (HSA), whilst the traffic assignment has been carried out through the Path Flow Estimator (PFE) at the lower level. The results of the HSA have first been compared with those obtained using the Genetic Algorithm and Hill Climbing on a two-junction network for a fixed set of link flows. Secondly, the HSA with the PFE has been applied to a medium-sized network to show the applicability of the proposed algorithm in solving the ENDP. Additionally, in order to test the sensitivity to perceived travel time error, we have run the HSA with the PFE at various levels of perceived travel time error. The results showed that the proposed method is quite simple and efficient in solving the ENDP.
Optimization of search algorithms for a mass spectra library
Domokos, L.; Henneberg, D.; Weimann, B.
1983-01-01
The SISCOM mass spectra library search is mainly an interpretative system producing a "hit list" of similar spectra based on six comparison factors. This paper deals with an extension of the system; the aim is exact identification (retrieval) of those reference spectra in the SISCOM hit list that correspond to the unknown compounds or components of the mixture. Thus, instead of a similarity measure, a decision (retrieval) function is needed to establish the identity of reference and unknown compounds by comparison of their spectra. To facilitate estimation of the weightings of the different variables in the retrieval function, pattern recognition algorithms were applied. Numerous statistical evaluations of three different library collections were made to check the quality of the databases and to derive appropriate variables for the retrieval function.
A Generalized Orienteering Problem for Optimal Search and Interdiction Planning
2013-09-01
proposed for the TOP. Boussier et al. (2007) present a branch-and-price algorithm that relies on a pricing step within the column generation phase... dominates in all metric categories and B&B appears to be the least favorable. We use performance profiles (Dolan and Moré 2002) as a method for comparing... exceeded; with greater computing power it may be possible to obtain the optimal solution in a period of time that can support a 24-hour planning
Joint optimization of production scheduling and machine group preventive maintenance
Xiao, Lei; Song, Sanling; Chen, Xiaohui; Coit, David W.
2016-01-01
Joint optimization models were developed combining group preventive maintenance of a series system and production scheduling. In this paper, we propose a joint optimization model to minimize the total cost, including production cost, preventive maintenance cost, minimal repair cost for unexpected failures, and tardiness cost. The total cost depends on both the production process and the machine maintenance plan associated with reliability. For the problems addressed in this research, any machine unavailability leads to system downtime. Therefore, it is important to optimize the preventive maintenance of machines because their performance impacts the collective production processing associated with all machines. Too lengthy preventive maintenance intervals may be associated with low scheduled machine maintenance cost but may incur expensive costs for unplanned failures due to low machine reliability. Alternatively, too frequent preventive maintenance activities may achieve the desired high machine reliability, but at unacceptably high scheduled maintenance cost. Additionally, production scheduling plans affect tardiness and maintenance cost. Two results are obtained when solving the problem: the optimal group preventive maintenance interval for machines, and the assignment of each job, including the corresponding start time and completion time. To solve this NP-hard problem, random-keys genetic algorithms are used, and a numerical example is solved to illustrate the proposed model. - Highlights: • Group preventive maintenance (PM) planning and production scheduling are jointly optimized. • The maintenance interval and the assignment of jobs are decided by minimizing total cost. • Relationships among system age, PM, and job processing time are quantified. • Random-keys genetic algorithms (GA) are used to solve the NP-hard problem. • Random-keys GA and Particle Swarm Optimization (PSO) are compared.
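The random-keys encoding mentioned in the highlights can be illustrated compactly. The cost model below (single-machine total tardiness) and all names are invented stand-ins for the paper's joint cost function; the point is only how sorting a real-valued chromosome always yields a feasible job order.

```python
import random

def decode(keys):
    # Sorting the keys turns a real-valued vector into a job permutation,
    # so ordinary crossover/mutation always produces feasible schedules.
    return sorted(range(len(keys)), key=lambda j: keys[j])

def total_tardiness(order, proc, due):
    # Illustrative single-machine cost: sum of per-job tardiness.
    t = tard = 0.0
    for j in order:
        t += proc[j]
        tard += max(0.0, t - due[j])
    return tard

def random_keys_ga(proc, due, pop_size=30, gens=100, seed=1):
    """Minimal elitist random-keys GA minimizing total tardiness."""
    rng = random.Random(seed)
    n = len(proc)
    fitness = lambda ind: total_tardiness(decode(ind), proc, due)
    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # uniform crossover on the keys
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.2:            # mutate: resample one key
                child[rng.randrange(n)] = rng.random()
            children.append(child)
        pop = elite + children
    best = min(pop, key=fitness)
    return decode(best), fitness(best)
```

The paper's model would replace `total_tardiness` with the full production-plus-maintenance cost and add the PM interval as a decision variable.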
Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications
Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi
2002-04-01
The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable searching results according to the different policies, the concept of penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA) and the effectiveness of such an agency system has been demonstrated.
Optimal information transmission in organizations: search and congestion
Arenas, A.; Cabrales, A.; Danon, L.; Diaz-Guilera, A.; Guimera, R.; Vega-Redondo, F.
2008-01-01
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or centralized), whereas it is largely homogeneous (or decentralized) for high arrival rates. These observations are in line with a common transformation experienced by information-intensive organizations as their work flow has risen in recent years.
Ant groups optimally amplify the effect of transiently informed individuals
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
Spatial planning via extremal optimization enhanced by cell-based local search
Sidiropoulos, Epaminondas
2014-01-01
A new treatment is presented for land use planning problems by means of extremal optimization in conjunction with cell-based neighborhood local search. Extremal optimization, inspired by self-organized critical models of evolution, has been applied mainly to the solution of classical combinatorial optimization problems. Cell-based local search has been employed by the author elsewhere in problems of spatial resource allocation, in combination with genetic algorithms and simulated annealing. In this paper it complements extremal optimization in order to enhance its capacity for a spatial optimization problem. The hybrid method thus formed is compared to methods from the literature on a specific characteristic problem. It yields better results both in terms of objective function values and in terms of compactness. The latter is an important quantity for spatial planning. The present treatment yields significant compactness values as emergent results.
A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision
loubna benchikhi
2017-10-01
Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users draw on expert knowledge and must experiment with many combinations to find the best one manually. As performance, time, and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Experiments on real examples and comparisons with other metaheuristic-based approaches, namely particle swarm optimization (PSO), reinforcement learning (RL), and ant colony optimization (ACO), show the efficiency of this novel method.
Structural optimization of a motorcycle chassis by pattern search algorithm
Scappaticci, Lorenzo; Bartolini, Nicola; Guglielmino, Eugenio; Risitano, Giacomo
2017-08-01
Changes to the technical regulations of the motorcycle racing world classes introduced the new Moto2 category. The vehicles are prototypes that use single-brand tyres and engines derived from series production, supplied by a single manufacturer. The stability and handling of the vehicle are highly dependent on the geometric properties of the chassis. The performance of a racing motorcycle chassis can be primarily evaluated in terms of weight and stiffness. The aim of this work is to maximize the performance of a tubular frame designed for a racing motorcycle in the Moto2 category. The goal is the implementation of an optimization algorithm that acts on the dimensions of the individual pipes of the frame and involves the design of an objective function to minimize the weight of the frame while controlling its stiffnesses.
Yadav, Parikshit; Kumar, Rajesh; Panda, S.K.; Chang, C.S.
2011-01-01
The Harmony Search (HS) algorithm is a music-inspired meta-heuristic optimization method analogous to the music improvisation process, in which musicians continue to polish their pitches in order to obtain better harmony. The paper focuses on the optimal scheduling of generators to reduce fuel consumption on an oil rig platform. Accurate modeling of the specific fuel consumption (SFC) is significant in this optimization; it has been modeled using cubic spline interpolation. The SFC curve is non-linear and discrete in nature, hence conventional methods fail to give an optimal solution. The HS algorithm has been used for optimal scheduling of generators of both equal and unequal rating. Furthermore, an Improved Harmony Search (IHS) method for generating new solution vectors, which enhances the accuracy and convergence rate of HS, has been employed. The paper also examines the impacts of the constant parameters on the Harmony Search algorithm. Numerical results show that the IHS method has good convergence properties. Moreover, the fuel consumption for the IHS algorithm is lower than for HS and other heuristic or deterministic methods, and it is a powerful search algorithm for various engineering optimization problems.
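The HS improvisation loop described above (memory consideration, pitch adjustment, random selection) can be sketched as follows. This is a generic textbook version on a continuous test function, not the generator-scheduling formulation of the paper, and the default parameter values are assumptions.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search. Each new 'harmony' draws every variable
    either from the harmony memory (prob. hmcr), optionally pitch-adjusted
    (prob. par), or at random from the variable's range; it replaces the
    worst stored harmony whenever it is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for i, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = rng.choice(memory)[i]                      # memory consideration
                if rng.random() < par:                         # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
                    x = min(hi, max(lo, x))
            else:
                x = rng.uniform(lo, hi)                        # random improvisation
            new.append(x)
        score = f(new)
        worst = max(range(hms), key=lambda k: scores[k])
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda k: scores[k])
    return memory[best], scores[best]
```

The IHS variant of the paper adapts `par` and the pitch-adjustment bandwidth over the iterations instead of keeping them constant.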
Particle Dark Matter Searches Outside the Local Group
Regis, Marco; Xia, Jun-Qing; Cuoco, Alessandro; Branchini, Enzo; Fornengo, Nicolao; Viel, Matteo
2015-06-01
If dark matter (DM) is composed of particles which are nongravitationally coupled to ordinary matter, their annihilations or decays in cosmic structures can result in detectable radiation. We show that the most powerful technique to detect a particle DM signal outside the Local Group is to study the angular cross-correlation of nongravitational signals with low-redshift gravitational probes. This method allows us to enhance the signal to noise from the regions of the Universe where the DM-induced emission is preferentially generated. We demonstrate the power of this approach by focusing on GeV-TeV DM and on the recent cross-correlation analysis between the 2MASS galaxy catalogue and the Fermi-LAT γ-ray maps. We show that this technique is more sensitive than other extragalactic γ-ray probes, such as the energy spectrum and angular autocorrelation of the extragalactic background, and emission from clusters of galaxies. Intriguingly, we find that the measured cross-correlation can be well fitted by a DM component, with a thermal annihilation cross section and mass between 10 and 100 GeV, depending on the small-scale DM properties and the γ-ray production mechanism. This solicits further data collection and dedicated analyses.
E.A.J. van Hooft (Edwin); M.Ph. Born (Marise); T.W. Taris (Toon); H. van der Flier (Henk)
2003-01-01
The labor market in many Western countries increasingly diversifies. However, little is known about the job search behavior of 'non-traditional' applicants such as ethnic minorities. This study investigated minority-majority group differences in the predictors of job search behavior, using
Mangal Singh
2017-12-01
This paper considers the use of the Partial Transmit Sequence (PTS) technique to reduce the Peak-to-Average Power Ratio (PAPR) of an Orthogonal Frequency Division Multiplexing (OFDM) signal in wireless communication systems. Search complexity is very high in the traditional PTS scheme because it involves an extensive random search over all combinations of allowed phase vectors, and it increases exponentially with the number of phase vectors. In this paper, a suboptimal metaheuristic algorithm for phase optimization based on an improved harmony search (IHS) is applied to explore the optimal combination of phase vectors, providing improved performance compared with existing evolutionary algorithms such as the harmony search algorithm and the firefly algorithm. IHS enhances the accuracy and convergence rate of the conventional algorithms with very few parameters to adjust. Simulation results show that an improved harmony search-based PTS algorithm can achieve a significant reduction in PAPR using a simple network structure compared with conventional algorithms.
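The exhaustive phase search that makes the traditional PTS scheme expensive can be sketched directly. This toy version uses a naive IDFT, binary phase factors, and contiguous subblocks, all of which are illustrative assumptions rather than the paper's setup. Because the all-ones phase vector is among the candidates, the result can never be worse than the unmodified signal.

```python
import cmath, itertools, math

def papr_db(x):
    # Peak-to-average power ratio of a complex time-domain signal, in dB.
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / len(p)))

def idft(X):
    # Naive inverse DFT, adequate for a small illustrative symbol.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def pts_exhaustive(X, V=4, phases=(1, -1)):
    """Split N subcarriers into V contiguous subblocks, try every phase
    combination, and keep the one with the lowest PAPR."""
    N = len(X)
    size = N // V
    blocks = [[X[k] if v * size <= k < (v + 1) * size else 0 for k in range(N)]
              for v in range(V)]
    time_blocks = [idft(b) for b in blocks]   # IDFT is linear: precompute per block
    best_papr, best_b = float("inf"), None
    for b in itertools.product(phases, repeat=V):
        x = [sum(b[v] * time_blocks[v][n] for v in range(V)) for n in range(N)]
        p = papr_db(x)
        if p < best_papr:
            best_papr, best_b = p, b
    return best_papr, best_b
```

The loop over `itertools.product` is exactly the exponential search a metaheuristic such as IHS replaces: it would sample phase vectors instead of enumerating all of them.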
Research on the optimization strategy of web search engine based on data mining
Chen, Ronghua
2018-04-01
With the wide application of search engines, websites have become an important way for people to obtain information. However, web information is growing explosively, making it very difficult for people to find the information they need; current search engines cannot fully meet this need, so there is an urgent demand for personalized website information services, and data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified through a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it, which has important practical value.
Dual-mode nested search method for categorical uncertain multi-objective optimization
Tang, Long; Wang, Hu
2016-10-01
Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.
An Optimization Model and Modified Harmony Search Algorithm for Microgrid Planning with ESS
Yang Jiao
2017-01-01
To address problems such as the high cost of microgrids (MGs), the balance between supply and demand, and the stability of system operation, and to optimize the MG planning model, an energy storage system (ESS) and the harmony search algorithm (HSA) are proposed. First, the conventional MG planning optimization model is constructed and the constraint conditions are defined: the supply-demand balance and reserve requirements. Second, an ESS is integrated into the optimal model of MG planning. The model with an ESS can solve for and identify parameters such as the optimal power, optimal capacity, and optimal installation year. Third, the convergence speed and robustness of the HSA are optimized and improved. A case study comprising three different cases concludes the paper. The results show that the modified HSA (MHSA) can effectively improve the stability and economy of MG operation with an ESS.
An extension of the directed search domain algorithm to bilevel optimization
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
New Search Space Reduction Algorithm for Vertical Reference Trajectory Optimization
Alejandro MURRIETA-MENDOZA
2016-06-01
Burning the fuel required to sustain a given flight releases pollutants such as carbon dioxide and nitrogen oxides, and the amount of fuel consumed is also a significant expense for airlines. It is desirable to reduce fuel consumption to reduce both pollution and flight costs. To increase fuel savings on a given flight, one option is to compute the most economical vertical reference trajectory (or flight plan). A deterministic algorithm was developed using a numerical aircraft performance model to determine the most economical vertical flight profile considering take-off weight, flight distance, step climbs and weather conditions. This algorithm is based on linear interpolations of the performance model using the Lagrange interpolation method. The algorithm downloads the latest available forecast from Environment Canada according to the departure date and flight coordinates, and calculates the optimal trajectory taking into account the effects of wind and temperature. Techniques to avoid unnecessary calculations are implemented to reduce the computation time. The costs of the reference trajectories proposed by the algorithm are compared with the costs of the reference trajectories proposed by a commercial flight management system, using the fuel consumption estimated by the FlightSim® simulator made by Presagis®.
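Lagrange interpolation, which the algorithm uses to interpolate the aircraft performance tables, evaluates the unique polynomial passing through a set of sample points. A minimal sketch (with illustrative names) follows:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at abscissa x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total
```

With two sample points this reduces to the linear interpolation mentioned in the abstract; with more points it interpolates the performance tables to higher order.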
Nowcasting Unemployment Rates with Google Searches: Evidence from the Visegrad Group Countries
Pavlicek, Jaroslav; Kristoufek, Ladislav
2015-01-01
The online activity of Internet users has repeatedly been shown to provide a rich information set for various research fields. We focus on job-related searches on Google and their possible usefulness in the region of the Visegrad Group - the Czech Republic, Hungary, Poland and Slovakia. Even for rather small economies, the online searches of inhabitants can be successfully utilized for macroeconomic predictions. Specifically, we study unemployment rates and their interconnection with job-related searches. We show that Google searches enhance nowcasting models of unemployment rates for the Czech Republic and Hungary whereas for Poland and Slovakia, the results are mixed. PMID:26001083
Radiotherapy Planning Using an Improved Search Strategy in Particle Swarm Optimization.
Modiri, Arezoo; Gu, Xuejun; Hagan, Aaron M; Sawant, Amit
2017-05-01
Evolutionary stochastic global optimization algorithms are widely used in large-scale, nonconvex problems. However, enhancing the search efficiency and repeatability of these techniques often requires well-customized approaches. This study investigates one such approach. We use the particle swarm optimization (PSO) algorithm to solve a 4D radiation therapy (RT) inverse planning problem, where the key idea is to use respiratory motion as an additional degree of freedom in lung cancer RT. The primary goal is to administer a lethal dose to the tumor target while sparing surrounding healthy tissue. Our optimization iteratively adjusts radiation fluence weights for all beam apertures across all respiratory phases. We implement three PSO-based approaches: the conventionally used unconstrained approach, a hard-constrained approach, and our proposed virtual search. As proof of concept, five lung cancer patient cases are optimized over ten runs using each PSO approach. For comparison, a dynamically penalized likelihood (DPL) algorithm, a popular RT optimization technique, is also implemented and used. The proposed technique significantly improves robustness to random initialization while requiring fewer iteration cycles to converge across all cases. DPL manages to find the global optimum in 2 out of 5 RT cases, over significantly more iterations. The proposed virtual search approach boosts swarm search efficiency and, consequently, improves the optimization convergence rate and robustness of PSO. RT planning is a large-scale, nonconvex optimization problem where finding optimal solutions in a clinically practical time is critical. Our proposed approach can potentially improve optimization efficiency in similar time-sensitive problems.
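A minimal global-best PSO loop of the kind the study builds on can be sketched as follows. This is the textbook unconstrained variant on a toy objective, with assumed inertia and acceleration coefficients, not the authors' virtual search approach.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: each particle tracks its personal best and is
    pulled toward both it and the swarm-wide best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

The fluence-weight problem of the paper would replace the toy objective with a dose-based cost over all apertures and respiratory phases; the virtual search variant additionally modifies how candidate positions are evaluated.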
A penalty guided stochastic fractal search approach for system reliability optimization
Mellal, Mohamed Arezki; Zio, Enrico
2016-01-01
Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems, highlighting the superiority of the proposed approach compared to others from the literature. Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.
PcapDB: Search Optimized Packet Capture, Version 0.1.0.0
2016-11-04
PcapDB is a packet capture system designed to optimize the captured data for fast search in the typical (network incident response) use case. The technology involved in this software has been submitted via the IDEAS system and has been filed as a provisional patent. It includes the following primary components: capture: The capture component utilizes existing capture libraries to retrieve packets from network interfaces. Once retrieved, the packets are passed to additional threads for sorting into flows and indexing. The sorted flows and indexes are passed to other threads so that they can be written to disk. These components are written in the C programming language. search: The search components provide a means to find relevant flows and the associated packets. A search query is parsed and represented as a search tree. Various search commands, written in C, are then used to resolve this tree into a set of search results. The tree generation and search execution management components are written in Python. interface: The PcapDB web interface is written in Python on the Django framework. It provides a series of pages, APIs, and asynchronous tasks that allow the user to manage the capture system, perform searches, and retrieve results. Web page components are written in HTML, CSS, and JavaScript.
Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.
Arendt, Florian; Scherr, Sebastian
2017-11-01
Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.
Weitian Lin
2014-01-01
Full Text Available Particle swarm optimization algorithm (PSOA) is an effective optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for middle- and large-size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies the local and global search combined particle swarm algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence qualification. It is tested with a set of 8 benchmark continuous functions, and its optimization results are compared with the original particle swarm optimization algorithm (OPSOA). Experimental results indicate that the LGSCPSOA improves the search performance significantly, especially on the middle- and large-size benchmark functions.
Multi-objective group scheduling optimization integrated with preventive maintenance
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production of identical/similar and different jobs, the integrated model considers learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in the integrated model. The multi-objective of minimizing total completion time and maintenance cost is adopted to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve the optimization model, and the computational results demonstrate that the integrated model is effective and reliable.
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
Signatures of active and passive optimized Lévy searching in jellyfish.
Reynolds, Andy M
2014-10-06
Some of the strongest empirical support for Lévy search theory has come from telemetry data for the dive patterns of marine predators (sharks, bony fishes, sea turtles and penguins). The dive patterns of the unusually large jellyfish Rhizostoma octopus do, however, sit outside of current Lévy search theory which predicts that a single search strategy is optimal. When searching the water column, the movement patterns of these jellyfish change over time. Movement bouts can be approximated by a variety of Lévy and Brownian (exponential) walks. The adaptive value of this variation is not known. On some occasions movement pattern data are consistent with the jellyfish prospecting away from a preferred depth, not finding an improvement in conditions elsewhere and so returning to their original depth. This 'bounce' behaviour also sits outside of current Lévy walk search theory. Here, it is shown that the jellyfish movement patterns are consistent with their using optimized 'fast simulated annealing'--a novel kind of Lévy walk search pattern--to locate the maximum prey concentration in the water column and/or to locate the strongest of many olfactory trails emanating from more distant prey. Fast simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space. This new finding shows that the notion of active optimized Lévy walk searching is not limited to the search for randomly and sparsely distributed resources, as previously thought, but can be extended to embrace other scenarios, including that of the jellyfish R. octopus. In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents. Here, it is shown that these passive movements can be represented accurately by Lévy walks of the type occasionally seen in R. octopus. This result vividly illustrates that Lévy walks are not necessarily
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
Hamel, Marie-Josee
2012-01-01
We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation
Witt, Carsten
2012-01-01
The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1...
In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data
Yi, Yeon-Sook
2017-01-01
This study compares five cognitive diagnostic models in search of optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…
Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array
Lihua Wang
2014-01-01
Full Text Available In order to deliver the maximum available power to the load under the condition of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which can make the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After finding the current optimal solution with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also yields better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
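The two-stage carrier idea can be illustrated with a one-dimensional sketch. For simplicity this uses the logistic map for both stages (the paper's second stage uses a power-function carrier instead), and the constants `n1`, `n2`, the shrink factor, and the seed value `x0` are illustrative assumptions:

```python
def logistic_map(x, mu=4.0):
    """Fully chaotic logistic map on (0, 1) for mu = 4."""
    return mu * x * (1.0 - x)

def chaos_search(f, lo, hi, n1=500, n2=500, shrink=0.1, x0=0.345):
    """Two-stage chaos optimization: an ergodic global scan driven by the
    logistic map, then a refined scan in a shrunken interval around the
    stage-1 best."""
    x, best_x, best_v = x0, None, float("inf")
    for _ in range(n1):                       # stage 1: global carrier
        x = logistic_map(x)
        cand = lo + (hi - lo) * x
        v = f(cand)
        if v < best_v:
            best_x, best_v = cand, v
    span = shrink * (hi - lo)                 # stage 2: local refinement
    lo2, hi2 = max(lo, best_x - span), min(hi, best_x + span)
    for _ in range(n2):
        x = logistic_map(x)
        cand = lo2 + (hi2 - lo2) * x
        v = f(cand)
        if v < best_v:
            best_x, best_v = cand, v
    return best_x, best_v

# Toy objective standing in for the PV power curve around its peak.
best_x, best_v = chaos_search(lambda x: (x - 2.0) ** 2, lo=0.0, hi=5.0)
```

In the MPPT setting the objective would be the measured panel power as a function of the operating point, maximized rather than minimized.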
Vikram Kumar Kamboj
2016-04-01
Full Text Available In recent years, global warming and carbon dioxide (CO2) emission reduction have become important issues in India. As CO2 emission levels continue to rise with the increased volume of national energy consumption, it is crucial for the Indian government to impose effective policies to promote CO2 emission reduction. The challenge of supplying the nation with high-quality and reliable electrical energy at a reasonable cost has turned government policy toward deregulation and restructuring. This research paper aims to present an effective solution for the energy and environmental problems of electric power using an efficient and powerful hybrid optimization algorithm: the hybrid harmony search-random search algorithm. The proposed algorithm is tested on the standard IEEE 14-bus, 30-bus and 56-bus systems. The effectiveness of the proposed hybrid algorithm is compared with other well-known evolutionary, heuristic and meta-heuristic search algorithms. For multi-objective unit commitment, it is found that since cost and emission are conflicting objectives, improving performance on the cost criterion causes performance on emission to deteriorate.
Renaut, R.; He, Q. [Arizona State Univ., Tempe, AZ (United States)
1994-12-31
A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.
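The inexact line search idea mentioned here (accept an approximate minimizer along the search direction instead of the exact one) is commonly realized as Armijo backtracking; the sketch below applies it to steepest descent on a toy quadratic. The test function and all constants are illustrative choices, not taken from the paper.

```python
def backtracking_line_search(f, grad, x, d, alpha=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: an inexact line search that accepts the first
    step length giving sufficient decrease, instead of minimizing along d."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

def gradient_descent(f, grad, x, iters=100):
    """Steepest descent driven by the inexact (Armijo) line search."""
    for _ in range(iters):
        d = [-g for g in grad(x)]                     # steepest-descent direction
        a = backtracking_line_search(f, grad, x, d)
        x = [xi + a * di for xi, di in zip(x, d)]
    return x

# Convex quadratic: f(x, y) = (x - 1)^2 + 2 (y + 2)^2, minimum at (1, -2).
f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 2) ** 2
grad = lambda x: [2 * (x[0] - 1), 4 * (x[1] + 2)]
xmin = gradient_descent(f, grad, [5.0, 5.0])
```

The paper's inexact subspace search generalizes this from a single direction d to a whole subspace of directions.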
Thi Rein Myo
2008-11-01
Full Text Available Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining Genetic Algorithm and Pattern Search into a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.
Veena Anand
2017-01-01
Full Text Available The limited and non-rechargeable energy resources in Wireless Sensor Networks (WSNs) create a challenge and have led to the development of various clustering and routing algorithms. This paper proposes an approach for improving network lifetime by using particle swarm optimization based clustering and harmony search based routing in WSN. Globally optimal cluster heads are selected, and gateway nodes are introduced to decrease the energy consumption of the CH while sending aggregated data to the base station (BS). Next, a harmony search based local search strategy finds the best routing path from the gateway nodes to the base station. Finally, the proposed algorithm is presented.
Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning
Lejla Banjanovic-Mehmedovic
2011-01-01
Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult part of the overall assembly. A favourable influence of vibration and rotation movement on tolerance compensation was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitude and frequencies of the robot gripper's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified through simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.
A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.
Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei
2017-10-01
The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
Genetic local search algorithm for optimization design of diffractive optical elements.
Zhou, G; Chen, Y; Wang, Z; Song, H
1999-07-10
We propose a genetic local search algorithm (GLSA) for the optimization design of diffractive optical elements (DOE's). This hybrid algorithm incorporates advantages of both genetic algorithm (GA) and local search techniques. It appears better able to locate the global minimum compared with a canonical GA. Sample cases investigated here include the optimization design of binary-phase Dammann gratings, continuous surface-relief grating array generators, and a uniform top-hat focal plane intensity profile generator. Two GLSA's whose incorporated local search techniques are the hill-climbing method and the simulated annealing algorithm are investigated. Numerical experimental results demonstrate that the proposed algorithm is highly efficient and robust. DOE's that have high diffraction efficiency and excellent uniformity can be achieved by use of the algorithm we propose.
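The GLSA structure (canonical GA operators plus an embedded local search) can be sketched generically. This is a memetic-GA illustration on bitstrings with a OneMax fitness, not the DOE design objective of the paper; the population size, rates and the single-bit hill-climbing rule are illustrative assumptions.

```python
import random

def glsa(fitness, n_bits, pop_size=10, gens=10, seed=1):
    """Genetic local search sketch: GA crossover/mutation, with each
    offspring refined by single-bit-flip hill climbing (a memetic GA)."""
    rng = random.Random(seed)

    def hill_climb(ind):
        f, improved = fitness(ind), True
        while improved:
            improved = False
            for i in range(n_bits):
                ind[i] ^= 1                        # tentative bit flip
                fi = fitness(ind)
                if fi > f:
                    f, improved = fi, True         # keep improving flip
                else:
                    ind[i] ^= 1                    # undo otherwise
        return ind

    pop = [hill_climb([rng.randint(0, 1) for _ in range(n_bits)])
           for _ in range(pop_size)]
    for _ in range(gens):
        kids = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)              # pick two parents
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n_bits)] ^= 1  # point mutation
            kids.append(hill_climb(child))
        pop = sorted(pop + kids, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = glsa(sum, n_bits=16)   # OneMax: maximize the number of 1-bits
```

In the paper, the embedded local search is either hill climbing (as here) or simulated annealing, and the fitness would score a candidate DOE phase profile.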
Optimization of travel salesman problem using the ant colony system and Greedy search
Esquivel E, J.; Ordonez A, A.; Ortiz S, J. J.
2008-01-01
In this paper we present some results obtained during the development of optimization systems that can be used to design refueling and control rod patterns in a BWR. These systems use ant colonies and Greedy search. The first phase of this project is to become familiar with these optimization techniques applied to the traveling salesman problem (TSP). The utility of studying the TSP is that, like the refueling design and the control rod pattern design, it is a combinatorial optimization problem. Indeed, the similarity with the refueling design problem is remarkable. Some results are presented for the TSP with the 32 state capitals of Mexico. (Author)
Stochastic optimal foraging: tuning intensive and extensive dynamics in random searches.
Frederic Bartumeus
Full Text Available Recent theoretical developments have laid down the proper mathematical means to understand how the structural complexity of search patterns may improve foraging efficiency. Under information-deprived scenarios and specific landscape configurations, Lévy walks and flights are known to lead to high search efficiencies. Based on a one-dimensional comparative analysis we show a mechanism by which, at random, a searcher can optimize the encounter with close and distant targets. The mechanism consists of combining an optimal diffusivity (optimally enhanced diffusion) with a minimal diffusion constant. In such a way the search dynamics adequately balances the tension between finding close and distant targets, while, at the same time, shifts the optimal balance towards relatively larger close-to-distant target encounter ratios. We find that introducing a multiscale set of reorientations ensures both a thorough local space exploration without oversampling and a fast spreading dynamics at the large scale. Lévy reorientation patterns account for these properties but other reorientation strategies providing similar statistical signatures can mimic or achieve comparable efficiencies. Hence, the present work unveils general mechanisms underlying efficient random search, beyond the Lévy model. Our results suggest that animals could tune key statistical movement properties (e.g. enhanced diffusivity, minimal diffusion constant) to cope with the very general problem of balancing out intensive and extensive random searching. We believe that theoretical developments to mechanistically understand stochastic search strategies, such as the one here proposed, are crucial to develop an empirically verifiable and comprehensive animal foraging theory.
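A minimal way to generate the power-law step lengths underlying such Lévy search models is inverse-transform sampling from a Pareto tail. This is a generic illustration; the exponent, cutoff and one-dimensional walk are arbitrary choices, not the paper's fitted values.

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Draw n step lengths with p(l) ~ l^(-mu) for l >= l_min via
    inverse-transform sampling; 1 < mu <= 3 is the Lévy-walk regime."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()          # u in (0, 1], avoids division by zero
        out.append(l_min * u ** (-1.0 / (mu - 1.0)))
    return out

def walk_1d(steps, seed=1):
    """One-dimensional walk: each step length gets a random sign."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for l in steps:
        x += rng.choice((-1.0, 1.0)) * l
        path.append(x)
    return path

steps = levy_steps(10000, mu=2.0)
path = walk_1d(steps)
```

Mixing step distributions or varying mu over time, as the paper discusses, would correspond to switching between intensive (short-step) and extensive (long-step) search phases.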
Alejandro MURRIETA-MENDOZA
2017-08-01
Full Text Available With the objective of reducing the flight cost and the amount of polluting emissions released in the atmosphere, a new optimization algorithm considering the climb, cruise and descent phases is presented for the reference vertical flight trajectory. The selection of the reference vertical navigation speeds and altitudes was solved as a discrete combinatory problem by means of a graph-tree passing through nodes using the beam search optimization technique. To achieve a compromise between the execution time and the algorithm's ability to find the global optimal solution, a heuristic methodology introducing a parameter called the "optimism coefficient" was used in order to estimate the trajectory's flight cost at every node. The optimal trajectory cost obtained with the developed algorithm was compared with the cost of the optimal trajectory provided by a commercial flight management system (FMS). The global optimal solution was validated against an exhaustive search algorithm (ESA), independent of the proposed algorithm. The developed algorithm takes into account weather effects, step climbs during cruise and air traffic management constraints such as constant altitude segments, constant cruise Mach, and a pre-defined reference lateral navigation route. The aircraft fuel burn was computed using a numerical performance model which was created and validated using flight test experimental data.
Chao-Chih Lin
2017-10-01
Full Text Available A new transient-based hybrid heuristic approach is developed to optimize a transient generation process and to detect leaks in pipe networks. The approach couples the ordinal optimization approach (OOA) and the symbiotic organism search (SOS) to solve the optimization problem by means of iterations. A pipe network analysis model (PNSOS) is first used to determine the steady-state head distribution and pipe flow rates. The best transient generation point and its relevant valve operation parameters are optimized by maximizing the objective function of transient energy. The transient event is created at the chosen point, and the method of characteristics (MOC) is used to analyze the transient flow. The OOA is applied to sift through the candidate pipes and the initial organisms with leak information. The SOS is employed to determine the leaks by minimizing the sum of differences between simulated and computed heads at the observation points. Two synthetic leaking scenarios, a simple pipe network and a water distribution network (WDN), are chosen to test the performance of leak detection ordinal symbiotic organism search (LDOSOS). Leak information can be accurately identified by the proposed approach for both scenarios. The presented technique makes a remarkable contribution to the success of leak detection in pipe networks.
Combining of Direct Search and Signal-to-Noise Ratio for economic dispatch optimization
Lin, Whei-Min; Gow, Hong-Jey; Tsai, Ming-Tang
2011-01-01
This paper integrated the ideas of Direct Search and Signal-to-Noise Ratio (SNR) to develop a Novel Direct Search (NDS) method for solving the non-convex economic dispatch problems. NDS consists of three stages: Direct Search (DS), Global SNR (GSNR) and Marginal Compensation (MC) stages. DS provides a basic solution. GSNR searches the point with optimization strategy. MC fulfills the power balance requirement. With NDS, the infinite solution space becomes finite. Furthermore, a same optimum solution can be repeatedly reached. Effectiveness of NDS is demonstrated with three examples and the solutions were compared with previously published results. Test results show that the proposed method is simple, robust, and more effective than many other previously developed algorithms.
Y. Gholipour
Full Text Available This paper focuses on a metamodel-based design optimization algorithm. The intention is to improve its computational cost and convergence rate. The metamodel-based optimization method introduced here provides the means to reduce computational cost and improve the convergence rate of the optimization through a surrogate. The algorithm combines a high-quality approximation technique called Inverse Distance Weighting with a meta-heuristic algorithm called Harmony Search. The outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system to determine the solution vectors where exact simulation should be applied. Its performance is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improvement in convergence rate.
Solving the wind farm layout optimization problem using random search algorithm
Feng, Ju; Shen, Wen Zhong
2015-01-01
Wind farm (WF) layout optimization is to find the optimal positions of wind turbines (WTs) inside a WF, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. In this work, a random search (RS) algorithm based on continuous formulation is proposed. The algorithm is first tested, obtaining better results than the genetic algorithm (GA) and the old version of the RS algorithm. Second, it is applied to the Horns Rev 1 WF, and the optimized layouts obtain a higher power production than its original layout, both for the real scenario and for two constructed scenarios. In this application, it is also found that in order to get consistent and reliable optimization results, up to 360 or more sectors for wind direction have to be used. Finally, considering the inevitable inter-annual variations in the wind conditions, the robustness of the optimized layouts against wind condition variations is investigated.
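The accept-if-better random search loop used for layout problems of this kind can be sketched as below. The objective here is a toy stand-in (maximize the minimum pairwise turbine spacing) rather than a wake-model power estimate, and the domain size and step scale are illustrative assumptions.

```python
import math
import random

def random_search_layout(n_turbines, size, iters=2000, step=5.0, seed=0):
    """Toy continuous random search for layouts: repeatedly move one
    randomly chosen turbine by a random offset inside the square domain,
    keeping the move only if the surrogate objective improves."""
    rng = random.Random(seed)

    def objective(pos):
        # Surrogate objective: the closest spacing between any two turbines.
        return min(math.dist(a, b) for i, a in enumerate(pos)
                   for b in pos[i + 1:])

    pos = [(rng.uniform(0, size), rng.uniform(0, size))
           for _ in range(n_turbines)]
    best = objective(pos)
    for _ in range(iters):
        i = rng.randrange(n_turbines)
        x = min(size, max(0.0, pos[i][0] + rng.uniform(-step, step)))
        y = min(size, max(0.0, pos[i][1] + rng.uniform(-step, step)))
        old = pos[i]
        pos[i] = (x, y)
        val = objective(pos)
        if val > best:
            best = val                    # keep improving move
        else:
            pos[i] = old                  # revert non-improving move
    return pos, best

layout, spacing = random_search_layout(n_turbines=4, size=100.0)
```

In the paper's setting, the objective would instead evaluate annual energy production over many wind-direction sectors, which is why so many sectors are needed for consistent results.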
Need for Cognition and Active Information Search in Small Student Groups
Curseu, Petru Lucian
2011-01-01
In a sample of 213 students organized in 44 groups this study tests the impact of need for cognition on active information search by using a multilevel analysis. The results show that group members with high need for cognition seek more advice in task related issues than those with low need for cognition and this pattern of information exchange is…
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
Dissertation by Gary W. Kinney Jr., B.G.S., M.S., The University of Texas at Austin, May 2005. Approved for public release; distribution unlimited.
Debkalpa Goswami
2015-03-01
Full Text Available Ultrasonic machining (USM) is a mechanical material removal process used to erode holes and cavities in hard or brittle workpieces by using shaped tools, high-frequency mechanical motion and an abrasive slurry. Unlike other non-traditional machining processes, such as laser beam and electrical discharge machining, the USM process does not thermally damage the workpiece or introduce significant levels of residual stress, which is important for the survival of materials in service. To achieve enhanced machining performance and better machined job characteristics, it is often required to determine the optimal control parameter settings of a USM process. Earlier mathematical approaches for parametric optimization of USM processes have mostly yielded near-optimal or sub-optimal solutions. In this paper, two almost unexplored non-conventional optimization techniques, i.e. the gravitational search algorithm (GSA) and the fireworks algorithm (FWA), are applied for parametric optimization of USM processes. The optimization performance of these two algorithms is compared with that of other popular population-based algorithms, and the effects of their algorithm parameters on the derived optimal solutions and computational speed are also investigated. It is observed that FWA provides the best optimal results for the considered USM processes.
Optimal IIR filter design using Gravitational Search Algorithm with Wavelet Mutation
S.K. Saha
2015-01-01
Full Text Available This paper presents a global heuristic search optimization technique, which is a hybridized version of the Gravitational Search Algorithm (GSA) and a Wavelet Mutation (WM) strategy. The Gravitational Search Algorithm with Wavelet Mutation (GSAWM) was adopted for the design of an 8th-order infinite impulse response (IIR) filter. GSA is based on the interaction of masses situated in a small isolated world, guided by an approximation of Newton's laws of gravity and motion. Each mass is represented by four parameters, namely, position, active mass, passive mass and inertia mass. The position of the heaviest mass gives the near-optimal solution. For better exploitation in multidimensional search spaces, the WM strategy is applied to randomly selected particles, which enhances the capability of GSA to find better near-optimal solutions. An extensive simulation study of low-pass (LP), high-pass (HP), band-pass (BP) and band-stop (BS) IIR filters unleashes the potential of GSAWM in achieving better cut-off frequency sharpness, smaller pass band and stop band ripples, smaller transition width and higher stop band attenuation with assured stability.
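The GSA mechanics summarized above (fitness-derived masses, pairwise attraction, a decaying gravitational constant) can be sketched as follows. This is a generic minimal GSA on a sphere function, not the IIR filter design problem, and it omits the wavelet mutation step; `g0` and the decay rate are illustrative assumptions.

```python
import math
import random

def gsa(f, dim, n=20, iters=100, g0=100.0, seed=0):
    """Minimal gravitational search algorithm (minimization): agents
    attract one another with 'gravity' proportional to fitness-derived
    masses, and the gravitational constant decays over iterations."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best, best_val = None, float("inf")
    for t in range(iters):
        fit = [f(p) for p in pos]
        b, w = min(fit), max(fit)
        if b < best_val:
            best_val, best = b, pos[fit.index(b)][:]
        # Masses: best agent -> 1, worst -> ~0, then normalized to sum 1.
        m = [(fi - w) / (b - w + 1e-12) + 1e-12 for fi in fit]
        s = sum(m)
        m = [mi / s for mi in m]
        g = g0 * math.exp(-20.0 * t / iters)   # decaying gravity constant
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += (rng.random() * g * m[j]
                               * (pos[j][d] - pos[i][d]) / dist)
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] += vel[i][d]
    return best, best_val

best, val = gsa(lambda x: sum(xi * xi for xi in x), dim=2)
```

In GSAWM, a wavelet-shaped mutation whose amplitude shrinks over iterations would additionally perturb randomly selected agents after this position update.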
Baumes, Laurent A
2006-01-01
One of the main problems in high-throughput research for materials is still the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm is suggested for characterizing the search-space structure, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modeled. Evaluating new algorithms through benchmarks is necessary given the lack of prior proof of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on future machine-learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is investigated.
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Simon Fong
2013-01-01
Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force is computationally infeasible, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
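The wrapper idea above — any metaheuristic driving the search, any classifier's error rate serving as fitness — can be sketched as follows. Here a simple stochastic local search stands in for the metaheuristic, and a hypothetical toy error function stands in for a trained classifier:

```python
import random

def swarm_search(n_features, fitness, iters=300, seed=0):
    """Wrapper-style feature selection sketch: search over binary feature
    masks, scoring each mask with a pluggable fitness (classifier error)."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best, best_fit = mask[:], fitness(mask)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_features)] ^= True   # flip one feature in/out
        f = fitness(cand)
        if f <= best_fit:
            best, best_fit = cand, f
    return best, best_fit

# Toy fitness (hypothetical): features 0-2 are informative; a small penalty
# on extra features mimics classifier error plus a parsimony term.
def toy_error(mask):
    selected = {i for i, b in enumerate(mask) if b}
    informative = {0, 1, 2}
    missed = len(informative - selected)
    extra = len(selected - informative)
    return missed + 0.01 * extra

sel, err = swarm_search(10, toy_error)
print(sel, err)
```

In the paper's setting, `toy_error` would be replaced by cross-validated error of the chosen classifier on the selected feature subset, and the local search by PSO, a genetic algorithm, or another metaheuristic.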
Optimized blind gamma-ray pulsar searches at fixed computing budget
Pletsch, Holger J.; Clark, Colin J.
2014-01-01
The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, such as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.
Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm
V. D. Sulimov
2014-01-01
Modern methods for the optimization investigation of complex systems are based on developing and updating the mathematical models of systems by solving the appropriate inverse problems. The input data needed for a solution are obtained from the analysis of experimentally determined characteristics of a system or process. The sought causal characteristics include the equation coefficients of the object's mathematical models, boundary conditions, etc. The optimization approach is one of the main ways to solve inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnostics as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligent technologies. The increasingly complicated systems observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extremal problems significantly more difficult. Many practical applications impose problem conditions that restrict modeling. As a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating derivatives is difficult and unreliable, which motivates optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extremal problem. When the number of variables is large, stochastic global optimization algorithms are used. As stochastic algorithms yield expensive solutions, this drawback restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search
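The Hooke-Jeeves local search named in the title alternates derivative-free exploratory moves along each coordinate with extrapolating pattern moves, shrinking the step when no move helps. A minimal sketch, using a hypothetical smooth toy objective rather than an inverse-problem criterion function:

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free Hooke-Jeeves pattern search for minimisation."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):          # probe +s then -s along coordinate i
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol and max_iter > 0:
        max_iter -= 1
        new = explore(base, step)
        if f(new) < f(base):
            # Pattern move: extrapolate along the improving direction.
            trial = explore([2 * n - b for n, b in zip(new, base)], step)
            base = trial if f(trial) < f(new) else new
        else:
            step *= shrink             # no improvement: refine the step size
    return base

# Smooth toy objective with its minimum at (3, -2).
quad = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2
xmin = hooke_jeeves(quad, [0.0, 0.0])
print(xmin)
```

In the hybrid scheme the abstract describes, a stochastic global algorithm would propose starting points `x0` and this routine would polish each of them.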
Pattern Nulling of Linear Antenna Arrays Using Backtracking Search Optimization Algorithm
Kerim Guney
2015-01-01
An evolutionary method based on the backtracking search optimization algorithm (BSA) is proposed for linear antenna array pattern synthesis with prescribed nulls at interference directions. Pattern nulling is obtained by controlling only the amplitude, position, and phase of the antenna array elements. BSA is an innovative metaheuristic technique based on an iterative process. Various numerical examples of linear array patterns with the prescribed single, multiple, and wide nulls are given to illustrate the performance and flexibility of BSA. The results obtained by BSA are compared with the results of the following seventeen algorithms: particle swarm optimization (PSO), genetic algorithm (GA), modified touring ant colony algorithm (MTACO), quadratic programming method (QPM), bacterial foraging algorithm (BFA), bees algorithm (BA), clonal selection algorithm (CLONALG), plant growth simulation algorithm (PGSA), tabu search algorithm (TSA), memetic algorithm (MA), nondominated sorting GA-2 (NSGA-2), multiobjective differential evolution (MODE), decomposition with differential evolution (MOEA/D-DE), comprehensive learning PSO (CLPSO), harmony search algorithm (HSA), seeker optimization algorithm (SOA), and mean variance mapping optimization (MVMO). The simulation results show that linear antenna array synthesis using BSA provides low side-lobe levels and deep null levels.
Optimization of fuel cells for BWR based in Tabu modified search
Martin del Campo M, C.; Francois L, J.L.; Palomera P, M.A.
2004-01-01
Advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and reduce the time spent in the optimization process. An algorithm was programmed to create the first solution. To diversify the generation of random numbers required by the TS technique, the Makoto Matsumoto function was used, with excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described, and an application example is presented for the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)
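The tabu search loop described above can be sketched generically: a single-flip neighbourhood, a short-term tabu list of recent moves, and an aspiration criterion. The objective below is a hypothetical stand-in (Hamming distance to a hidden "good" layout), since the real system evaluates cells with the HELIOS simulator:

```python
from collections import deque
import random

def tabu_search(cost, n_bits, tenure=5, iters=200, seed=3):
    """Generic tabu search sketch over binary configurations."""
    rng = random.Random(seed)
    cur = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_c = cur[:], cost(cur)
    tabu = deque(maxlen=tenure)          # recently flipped positions
    for _ in range(iters):
        move, cand, cand_c = None, None, float("inf")
        for i in range(n_bits):
            nb = cur[:]
            nb[i] ^= 1
            c = cost(nb)
            # Aspiration: a tabu move is allowed if it beats the global best.
            if (i not in tabu or c < best_c) and c < cand_c:
                move, cand, cand_c = i, nb, c
        cur = cand                       # always take the best admissible move
        tabu.append(move)
        if cand_c < best_c:
            best, best_c = cand[:], cand_c
    return best, best_c

# Hypothetical toy objective: distance to a hidden target layout.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
sol, c = tabu_search(lambda s: sum(a != b for a, b in zip(s, target)), len(target))
print(sol, c)
```

The staged implementation in the paper layers problem-specific move rules and a fast first-solution heuristic on top of this basic loop.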
Derivation of Optimal Cropping Pattern in Part of Hirakud Command using Cuckoo Search
Rath, Ashutosh; Biswal, Sudarsan; Samantaray, Sandeep; Swain, Prakash Chandra, PROF.
2017-08-01
The economic growth of a nation depends on agriculture, which relies on the available water resources, land, and crops. The supply of water in the right quantity at the right time plays a vital role in increasing agricultural production. Optimal utilization of available resources can be achieved by proper planning and management of water resources projects and the adoption of appropriate technology. In the present work, the command area of the Sambalpur distributary system is taken up for investigation. Adoption of a fixed cropping pattern causes a reduction of yield. The present study aims at developing different crop planning strategies to increase the net benefit from the command area with minimum investment. Optimization models are developed for the Kharif season using LINDO and the Cuckoo Search (CS) algorithm for maximization of the net benefits. In developing the optimization model, factors such as cultivable land, seeds, fertilizers, manpower, and water cost are taken as constraints. The irrigation water needs of major crops and the total water available through canals in the command of the Sambalpur distributary are estimated. LINDO and Cuckoo Search models are formulated and used to derive the optimal cropping pattern yielding maximum net benefits. Net benefits of Rs. 585.0 lakh in the Kharif season are obtained with LINDO and Rs. 596.07 lakh with Cuckoo Search, whereas net benefits of Rs. 447.0 lakh are received by the farmers of the locality under the present cropping pattern.
Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm
Norlina Mohd Sabri
2016-06-01
This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial and error nature. Thus, the gravitational search algorithm (GSA) technique was proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
MTR core loading pattern optimization using burnup dependent group constants
Iqbal Masood
2008-01-01
A diffusion theory based MTR fuel management methodology has been developed for finding superior core loading patterns at any stage for MTR systems, keeping track of burnup of individual fuel assemblies throughout their history. It is based on using burnup dependent group constants obtained by the WIMS-D/4 computer code for standard fuel elements and control fuel elements. This methodology has been implemented in a computer program named BFMTR, which carries out detailed five group diffusion theory calculations using the CITATION code as a subroutine. The core-wide spatial flux and power profiles thus obtained are used for calculating the peak-to-average power and flux-ratios along with the available excess reactivity of the system. The fuel manager can use the BFMTR code for loading pattern optimization for maximizing the excess reactivity, keeping the peak-to-average power as well as flux-ratio within constraints. The results obtained by the BFMTR code have been found to be in good agreement with the corresponding experimental values for the equilibrium core of the Pakistan Research Reactor-1.
Age differences in visual search for compound patterns: long- versus short-range grouping.
Burack, J A; Enns, J T; Iarocci, G; Randolph, B
2000-11-01
Visual search for compound patterns was examined in observers aged 6, 8, 10, and 22 years. The main question was whether age-related improvement in search rate (response time slope over number of items) was different for patterns defined by short- versus long-range spatial relations. Perceptual access to each type of relation was varied by using elements of same contrast (easy to access) or mixed contrast (hard to access). The results showed large improvements with age in search rate for long-range targets; search rate for short-range targets was fairly constant across age. This pattern held regardless of whether perceptual access to a target was easy or hard, supporting the hypothesis that different processes are involved in perceptual grouping at these two levels. The results also point to important links between ontogenic and microgenic change in perception (H. Werner, 1948, 1957).
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper furnishes a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA performance is tested on the IEEE 57 bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, SVC is integrated into CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results compared to the genetic algorithm (GA), both without and with SVC.
Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm
Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI
2006-01-01
This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem including the objective function and constraints and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
Forecasting solar radiation using an optimized hybrid model by Cuckoo Search algorithm
Wang, Jianzhou; Jiang, He; Wu, Yujie; Dong, Yao
2015-01-01
Due to the energy crisis and environmental problems, finding alternative energy sources is very urgent nowadays. Solar energy, one of the clean energies with great potential, has widely attracted the attention of researchers. In this paper, an optimized hybrid method using CS (Cuckoo Search) on the basis of the OP-ELM (Optimally Pruned Extreme Learning Machine), called CS-OP-ELM, is developed to forecast clear-sky and real-sky global horizontal radiation. First, MRSR (Multiresponse Sparse Regression) and LOO-CV (leave-one-out cross-validation) are applied to rank neurons and prune the possibly meaningless neurons of the FFNN (Feed Forward Neural Network), respectively. Then, a Direct strategy and a Direct-Recursive strategy based on OP-ELM are introduced to build a hybrid model. Furthermore, the CS (Cuckoo Search) optimization algorithm is employed to determine the proper weight coefficients. In order to verify the effectiveness of the developed method, hourly solar radiation data from six sites in the United States have been collected, and methods such as ARMA (autoregressive moving average), BP (Back Propagation) neural network and OP-ELM are compared with CS-OP-ELM. Experimental results show the optimized hybrid method CS-OP-ELM has the best forecasting performance. - Highlights: • An optimized hybrid method called CS-OP-ELM is proposed to forecast solar radiation. • CS-OP-ELM adopts multiple-variable datasets as input variables. • Direct and Direct-Recursive strategies are introduced to build a hybrid model. • CS (Cuckoo Search) algorithm is used to determine the optimal weight coefficients. • The proposed method has the best performance compared with other methods
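Cuckoo Search, the optimizer used above to tune the weight coefficients, combines Lévy-flight moves around the best nest with abandonment of the worst nests. A bare-bones 1-D sketch under simplifying assumptions (toy quadratic objective, fixed step scale), not the paper's CS-OP-ELM weighting scheme:

```python
import math, random

def levy(rng, beta=1.5):
    """Mantegna's method for drawing Levy-flight step lengths."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, lo, hi, n=15, pa=0.25, iters=300, seed=7):
    """Bare-bones 1-D Cuckoo Search: Levy flights around the best nest plus
    abandonment of the worst pa-fraction of nests each generation."""
    rng = random.Random(seed)
    nests = [rng.uniform(lo, hi) for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        best = nests[fit.index(min(fit))]
        for i in range(n):
            x = nests[i] + 0.01 * levy(rng) * (nests[i] - best)
            x = min(hi, max(lo, x))
            if f(x) < fit[i]:            # greedy replacement of nest i
                nests[i], fit[i] = x, f(x)
        # Abandon the worst nests; rebuild them at random positions.
        for i in sorted(range(n), key=fit.__getitem__, reverse=True)[: int(pa * n)]:
            nests[i] = rng.uniform(lo, hi)
            fit[i] = f(nests[i])
    return nests[fit.index(min(fit))]

xbest = cuckoo_search(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
print(xbest)
```

In the paper's setting `f` would be the forecasting error of the hybrid model as a function of its weight coefficients.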
Optimal strategy for selling on group-buying website
Xuan Jiang
2014-09-01
Purpose: The purpose of this paper is to help business marketers with offline channels decide whether to sell through group-buying (GB) websites and how to set the online price in coordination with the maximum deal size on GB websites. Design/methodology/approach: Considering the deal structure of GB websites, especially the service fee and minimum deal size required by GB websites, the advertising effect of selling on GB websites, and the interaction between online and offline markets, an analytical model is built to derive the optimal online price and maximum deal size for sellers selling through a GB website. This paper aims to answer four research questions: (1) How should a decision on maximum deal size be made in coordination with the deal price? (2) Will selling on GB websites always be better than staying with the offline channel only? (3) What kind of products are more appropriate to sell on GB websites? (4) How could a GB website operator induce sellers to offer deep discounts in GB deals? Findings and Originality/value: This paper obtains optimal strategies for sellers selling on GB websites and finds that: even if a seller has sufficient capacity, he/she may still set a maximum deal size on the GB deal to take advantage of the Advertisement with Limited Availability (ALA) effect; selling through a GB website may not bring a higher profit than selling only through the offline channel when a GB site has only a small consumer base and/or there is a big overlap between the online and offline markets; low-margin products are more suitable for being sold online with ALA strategies (LP-ALA or HP-ALA) than high-margin ones; a GB site operator could set a small minimum deal size to induce deep discounts from the sellers selling through GB deals. Research limitations/implications: The present study assumed that the demand function is determinate and linear. It will be interesting to study how stochastic demand and a more general demand function affect the optimal
Perceptual grouping and attention in visual search for features and for objects.
Treisman, A
1982-04-01
This article explores the effects of perceptual grouping on search for targets defined by separate features or by conjunction of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.
Dose constraint implementation in AREVA group: an optimization tool
Decobert, Veronique
2008-01-01
AREVA offers customers reliable technology solutions for CO2-free power generation and electricity transmission and distribution. The group has 68,000 employees worldwide, and in its nuclear activities about 33,000 people work with ionizing radiation. Risk management and prevention is one of the ten engagements of AREVA's sustainable development policy: to establish and maintain the highest level of nuclear and occupational safety in all of the group's operations, to preserve public and worker health, and to protect the environment. The implementation of these engagements is founded on a voluntary continuous improvement program, AREVA Way: objectives, common to all entities, are laid down in policy documents. Indicators are defined, and a common reporting method for each indicator and for the results of performance self-assessment is set up. AREVA chose to federate all of its nuclear entities around a common policy, the Nuclear Safety Charter, implemented at the beginning of 2005. This charter sets out principles of organization and action and engagements of transparency. Regarding radiation protection, the Charter reaffirms the engagement to limit worker exposure in the group's installations to a level as low as reasonably achievable, through the implementation of the ALARA principle and a continuous improvement policy. This approach, fundamentally different from simple compliance with imposed limits, radically modifies the dynamics of progress. In engineering activities, the optimization of radiation protection is also integrated into design, taking into account experience feedback from operational activities. This determination of constraints is carried out at all levels of the organization. Thus sustainable development performance indicators, especially those relating to radiation protection, are discussed between the managers in charge of business units and the top managers
Reza Sirjani
2017-09-01
Currently, many wind farms exist throughout the world and, in some cases, supply a significant portion of energy to networks. However, numerous uncertainties remain with respect to the amount of energy generated by wind turbines and other sophisticated operational aspects, such as voltage and reactive power management, which require further development and consideration. To address the problem of poor reactive power compensation in wind farms, optimal capacitor placement has been proposed in existing wind farms as a simple and relatively inexpensive method. However, the use of induction generators, transformers, and additional capacitors represents a potential problem for the harmonics of a system and must therefore be taken into account at wind farms. The optimal location and size of capacitors at the buses of an 80-MW wind farm were determined according to modelled wind speed, system equivalent circuits, and harmonics in order to minimize energy losses, optimize reactive power and reduce management costs. The discrete version of the lightning search algorithm (DLSA) is a powerful and flexible nature-inspired optimization technique that was developed and implemented herein for optimal capacitor placement in wind farms. The obtained results are compared with the results of the genetic algorithm (GA) and the discrete harmony search algorithm (DHSA).
All roads lead to Rome - New search methods for the optimal triangulation problem
Ottosen, T. J.; Vomlel, Jiří
2012-01-01
Vol. 53, No. 9 (2012), pp. 1350-1366. ISSN 0888-613X. R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891. Grant - others: GA MŠk(CZ) 2C06019. Institutional support: RVO:67985556. Keywords: Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph. Subject RIV: BD - Theory of Information. Impact factor: 1.729, year: 2012. http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf
An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization
Lihong Guo
2013-01-01
A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS or FA alone. A top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate fireflies during the update step. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than standard FA and eight other optimization methods.
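The HS/FA combination can be illustrated in one step: fireflies move toward brighter (lower-cost) ones, and the usual random walk is replaced, with the harmony-memory consideration rate, by a pull toward a remembered solution. This is a hedged 1-D toy with assumed parameter values, not the paper's HS/FA (no top-fireflies scheme, simplified mutation):

```python
import math, random

def hs_fa_step(swarm, f, rng, beta0=1.0, gamma=0.1, alpha=0.1, hmcr=0.9):
    """One illustrative HS/FA-style move for a 1-D swarm."""
    fit = [f(x) for x in swarm]
    new = []
    for i, xi in enumerate(swarm):
        x = xi
        for j, xj in enumerate(swarm):
            if fit[j] < fit[i]:        # j is brighter (lower cost): attract
                x += beta0 * math.exp(-gamma * (x - xj) ** 2) * (xj - x)
        if rng.random() < hmcr:        # harmony-memory consideration
            x += alpha * rng.random() * (rng.choice(swarm) - x)
        else:                          # plain randomisation (HS-style)
            x += alpha * rng.uniform(-1.0, 1.0)
        new.append(x)
    return new

rng = random.Random(5)
cost = lambda x: (x + 1.5) ** 2
swarm = [rng.uniform(-3.0, 0.0) for _ in range(12)]
best_c = min(cost(x) for x in swarm)
for t in range(150):
    swarm = hs_fa_step(swarm, cost, rng, alpha=0.1 * 0.98 ** t)
    best_c = min(best_c, min(cost(x) for x in swarm))
print(best_c)
```

The decaying `alpha` mirrors the usual FA practice of shrinking the random-walk scale over iterations.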
Yukai Yao
2015-01-01
We propose an optimized Support Vector Machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are comprehensively applied for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, ROC curves, and related measures are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
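A multilevel grid search scans a coarse grid first and then re-grids a shrinking window around the best point at each level, which is how SVM hyperparameters such as C and gamma are commonly tuned on log scales. A sketch with a hypothetical toy validation score standing in for cross-validated SVM accuracy:

```python
def multilevel_grid_search(score, box, levels=3, per_axis=5):
    """Coarse-to-fine grid search over two parameters.

    box = ((x_lo, x_hi), (y_lo, y_hi)); at each level a per_axis x per_axis
    grid is evaluated, then the search zooms into the best cell's window."""
    (xlo, xhi), (ylo, yhi) = box
    best = None
    for _ in range(levels):
        xs = [xlo + i * (xhi - xlo) / (per_axis - 1) for i in range(per_axis)]
        ys = [ylo + i * (yhi - ylo) / (per_axis - 1) for i in range(per_axis)]
        best = max((score(x, y), x, y) for x in xs for y in ys)
        _, bx, by = best
        dx = (xhi - xlo) / (per_axis - 1)
        dy = (yhi - ylo) / (per_axis - 1)
        xlo, xhi = bx - dx, bx + dx      # zoom into the best cell's window
        ylo, yhi = by - dy, by + dy
    return best

# Hypothetical validation score peaking at (log2 C, log2 gamma) = (3, -5).
score = lambda c, g: -((c - 3) ** 2 + (g + 5) ** 2)
s, c, g = multilevel_grid_search(score, ((-5, 15), (-15, 3)))
print(c, g)
```

Compared with a single fine grid, each level re-uses only 25 evaluations, so three levels cost 75 evaluations instead of the hundreds a uniformly fine grid would need.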
Effects of systematic phase errors on optimized quantum random-walk search algorithm
Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun
2015-01-01
This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)
Haynes R Brian
2004-06-01
Background: Clinical end users of MEDLINE have a difficult time retrieving articles that are both scientifically sound and directly relevant to clinical practice. Search filters have been developed to assist end users in increasing the success of their searches. Many filters have been developed for the literature on therapy and reviews, but little has been done in the area of prognosis. The objective of this study is to determine how well various methodologic textwords, Medical Subject Headings, and their Boolean combinations retrieve methodologically sound literature on the prognosis of health disorders in MEDLINE. Methods: An analytic survey was conducted, comparing hand searches of journals with retrievals from MEDLINE for candidate search terms and combinations. Six research assistants read all issues of 161 journals for the publishing year 2000. All articles were rated using purpose and quality indicators and categorized into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized as 'pass' or 'fail' for methodologic rigor in the areas of prognosis and other clinical topics. Candidate search strategies were developed for prognosis and run in MEDLINE, and the retrievals were compared with the hand-search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results: 12% of studies classified as prognosis met basic criteria for scientific merit for testing clinical applications. Combinations of terms reached peak sensitivities of 90%. Compared with the best single term, multiple terms increased sensitivity for sound studies by 25.2% (absolute increase) and increased specificity, but by a much smaller amount (1.1%), when sensitivity was maximized. Combining terms to optimize both sensitivity and specificity achieved sensitivities and specificities of approximately 83% each. Conclusion: Empirically derived
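The sensitivity, specificity, precision, and accuracy figures reported above all follow from a 2x2 table of filter retrieval versus the hand-search gold standard. A minimal sketch; the counts below are hypothetical, not figures from the study:

```python
def filter_stats(tp, fp, fn, tn):
    """Search-filter performance against the hand-search gold standard."""
    sensitivity = tp / (tp + fn)          # sound studies that were retrieved
    specificity = tn / (tn + fp)          # non-qualifying records excluded
    precision   = tp / (tp + fp)          # retrieved records that are sound
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical counts for one candidate strategy.
sens, spec, prec, acc = filter_stats(tp=90, fp=400, fn=10, tn=9500)
print(sens, spec, prec, acc)
```

Note how a filter can combine high sensitivity and specificity with low precision when sound studies are rare, which is exactly the unbalanced situation the study describes (12% of prognosis studies passing).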
Optimally setting up directed searches for continuous gravitational waves in Advanced LIGO O1 data
Ming, Jing; Papa, Maria Alessandra; Krishnan, Badri; Prix, Reinhard; Beer, Christian; Zhu, Sylvia J.; Eggenstein, Heinz-Bernd; Bock, Oliver; Machenschalk, Bernd
2018-02-01
In this paper we design a search for continuous gravitational waves from three supernova remnants: Vela Jr., Cassiopeia A (Cas A) and G347.3. These systems might harbor rapidly rotating neutron stars emitting quasiperiodic gravitational radiation detectable by the advanced LIGO detectors. Our search is designed to use the volunteer computing project Einstein@Home for a few months and assumes the sensitivity and duty cycles of the advanced LIGO detectors during their first science run. For all three supernova remnants, the sky positions of their central compact objects are well known but the frequency and spin-down rates of the neutron stars are unknown which makes the searches computationally limited. In a previous paper we have proposed a general framework for deciding on what target we should spend computational resources and in what proportion, what frequency and spin-down ranges we should search for every target, and with what search setup. Here we further expand this framework and apply it to design a search directed at detecting continuous gravitational wave signals from the most promising three supernova remnants identified as such in the previous work. Our optimization procedure yields broad frequency and spin-down searches for all three objects, at an unprecedented level of sensitivity: The smallest detectable gravitational wave strain h0 for Cas A is expected to be 2 times smaller than the most sensitive upper limits published to date, and our proposed search, which was set up and ran on the volunteer computing project Einstein@Home, covers a much larger frequency range.
Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments
Kurt Derr; Milos Manic
2009-05-01
Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is the exploration of an unknown environment with the goal of finding one or more targets at unknown locations using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots in locating targets in high-risk environments. The approach was developed and analyzed for single- and multiple-target search with multiple robots, and further enhanced for multi-robot, multi-target search in noisy environments. The experimental results demonstrate how the availability of a radio frequency signal can significantly affect the time robots need to reach a target.
Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi
2017-05-04
Particle Swarm Optimization (PSO) is a popular algorithm that has been widely investigated and applied in many areas. However, the canonical PSO maintains population diversity poorly, which often leads to premature convergence or entrapment in local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal-Best-Position (Pbest) Guidance (GPSO-PG), which maintains population diversity by preserving the diversity of exemplars. On one hand, we adopt a uniform random allocation strategy to assign particles to different groups, and in each group the losers learn from the winner. On the other hand, we employ the personal historical best position of each particle in social learning rather than the current global best particle. In this way, exemplar diversity increases and the influence of the global best particle is eliminated. We test the proposed algorithm on the CEC 2008 and CEC 2010 benchmarks, which concern large-scale optimization problems (LSOPs). Compared with several current peer algorithms, GPSO-PG exhibits competitive ability to maintain population diversity and achieves satisfactory performance on these problems.
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and incorporates it with MODFLOW/MT3DMS to develop a modular groundwater simulation-optimization (SO) framework for the optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy retains all nondominated solutions throughout the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, increasing the diversity of the searched space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of EMOTS, we consider a synthetic groundwater remediation example. The problem formulation consists of two objective functions with continuous decision variables (pumping rates) subject to water quality requirements. In particular, a sensitivity analysis is carried out on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. For both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in the performance metrics of optimality and diversity of nondominated frontiers, with desirable stability and robustness. © 2017, National Ground Water Association.
Ali Wagdy Mohamed
2014-11-01
In this paper, a novel version of the Differential Evolution (DE) algorithm based on a local search mutation coupled with a restart mechanism for solving global numerical optimization problems over continuous space is presented. The proposed algorithm is named Restart Differential Evolution with Local Search Mutation (RDEL). In RDEL, inspired by Particle Swarm Optimization (PSO), a novel local mutation rule based on the positions of the best and the worst individuals in the population of a particular generation is introduced. This local mutation scheme is joined with the basic mutation rule through a linearly decreasing function, and is shown to enhance the local search tendency of basic DE and to speed up convergence. Furthermore, a restart mechanism combining a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme is employed to avoid stagnation and premature convergence. Additionally, an exponentially increasing crossover probability rule and uniform scaling factors are introduced to promote population diversity and to improve the search process, respectively. The performance of RDEL is investigated and compared with basic differential evolution and state-of-the-art parameter-adaptive differential evolution variants. The proposed modifications significantly improve the performance of DE in terms of solution quality, efficiency, and robustness.
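RDEL's specific local mutation and restart rules are not detailed in the abstract; as background, here is a minimal sketch of the classic DE/rand/1/bin scheme that such variants modify. All names and parameter values are illustrative, not taken from the paper.

```python
import random

def de_minimize(f, bounds, np_=20, iters=150, F=0.5, CR=0.9, seed=1):
    """Classic DE/rand/1/bin on box-bounded f (a sketch, not RDEL itself)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            # pick three distinct individuals, all different from i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            trial = pop[i][:]
            jrand = rng.randrange(dim)  # force at least one mutated gene
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    # DE/rand/1 mutation: base vector plus scaled difference
                    trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
            fv = f(trial)
            if fv <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, fv
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

x, fx = de_minimize(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

RDEL replaces part of this mutation with a rule driven by the best and worst individuals and adds a restart trigger; the skeleton of population, mutation, crossover, and greedy selection is the same.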
A. K. M. Foysal Ahmed
2018-03-01
The classical capacitated vehicle routing problem (CVRP) is a popular combinatorial optimization problem in logistics and supply chain management. Although CVRP has drawn the interest of many researchers, no standard way has yet been established to obtain best-known solutions across all the different problem sets. We propose an efficient algorithm, Bilayer Local Search-based Particle Swarm Optimization (BLS-PSO), along with a novel decoding method to solve CVRP. The decoding method is important for relating an encoded particle position to a feasible CVRP solution. In the bilayer local search, one layer of local search is applied to the whole population at every iteration, while the other is applied only to a pool of the best particles generated across generations. These search strategies help BLS-PSO outperform existing proposals, obtaining best-known solutions for most existing benchmark problems within very reasonable computational time. Computational results also show that the proposed algorithm outperforms other PSO-based approaches.
Fast Optimal Replica Placement with Exhaustive Search Using Dynamically Reconfigurable Processor
Hidetoshi Takeshita
2011-01-01
This paper proposes a new replica placement algorithm that expands the exhaustive search limit within reasonable calculation time. It combines a new type of parallel data-flow processor with an architecture tuned for fast calculation. The replica placement problem is to find a replica-server set satisfying service constraints in a content delivery network (CDN). It is derived from the set cover problem, which is known to be NP-hard. Exhaustive search for optimal replica placement is impractical in large-scale networks because calculation time grows with the number of combinations. Heuristic algorithms have been proposed to reduce calculation time, but no heuristic algorithm is guaranteed to find the optimal solution. The proposed algorithm suits parallel processing and pipelined execution and is implemented on DAPDNA-2, a dynamically reconfigurable processor. Experiments show that the proposed algorithm expands the exhaustive search limit by a factor of 18.8 compared to the conventional algorithm running on a von Neumann processor.
Zhiwei Ye
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new adaptive enhancement technique for low-contrast images using a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO). Contrast enhancement is obtained by a global transformation of the input intensities: an incomplete Beta function serves as the transformation function, and a novel criterion measures image quality from three factors, namely threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO maximizes the objective fitness criterion in order to enhance contrast and detail by adapting the parameters of a novel extension of a local enhancement technique. The proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary image enhancement methods based on the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and performs better than the other methods considered.
Scalable unit commitment by memory-bounded ant colony optimization with A* local search
Saber, Ahmed Yousuf; Alshareef, Abdulaziz Mohammed [Department of Electrical and Computer Engineering, King Abdulaziz University, P.O. Box 80204, Jeddah 21589 (Saudi Arabia)]
2008-07-15
Ant colony optimization (ACO) has been applied successfully to many optimization problems. The performance of basic ACO is satisfactory for small problems with moderate dimension and search space. However, as the search space grows exponentially in the large-scale unit commitment problem, basic ACO becomes inapplicable: the pheromone matrix no longer fits within practical time and physical computer-memory limits. Memory-bounded methods prune the least-promising nodes to fit the system in computer memory. The authors therefore propose memory-bounded ant colony optimization (MACO) in this paper for the scalable (no restriction on system size) unit commitment problem. MACO intelligently handles the computer-memory limitation and does not permit the system to grow beyond a bound on memory. In the memory-bounded ACO implementation, an A* heuristic is introduced to increase local search ability, and a probabilistic nearest-neighbour method is applied to estimate pheromone intensity for forgotten values. Finally, benchmark data sets and existing methods are used to show the effectiveness of the proposed method. (author)
Shouheng Tuo
2013-01-01
The harmony search (HS) algorithm is an emerging population-based metaheuristic inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimensional complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain the proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence behaviour are also studied. Experimental results on 31 complex benchmark functions demonstrate that HSTL has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimensional complex optimization problems.
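The HSTL strategies above extend the basic harmony search improvisation loop, which can be sketched as follows. This is a minimal illustration of plain HS, not of HSTL itself; the parameter values are illustrative assumptions.

```python
import random

def harmony_search(f, bounds, hms=10, iters=500, hmcr=0.9, par=0.3, bw=0.1, seed=2):
    """Basic harmony search on box-bounded f (a sketch, not HSTL)."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                # harmony memory consideration: reuse a stored value
                v = hm[rng.randrange(hms)][j]
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)  # pitch adjustment
            else:
                v = rng.uniform(lo, hi)  # random selection
            new.append(min(hi, max(lo, v)))
        fv = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fv < fit[worst]:  # replace the worst harmony in memory
            hm[worst], fit[worst] = new, fv
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

x, fx = harmony_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```

HSTL keeps this improvisation skeleton and layers the teaching-learning and dynamic-parameter strategies on top of it.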
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and blending of cuckoo search and particle swarm optimization (CS-PSO) for low contrast images to enhance image adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors which are threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and performs better than the other methods considered.
Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.
2008-01-01
Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…
Optimal Control Strategy Search Using a Simplest 3-D PWR Xenon Oscillation Simulator
Yoichiro, Shimazu
2004-01-01
Power spatial oscillations due to transient xenon spatial distribution are well known as xenon oscillations in large PWRs. When the reactor size becomes larger than in current designs, even radial oscillations can become divergent. Even if a radial oscillation is convergent, when a control rod malfunction occurs it is necessary to suppress the oscillation in as short a time as possible. In such cases an optimal control strategy is required. Generally speaking, an optimality search based on modern control theory requires extensive calculation to evaluate the state variables, and in the case of control rod malfunctions the xenon oscillation can be three-dimensional, making direct core calculations inevitable. From this point of view a very simple model, a four-point reactor model, has been developed and verified. In this paper, an example procedure and results for an optimal control strategy search are presented. It is shown that there is only one optimal strategy within a half cycle of the oscillation with fixed control strength. It is also shown that a 3-D xenon oscillation introduced by a control rod malfunction cannot be controlled by a single control step, as can be done for axial oscillations. These might be quite strong limitations for the operators. Thus it is recommended that a strategy generator, quick in analysis and easy to use, be installed in a monitoring system or operator guidance system. (author)
Search method optimization technique for thermal design of high power RFQ structure
Sharma, N.K.; Joshi, S.C.
2009-01-01
RRCAT has taken up the development of a 3 MeV RFQ structure for the low-energy part of a 100 MeV H− ion injector linac. The RFQ is a precision-machined resonating structure designed for a high rf duty factor, and its structural stability during high-rf-power operation is an important design issue. The thermal analysis of the RFQ has been performed using the ANSYS finite element analysis software, and various parameters are optimized using the search method optimization technique. This is an effective optimization technique for systems governed by a large number of independent variables: a number of combinations of values of the independent variables are examined, and conclusions are drawn from the magnitude of the objective function at these combinations. Because the objective function improves continuously throughout the course of the search, these methods are very efficient. The method has been employed to optimize various RFQ thermal-design parameters (the independent variables) such as cooling water flow rate, cooling water inlet temperatures, and cavity thickness. The temperature rise within the RFQ structure is the objective function of the thermal design. Using the ANSYS Parametric Design Language (APDL), multiple iterative programs are written and analyses performed to minimize the objective function. The dependency of the objective function on the various independent variables is established, and the optimum values of the parameters are evaluated. The results of the analysis are presented in the paper. (author)
Genetic search for an optimal power flow solution from a high density cluster
Amarnath, R.V. [Hi-Tech College of Engineering and Technology, Hyderabad (India)]; Ramana, N.V. [JNTU College of Engineering, Jagityala (India)]
2008-07-01
This paper proposes a novel method to solve optimal power flow (OPF) problems, based on a genetic algorithm (GA) search from a high density cluster (GAHDC). The algorithm includes three stages: (1) a suboptimal solution is obtained via a conventional analytical method; (2) a high density cluster, consisting of other suboptimal data points from the first stage, is formed using a density-based clustering algorithm; and (3) a genetic algorithm-based search for the exact optimal solution is carried out on the low-population-size, high density cluster. The final optimal solution thoroughly satisfies a well-defined fitness function. A standard IEEE 30-bus test system was considered for the simulation study. Numerical results were presented and compared with the results of other approaches. It was concluded that although there is not much difference in numerical values, the proposed method has the advantage of minimal computational effort and reduced CPU time. As such, the method would be suitable for online applications such as the present optimal power flow problem. 24 refs., 2 tabs., 4 figs.
Optimization of refueling-shuffling scheme in PWR core by random search strategy
Wu Yuan
1991-11-01
A random method for optimizing refueling management in a pressurized water reactor (PWR) core is described. The main purpose of the optimization is to select the 'best' refueling arrangement scheme, the one producing maximum economic benefit under certain imposed conditions. To this end, an effective optimization strategy, a two-stage random search method, was developed. First, the search is made in a manner similar to the stratified sampling technique, and a local optimum is reached by comparison of successive results. Then further random trials are carried out across different strata to seek the global optimum. In general, the method can be used as a practical tool for conventional fuel management schemes; it can also be used in studies on the optimization of low-leakage fuel management. Calculations were performed for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method achieves a satisfactory solution at reasonably low computational cost
Group of Hexagonal Search Patterns for Motion Estimation and Object Tracking
Elazm, A.A.; Mahmoud, I.I; Hashima, S.M.
2010-01-01
This paper presents a group of fast block matching algorithms based on the hexagon search pattern. A new predicted one-point hexagon (POPHEX) algorithm is proposed and compared with other well-known algorithms, for both motion estimation and object tracking. Test video sequences are used to demonstrate the behaviour of the studied algorithms, all of which are implemented in the MATLAB environment. Experimental results show that the proposed algorithm requires fewer search points, although its computational overhead increases slightly due to the prediction procedure.
A dynamic lattice searching method with rotation operation for optimization of large clusters
Wu Xia; Cai Wensheng; Shao Xueguang
2009-01-01
Global optimization of large clusters has been a difficult task, though much effort has been expended and many efficient methods have been proposed. In our work, a rotation operation (RO) is designed to realize the structural transformation from decahedra to icosahedra for the optimization of large clusters, by rotating the atoms below the center atom through a definite angle around the fivefold axis. Based on the RO, a development of the previous dynamic lattice searching with constructed core (DLSc), named DLSc-RO, is presented. In an investigation of the method for the optimization of Lennard-Jones (LJ) clusters, i.e., LJ500, LJ561, LJ600, LJ665-667, LJ670, LJ685, and LJ923, Morse clusters, silver clusters with the Gupta potential, and aluminum clusters with the NP-B potential, it was found that global minima with both icosahedral and decahedral motifs can be obtained, and the method proved to be efficient and universal.
Rabindra Kumar Sahu
2014-09-01
An attempt is made at the effective application of the Gravitational Search Algorithm (GSA) to optimize PI/PIDF controller parameters in Automatic Generation Control (AGC) of interconnected power systems. Initially, a comparison of several conventional objective functions reveals that ITAE yields better system performance. Then the parameters of the GSA technique are properly tuned and the GSA control parameters are proposed. The superiority of the proposed approach is demonstrated by comparing its results with those of some recently published techniques such as Differential Evolution (DE), Bacteria Foraging Optimization Algorithm (BFOA), and Genetic Algorithm (GA). Additionally, a sensitivity analysis demonstrates the robustness of the optimized controller parameters to wide variations in operating loading conditions and in the time constants of the speed governor, turbine, and tie-line power. Finally, the proposed approach is extended to a more realistic power system model that considers physical constraints such as reheat turbine, Generation Rate Constraint (GRC), and governor dead-band nonlinearity.
Lee, Ho Min; Sadollah, Ali
2015-01-01
Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply.
Do Guen Yoo
2015-01-01
Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply.
Algorithm of axial fuel optimization based in progressive steps of turned search
Martin del Campo, C.; Francois, J.L.
2003-01-01
The development of an algorithm for the axial optimization of fuel for boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to steer the search toward better values of one or two parameters, leaving the rest as constraints, and as the optimization stages advance, the fineness of the evaluation of the investigated designs increases. The algorithm has three stages: genetic algorithms are used in the first, and tabu search in the two that follow. The first-stage objective function seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operating cycle without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), with the average enrichment obtained by the best first-stage design among the constraints. The third stage, very similar to the previous one, begins with the design from the previous stage but searches the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The results show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of fuel assemblies. (Author)
Advanced Harmony Search with Ant Colony Optimization for Solving the Traveling Salesman Problem
Ho-Yoeng Yun
2013-01-01
We propose a novel heuristic algorithm based on advanced Harmony Search combined with Ant Colony Optimization (AHS-ACO) to effectively solve the Traveling Salesman Problem (TSP). The TSP is well known as an NP-complete problem whose computational complexity increases exponentially with the number of cities. In our algorithm, Ant Colony Optimization (ACO) is used to search for a local optimum in the solution space, after which Harmony Search is used to escape the local optimum found by the ACO and to move toward a global optimum. Experiments were performed to validate the efficiency of our algorithm through comparison with other algorithms and with the optimum solutions in the TSPLIB. The results indicate that our algorithm is capable of generating the optimum solution for most instances in the TSPLIB; moreover, it found better solutions in two cases (kroB100 and pr144) than the optimum solutions presented in the TSPLIB.
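The ACO stage of such a hybrid builds tours probabilistically from pheromone and heuristic distance information. The following stand-alone sketch shows that tour-construction and pheromone-update loop on a toy instance; the parameters and the test instance are illustrative assumptions, not taken from the paper.

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=3):
    """Minimal ant system for TSP (a sketch, not the AHS-ACO hybrid)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # uniform initial pheromone
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # transition weights ~ pheromone^alpha * (1/distance)^beta
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                acc, nxt = 0.0, weights[-1][0]
                for j, w in weights:  # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        nxt = j
                        break
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit inversely proportional to tour length
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# unit square: the optimal tour is the perimeter, length 4
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 for b in pts] for a in pts]
tour, length = aco_tsp(d)
```

In the hybrid described above, a harmony-search step would perturb the best tours found by this loop to escape local optima.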
Jian Tang
2017-11-01
In this paper, we optimize search and rescue (SAR) in disaster relief through agent-based simulation. We simulate rescue teams' search behaviour with improved truncated Lévy walks, then propose a cooperative rescue plan based on a distributed auction mechanism and illustrate it with the case of landslide disaster relief. The simulation is conducted in three scenarios: “fatal”, “serious”, and “normal”. Compared with the non-cooperative rescue plan, the proposed plan increases victims' relative survival probability by 7-15%, increases the ratio of survivors rescued by 5.3-12.9%, and decreases the average elapsed time for one site to be rescued by 16.6-21.6%. The robustness analysis shows that the search radius affects rescue efficiency significantly, whereas the scope of cooperation does not. The sensitivity analysis shows that two parameters, the time limit for completing rescue operations at one buried site and the maximum turning angle for the next step, both strongly influence rescue efficiency, and that each has an optimal value with respect to rescue efficiency.
Intermittent random walks for an optimal search strategy: one-dimensional case
Oshanin, G; Wio, H S; Lindenberg, K; Burlatsky, S F
2007-01-01
We study the search kinetics of an immobile target by a concentration of randomly moving searchers. The object of the study is to optimize the probability of detection within the constraints of our model. The target is hidden on a one-dimensional lattice in the sense that searchers have no a priori information about where it is, and may detect it only upon encounter. The searchers perform random walks in discrete time n = 0, 1, 2, ..., N, where N is the maximal time the search process is allowed to run. With probability α the searchers step onto a nearest-neighbour site, and with probability (1 - α) they leave the lattice and stay off until they land back on the lattice at a fixed distance L away from the departure point. The random walk is thus intermittent. We calculate the probability P_N that the target remains undetected up to the maximal search time N, and seek to minimize this probability. We find that P_N is a non-monotonic function of α, and show that there is an optimal choice α_opt(N) of α well within the intermittent regime, 0 < α_opt(N) < 1, for which P_N can be orders of magnitude smaller than in the 'pure' random walk cases α = 0 and α = 1
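The intermittent-walk model above lends itself to a direct Monte Carlo estimate of the non-detection probability. The sketch below uses a single searcher on a ring lattice; the ring geometry, single-searcher setting, and all parameter values are simplifying assumptions for illustration, not the paper's exact setup.

```python
import random

def p_not_detected(alpha, L=5, M=50, N=100, trials=2000, seed=4):
    """Monte Carlo estimate of the probability that a single intermittent
    walker on a ring of M sites misses a target at site 0 within N steps."""
    rng = random.Random(seed)
    miss = 0
    for _ in range(trials):
        x = rng.randrange(1, M)  # start anywhere except the target site
        found = False
        for _ in range(N):
            if rng.random() < alpha:
                x = (x + rng.choice((-1, 1))) % M   # nearest-neighbour step
            else:
                x = (x + rng.choice((-L, L))) % M   # off-lattice relocation by L
            if x == 0:
                found = True
                break
        if not found:
            miss += 1
    return miss / trials

# scan alpha and report the minimizer of the estimated non-detection probability
alphas = [0.0, 0.25, 0.5, 0.75, 1.0]
estimates = {a: p_not_detected(a) for a in alphas}
best_alpha = min(estimates, key=estimates.get)
```

A finer scan of α would approximate the optimal choice the paper derives analytically for its own geometry.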
Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.
Wei-Chang Yeh
Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters such that objects within the same cluster are similar and dissimilar to objects belonging to different clusters. Over the past decade, evolutionary algorithms have commonly been used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. The approach combines variable vibrating search (VVS) and a rapid centralized strategy (RCS) for the clustering problem. VVS is an exploitation search scheme that refines the quality of solutions by searching the extreme points near the global best position; RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted on 12 benchmark datasets, and the results are compared with recent works. Statistical analysis indicates that the proposed algorithm is competitive in terms of the quality of solutions.
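Swarm-based clusterers of this kind minimize a fitness over candidate centroid sets. A minimal sketch of the usual sum-of-squared-error criterion follows; this standard criterion is assumed here for illustration, and the paper's exact measure may differ.

```python
def sse(points, centroids):
    """Sum of squared Euclidean distances from each point to its nearest
    centroid -- the usual fitness minimized by swarm-based clustering."""
    total = 0.0
    for p in points:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
    return total

# two tight clumps of points: centroids near the clumps score far better
# than centroids placed between them
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
good = [(0.05, 0.0), (5.05, 5.0)]
bad = [(2.5, 2.5), (2.5, 2.6)]
assert sse(pts, good) < sse(pts, bad)
```

In an algorithm like the one described, each particle encodes a full centroid set and `sse` (or a variant) is the fitness that the swarm updates, VVS, and RCS jointly drive down.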
Smallest-Small-World Cellular Harmony Search for Optimization of Unconstrained Benchmark Problems
Sung Soo Im
2013-01-01
We present a new hybrid method that combines cellular harmony search algorithms with smallest-small-world theory. The harmony search (HS) algorithm is based on the musical performance process that occurs when a musician searches for a better state of harmony, and it has been applied successfully to a wide variety of practical optimization problems. Most previous research has sought to improve the performance of the HS algorithm by changing the pitch adjusting rate and the harmony memory considering rate; there has, however, been a lack of studies that improve performance through the formation of population structures. We therefore propose an improved HS algorithm that uses a cellular automata formation and the topological structure of a smallest-small-world network. The improved HS algorithm has a high clustering coefficient and a short characteristic path length, giving it good exploration and exploitation efficiency. Nine benchmark functions were used to evaluate the performance of the proposed algorithm. Unlike existing improved HS algorithms, the proposed algorithm is expected to gain its improved algorithmic efficiency from the formation of the population structure.
Fast optimization of binary clusters using a novel dynamic lattice searching method
Wu, Xia; Cheng, Wen
2014-01-01
Global optimization of binary clusters has remained a difficult task despite much effort and many efficient methods. To handle the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that BDLS can only be utilized for the optimization of binary clusters of small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on BDLS and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fitted parameters of the Gupta potential.
Optimization of Signal Region for Dark Matter Search at the ATLAS Detector
Yip, Long Sang Kenny
2015-01-01
This report focused on the optimization of the signal region for the search for dark matter produced in proton-proton collisions with final states of a single electron or muon, a minimum of four jets, one or two b-jets, and missing transverse momentum of at least 100 GeV. A brute-force approach was proposed to scan for the optimal signal region in a rectangularly discretized parameter space. Analysis of the leniency of signal regions motivated event-shortlisting and loop-breaking features that allowed efficient optimization of the signal region. With the refined algorithm for the brute-force search, the computation time shrank from an estimated three months to one hour in a test run of a million Monte Carlo simulated events over a densely discretized parameter space of four million signal regions. Further studies could focus on manipulating random numbers and on the interplay between the maximal figure of merit and the lower bound imposed on the background.
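A brute-force signal-region scan of the kind described can be sketched as a nested loop over rectangular cuts. The event encoding and the figure of merit s/sqrt(s+b) below are assumptions for illustration and may differ from the report's exact choices.

```python
import math

def scan_signal_regions(events, met_cuts, njet_cuts):
    """Brute-force scan over rectangular cuts. Each event is a tuple
    (met, njets, is_signal, weight); the figure of merit s / sqrt(s + b)
    is a common approximation, not necessarily the report's metric."""
    best_cuts, best_fom = None, -1.0
    for met_cut in met_cuts:
        for njet_cut in njet_cuts:
            s = b = 0.0
            for met, njets, is_signal, weight in events:
                if met >= met_cut and njets >= njet_cut:
                    if is_signal:
                        s += weight
                    else:
                        b += weight
            if s + b > 0.0:
                fom = s / math.sqrt(s + b)
                if fom > best_fom:
                    best_cuts, best_fom = (met_cut, njet_cut), fom
    return best_cuts, best_fom
```

The report's speed-ups amount to shortlisting events that can pass any cut and breaking out of the loops early when a region cannot improve the figure of merit.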
Abdul Rani, Khairul Najmy; Abdulmalek, Mohamedfareq; A Rahim, Hasliza; Siew Chin, Neoh; Abd Wahab, Alawiyah
2017-04-20
This research proposes various versions of the modified cuckoo search (MCS) metaheuristic algorithm deploying the strength Pareto evolutionary algorithm (SPEA) multiobjective (MO) optimization technique in rectangular array geometry synthesis. Precisely, the MCS algorithm is proposed by incorporating the Roulette wheel selection operator to choose the initial host nests (individuals) that give better results, an adaptive inertia weight to control the position exploration of the potential best host nests (solutions), and a dynamic discovery rate to manage the fraction probability of finding the best host nests in 3-dimensional search space. In addition, the MCS algorithm is hybridized with the particle swarm optimization (PSO) and hill climbing (HC) stochastic techniques along with the standard SPEA, forming MCSPSOSPEA and MCSHCSPEA, respectively. All the proposed MCS-based algorithms are examined to perform MO optimization on Zitzler-Deb-Thiele's (ZDT's) test functions. Pareto optimum trade-offs are performed to generate a set of three non-dominated solutions, which are the locations, excitation amplitudes, and excitation phases of the array elements, respectively. Overall, simulations demonstrate that the proposed MCSPSOSPEA outperforms the other compatible competitors in gaining a high antenna directivity, small half-power beamwidth (HPBW), low average side lobe level (SLL) suppression, and/or significant predefined nulls mitigation, simultaneously.
R. A. Swief
2018-01-01
Full Text Available This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Various reliability indices, such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index, are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and determine the size of the distributed generators in radial feeders for reliability improvement. Distributed generation affects reliability, system power losses, and the voltage profile. The volatile behaviour of both photovoltaic cells and wind turbine farms affects the values and the selection of the protection devices and the allocation of distributed generators. To improve reliability, reconfiguration takes place before installing both the protection devices and the distributed generators. Assessment of consumer power system reliability is a vital part of distribution system behaviour and development. The distribution system reliability calculation relies on probabilistic reliability indices, which can predict the interruption profile of a distribution system based on the volatile behaviour of the added generators and the load behaviour. The validity of the proposed algorithm has been tested using a standard IEEE 69-bus system.
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.
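The core move of the hybrid, a tabu-restricted pairwise swap on the assignment permutation, can be sketched as follows. This is a generic tabu search for QAP; the tenure, aspiration rule, and stopping criterion are illustrative choices, not the paper's exact settings.

```python
import itertools, random

def qap_cost(perm, flow, dist):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_search_qap(flow, dist, iters=200, tenure=7, seed=0):
    """Swap-neighbourhood tabu search: take the best non-tabu pair swap each
    step; recent swaps stay forbidden for `tenure` iterations, with the usual
    aspiration rule (a tabu swap is allowed if it beats the best cost so far)."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_cost = perm[:], qap_cost(perm, flow, dist)
    tabu = {}
    for it in range(iters):
        move, move_cost = None, float("inf")
        for i, j in itertools.combinations(range(n), 2):
            perm[i], perm[j] = perm[j], perm[i]      # try the swap
            c = qap_cost(perm, flow, dist)
            perm[i], perm[j] = perm[j], perm[i]      # undo it
            if (tabu.get((i, j), -1) < it or c < best_cost) and c < move_cost:
                move, move_cost = (i, j), c
        if move is None:
            break                                     # every move tabu: stop
        i, j = move
        perm[i], perm[j] = perm[j], perm[i]
        tabu[(i, j)] = it + tenure
        if move_cost < best_cost:
            best, best_cost = perm[:], move_cost
    return best, best_cost
```

In the hybrid of the abstract, this local procedure replaces BBO's mutation operator, while migration between habitats supplies the diversification.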
Evolution of optimal Lévy-flight strategies in human mental searches
Radicchi, Filippo; Baronchelli, Andrea
2012-06-01
Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PLoS ONE 7, e29910 (2012)] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.
Emergence of an optimal search strategy from a simple random walk.
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-09-06
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment because of their super-diffusion properties and power-law-distributed step lengths. Here, starting from a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration can be achieved if an agent alters its randomly determined next step and the rule controlling its random movement based on its own past directional moves. We showed that our algorithm led to an effective food-searching performance compared with the simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of the uniform step lengths.
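For contrast with the paper's fixed-step walk, a Lévy-like walk with power-law step lengths can be sampled by inverse transform. This is a minimal sketch; the exponent convention p(l) ~ l^(-mu) with minimum step x_min is one common choice, not taken from the paper.

```python
import math, random

def levy_step(mu, rng, x_min=1.0):
    """Draw a step length from the power law p(l) ~ l^(-mu), l >= x_min,
    by inverse-transform sampling (valid for mu > 1)."""
    u = rng.random()
    return x_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

def walk(n_steps, mu, seed=0):
    """2-D Lévy-like walk: uniform random heading, power-law step length."""
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        step = levy_step(mu, rng)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
    return x, y
```

The paper's point is that similar super-diffusive spreading can emerge without this heavy-tailed step distribution, from direction-rule adaptation alone.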
Greedy search for radial fuel optimization; Búsqueda Greedy para optimización radial de combustible
Ortiz, J. J.; Castillo, J. A. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico); Pelta, D. A. [Universidad de Granada, ETS Ingenieria Informatica y Telecomunicaciones, C/Daniel Saucedo Aranda s/n, 18071 Granada (Spain)]. e-mail: jjortiz@nuclear.inin.mx
2008-07-01
In this work, a greedy search algorithm for the optimization of fuel cells in BWR reactors is presented. As a first phase, a sensitivity study of the cell's Local Power Peaking Factor (FPPL) was carried out as a function of exchanging the contents of two fuel rods. In this way it could be established that when the exchanged rods do not contain gadolinium, only small changes occur in the value of the cell's FPPL. This knowledge was later applied in the greedy search to optimize the fuel cell: exchanges of rods with gadolinium are taken as the global search mechanism, and exchanges of rods without gadolinium as the local search method. The work used a cell of 10x10 rods with 2 circular water channels in its center. From a given inventory of uranium enrichments and gadolinium concentrations, and a known enrichment distribution, the technique finds good solutions that minimize the FPPL while maintaining the neutron multiplication factor within an appropriate range of values. The cells were placed in the lower part of the assemblies of a reload batch for an 18-month cycle. The FPPL values of the resulting cells are similar to or smaller than those of the original cell, with core behaviours also comparable to those obtained with the original cell. The cells were evaluated with the CASMO-IV transport code, and the core was evaluated by means of the SIMULATE-3 core simulator. (Author)
Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure
Bogani, C.; Gasparo, M. G.; Papini, A.
2009-07-01
We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
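A basic unconstrained pattern search with coordinate poll directions and step halving, the skeleton that GPS methods refine, might look like this. The conforming poll directions that are the paper's key contribution are not implemented here; this is the generic scheme only.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Coordinate-direction pattern search: poll +/- step along each axis;
    on a successful poll, move; on failure, halve the mesh size.
    Works on nonsmooth objectives since no derivatives are used."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x[:]
                trial[d] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5   # mesh refinement
    return x, fx
```

On the class of problems in the paper, the axis directions would be replaced by directions conforming to the known hyperplanes of nondifferentiability near the current iterate.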
Charles Tatkeu
2008-12-01
Full Text Available We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The performance of the proposed algorithm is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
Vimal Savsani
2017-01-01
Full Text Available Most modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named the multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approaches of the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving diversity among the optimal set of solutions, respectively. Its capability of yielding a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half-car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and further comparison of the extreme points of the obtained Pareto fronts reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
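The elitist nondominated sorting that MOHTS borrows from NSGA-II starts from Pareto dominance. A minimal sketch of extracting the first nondomination level for a minimization problem (function names are illustrative):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """First nondomination level: points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

NSGA-II repeats this peeling to assign ranks, then breaks ties within a rank by crowding distance to keep the front spread out.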
Fuel loading and control rod patterns optimization in a BWR using tabu search
Castillo, Alejandro; Ortiz, Juan Jose; Montes, Jose Luis; Perusquia, Raul
2007-01-01
This paper presents the QuinalliBT system, a new approach to solving the fuel loading and control rod pattern optimization problems in a coupled way. This system involves three different optimization stages: in the first one, a seed fuel loading using the Haling principle is designed; in the second stage, the corresponding control rod pattern for the previous fuel loading is obtained; finally, in the last stage, a new fuel loading is created, starting from the previous fuel loading and using the corresponding set of optimized control rod patterns. For each stage, a different objective function is considered. In order to obtain the decision parameters used in those functions, the CM-PRESTO 3D steady-state reactor core simulator was used. The second and third stages are repeated until an appropriate fuel loading and its control rod pattern are obtained, or a stop criterion is reached. In all stages, the tabu search optimization technique was used. The QuinalliBT system was tested and applied to a real BWR operation cycle. It was found that the value of k-eff obtained by QuinalliBT was 0.0024 Δk/k greater than that of the reference cycle.
Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.
Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong
2017-08-09
For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., an analytical model of the objective function is assumed to be unknown), and they attempt to learn an appropriate variable grouping that allows for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas provide rich information which can then be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "-", "*", "/" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. With FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem.
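Once the formula's additive terms and their variable sets are known, grouping reduces to merging sets of interacting variables, for example with union-find. The term encoding below is an illustrative simplification of the white-box idea, not the paper's FBG algorithm.

```python
def group_variables(terms, n_vars):
    """Union-find grouping: variables appearing together in the same
    nonseparable additive term are merged into one subcomponent.
    `terms` lists the variable-index sets of the objective's additive terms."""
    parent = list(range(n_vars))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for term in terms:
        term = list(term)
        for v in term[1:]:
            parent[find(v)] = find(term[0])  # union with the term's first var

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())
```

For f(x) = x0*x1 + x2^2 + x3*x4 + x1*x5 this yields the three non-interacting subcomponents {x0, x1, x5}, {x2}, {x3, x4}, each of which a cooperative coevolution framework can optimize separately.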
Akhtar, Mahmuda; Hannan, M A; Begum, R A; Basri, Hassan; Scavino, Edgar
2017-03-01
Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social ones. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency, and CO2 emissions. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost, and CO2 emissions by 50%, 47.77%, and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts.
Optimal search strategies for detecting cost and economic studies in EMBASE
Haynes R Brian
2006-06-01
Full Text Available Abstract Background Economic evaluations in the medical literature compare competing diagnosis or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health cost and economics studies from EMBASE. Methods We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics versus other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, and the retrievals were compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results Combinations of search terms for detecting both cost and economic studies attained levels of 100% sensitivity, with specificity levels of 92.9% and 92.3%, respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies increased sensitivity by 2.2% over the single term, at a slight decrease in specificity of 0.9%. The maximized combination of terms
Ohtsuki, Yukiyoshi
2010-01-01
In this paper, molecular quantum computation is numerically studied with the quantum search algorithm (Grover's algorithm) by means of optimal control simulation. Qubits are implemented in the vibronic states of I2, while gate operations are realized by optimally designed laser pulses. The methodological aspects of the simulation are discussed in detail. We show that the algorithm for solving a gate pulse-design problem has the same mathematical form as a state-to-state control problem in the density matrix formalism, which provides monotonically convergent algorithms as an alternative to the Krotov method. The sequential irradiation of separately designed gate pulses leads to the population distribution predicted by Grover's algorithm. The computational accuracy is reduced by the imperfect quality of the pulse design and by the electronic decoherence processes, which are modeled by the non-Markovian master equation. However, as long as we focus on the population distribution of the vibronic qubits, we can find a target state with high probability without introducing error-correction processes during the computation. A generalized gate pulse-design scheme that explicitly includes decoherence effects is outlined, in which we propose a new objective functional together with a solution algorithm that guarantees monotonic convergence.
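The target population distribution of Grover's algorithm can be checked classically by tracking just two amplitudes, since all unmarked states stay symmetric under the oracle and diffusion steps. This sketch illustrates the algorithm itself with a single marked state, not the paper's pulse-design simulation.

```python
import math

def grover_probability(n_qubits, iterations):
    """Classical bookkeeping of Grover's algorithm with one marked state:
    track the marked amplitude a and the common unmarked amplitude b through
    oracle phase flip + inversion about the mean."""
    n = 2 ** n_qubits
    a = b = 1.0 / math.sqrt(n)  # uniform superposition over all n states
    for _ in range(iterations):
        a = -a                   # oracle flips the phase of the marked state
        mean = (a + (n - 1) * b) / n
        a, b = 2.0 * mean - a, 2.0 * mean - b   # diffusion (inversion about mean)
    return a * a                 # probability of measuring the marked state

# near-optimal iteration count ~ (pi/4) * sqrt(N); here N = 2^4 = 16
k = round(math.pi / 4.0 * math.sqrt(2 ** 4))
```

For four qubits, three iterations already put the measurement probability of the marked state above 95%, which is the distribution the optimally designed gate pulses are meant to reproduce.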
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, band detection accuracy is limited by a detection algorithm that cannot adapt to variations in the input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof of concept demonstrating MCTS-UCB as a strategy for optimizing general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (Android) and the Apple App Store (iOS).
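The UCB rule at the heart of MCTS-UCB balances exploitation (mean reward of a pipeline choice) against exploration (how rarely it has been tried). A minimal selection step follows; the exploration constant c = sqrt(2) is the conventional UCB1 choice, not necessarily GelApp's setting.

```python
import math

def ucb1_select(counts, values, c=math.sqrt(2)):
    """UCB1 child selection: pick the child maximizing
    mean value + c * sqrt(ln(total visits) / child visits).
    Unvisited children are expanded first."""
    total = sum(counts)
    best, best_score = None, float("-inf")
    for i, (n, v) in enumerate(zip(counts, values)):
        if n == 0:
            return i                      # always try an unvisited child first
        score = v / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

In the pipeline-search setting, each child is a candidate image-processing operation and the value is the segmentation quality achieved by rollouts through it.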
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms.
Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups
Cramer, Ronald; Fehr, Serge
2002-01-01
A black-box secret sharing scheme for the threshold access structure T_{t,n} is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field) in that the distribution matrix and reconstruction vectors are defined over ℤ and are designed independently of the group G from which the secret and the shares are sampled. This means that perfect completeness and perfect privacy are guaranteed regardless of which group G is chosen. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given T_{t,n}, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players n is minimal. Such schemes are relevant for instance in the context of distributed cryptosystems based on groups with secret or hard-to-compute group order...
Optimizing Earth Data Search Ranking using Deep Learning and Real-time User Behaviour
Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.; Greguska, F. R., III
2017-12-01
Finding Earth science data has been a challenging problem given both the quantity of data available and the heterogeneity of the data across a wide variety of domains. Current search engines in most geospatial data portals tend to induce end users to focus on a single data characteristic dimension (e.g., term frequency-inverse document frequency (TF-IDF) score, popularity, release date, etc.). This approach largely fails to account for users' multidimensional preferences for geospatial data, and hence may result in a less than optimal user experience in discovering the most applicable dataset out of a vast range of available datasets. As users interact with search engines, a wealth of information accumulates in the log files. Compared with explicit feedback data, information that can be derived or extracted from log files is virtually free and substantially more timely. In this dissertation, I propose an online deep learning framework that can quickly update the learning function based on real-time user clickstream data. The contributions of this framework include 1) a log processor that can ingest, process, and create training data from web logs in real time; 2) a query understanding module to better interpret users' search intent using web log processing results and metadata; 3) a feature extractor that identifies ranking features representing users' multidimensional interests in geospatial data; and 4) a deep-learning-based ranking algorithm that can be trained incrementally using user behavior data. The search ranking results will be evaluated using precision at K and normalized discounted cumulative gain (NDCG).
Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.
Carr, V J; Dewis, S A; Lewin, T J
1998-07-27
This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.
Lin, Chaung; Hung, Shao-Chun
2013-01-01
Highlights: • An automatic multi-cycle core reload design tool, which searches the fresh fuel assembly composition, is developed. • The search method adopts particle swarm optimization and local search. • The design objectives are to achieve the required cycle energy and minimum fuel cost while satisfying the constraints. • The constraints include the hot zero power moderator temperature coefficient and the hot channel factor. - Abstract: An automatic multi-cycle core reload design tool, which searches the fresh fuel assembly composition, is developed using particle swarm optimization and local search. The local search uses heuristic rules to perturb the current search result slightly so that it can be improved. The composition of the fresh fuel assemblies should provide the required cycle energy and satisfy the constraints, such as the hot zero power moderator temperature coefficient and the hot channel factor. Instead of designing a loading pattern for each fuel assembly composition during the search process, two fixed loading patterns are used to calculate the core status, and the better fitness function value is used in the search. The fitness function contains terms that reflect the design objectives, such as cycle energy, constraints, and fuel cost. The results show that the developed tool can achieve the desired objectives.
Kota, Sujatha; Padmanabhuni, Venkata Nageswara Rao; Budda, Kishor; K, Sruthi
2018-05-01
Elliptic Curve Cryptography (ECC) uses two keys, a private key and a public key, and is considered a public key cryptographic algorithm that is used both for authentication of a person and for confidentiality of data. One of the keys is used in encryption and the other in decryption, depending on usage. For authentication, the private key is used in encryption by the user and the public key is used to identify the user. Similarly, for confidentiality, the sender encrypts with the private key and the public key is used to decrypt the message. Choosing the private key is always an issue in all public key cryptographic algorithms such as RSA and ECC. If tiny values are chosen at random, the security of the complete algorithm becomes an issue. Since the public key is computed from the private key, keys that are not chosen optimally can generate points at infinity. The proposed Modified Elliptic Curve Cryptography offers a choice between two selection methods: the first uses Particle Swarm Optimization and the second uses the Cuckoo Search Algorithm for randomly choosing the values. The proposed algorithms are developed and tested using a sample database, and both are found to be secure and reliable. The test results prove that the private key is chosen optimally, neither repetitive nor tiny, and that the public key computations will not reach infinity.
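The abstract's concern, that a carelessly chosen private key can produce the point at infinity, is easiest to see on a toy curve. Below is a sketch of point addition and double-and-add scalar multiplication on the small textbook curve y² = x³ + 2x + 2 over GF(17) (a standard teaching example, not this paper's implementation; real systems use standardized curves and cryptographic random number generators):

```python
# Toy elliptic curve y^2 = x^3 + A*x + B over GF(P_MOD); NOT cryptographically sized.
P_MOD, A, B = 17, 2, 2
INF = None  # the point at infinity (group identity)

def ec_add(p1, p2):
    """Add two points on the curve (handles doubling and the identity)."""
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF  # p2 is the inverse of p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Compute k*point by double-and-add; k plays the role of the private key."""
    result = INF
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (5, 1)  # generator; its cyclic subgroup on this curve has order 19
public_key = scalar_mult(7, G)  # private key 7 chosen for illustration only
```

Multiplying G by the group order (19 here) lands on the point at infinity, which is the degenerate "infinity value" the abstract refers to when private keys are chosen badly.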
System modelling and online optimal management of MicroGrid using Mesh Adaptive Direct Search
Mohamed, Faisal A. [Department of Electrical Engineering, Omar Al-Mukhtar University, P.O. Box 919, El-Bieda (Libya); Koivo, Heikki N. [Department of Automation and Systems Technology, Helsinki University of Technology, P.O. Box 5500, FIN-02015 HUT (Finland)
2010-06-15
This paper presents a generalized formulation to determine the optimal operating strategy and cost optimization scheme for a MicroGrid. Prior to the optimization of the MicroGrid itself, models for the system components are determined using real data. The proposed cost function takes into consideration the costs of the emissions (NOx, SO2, and CO2), start-up costs, as well as the operation and maintenance costs. A daily income and outgo from sold or purchased power is also added. The MicroGrid considered in this paper consists of a wind turbine, a micro turbine, a diesel generator, a photovoltaic array, a fuel cell, and a battery storage. In this work, the Mesh Adaptive Direct Search (MADS) algorithm is used to minimize the cost function of the system while constraining it to meet the customer demand and safety of the system. In comparison with previously proposed techniques, a significant reduction is obtained.
Near-optimal quantum circuit for Grover's unstructured search using a transverse field
Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui
2017-06-01
Inspired by a class of algorithms proposed by Farhi et al. (arXiv:1411.4028), namely, the quantum approximate optimization algorithm (QAOA), we present a circuit-based quantum algorithm to search for a needle in a haystack, obtaining the same quadratic speedup achieved by Grover's original algorithm. In our algorithm, the problem Hamiltonian (oracle) and a transverse field are applied alternately to the system in a periodic manner. We introduce a technique, based on spin-coherent states, to analyze the composite unitary in a single period. This composite unitary drives a closed transition between two states that have high degrees of overlap with the initial state and the target state, respectively. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T ≃ (π/(2√2))√N. Our algorithm is a QAOA circuit that demonstrates a quantum advantage with a large number of iterations that is not derived from Trotterization of an adiabatic quantum optimization (AQO) algorithm. It also suggests that the analysis required to understand QAOA circuits involves a very different process from estimating the energy gap of a Hamiltonian in AQO.
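For context, the abstract's query count sits a constant factor above Grover's; the Grover value below is the standard textbook figure, added here for comparison:

```latex
% Grover's original algorithm vs. the QAOA-style circuit of this abstract
T_{\mathrm{Grover}} \;\approx\; \frac{\pi}{4}\sqrt{N},
\qquad
T_{\mathrm{QAOA}} \;\simeq\; \frac{\pi}{2\sqrt{2}}\,\sqrt{N}
\;=\; \sqrt{2}\, T_{\mathrm{Grover}} .
```

So the circuit is "nearly optimal" in the sense of matching the Θ(√N) scaling while paying a factor of √2 in the constant.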
Yuliang Su
2015-04-01
A turning machine tool is a new type of machine tool that is equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of a turning machine tool increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also scheduled with multiple objectives such as minimizing machining cost and maximizing utilization of the turning machine tool. To solve this problem, a hybrid genetic algorithm was proposed to generate optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on data structures was developed to represent process plans digitally in order to form the solution space. In addition, a local search approach for optimizing the assignments of available turrets is added to incorporate scheduling with process planning. A real-world case is used to prove that the proposed approach can avoid infeasible solutions and effectively generate a global optimal process plan.
Shouheng Tuo
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both of them have shown outstanding performance for solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary. HS has strong global exploration power but low convergence speed. Conversely, TLBO has much faster convergence speed but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and show better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.
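The abstract describes the self-adaptive selection strategy only at a high level. One common way to realize such a strategy (an assumption for illustration, not necessarily the paper's exact rule) is to pick between the two operators with probabilities updated from their recent success counts:

```python
import random

class OperatorSelector:
    """Pick 'HS' or 'TLBO' with probability proportional to past success."""

    def __init__(self):
        # Laplace-smoothed success counts so neither operator starts at zero.
        self.success = {"HS": 1.0, "TLBO": 1.0}

    def pick(self, rng=random.random):
        p_hs = self.success["HS"] / (self.success["HS"] + self.success["TLBO"])
        return "HS" if rng() < p_hs else "TLBO"

    def reward(self, op, improved):
        # Reinforce whichever operator improved the best-so-far solution.
        if improved:
            self.success[op] += 1.0

selector = OperatorSelector()
op = selector.pick()            # initially a 50/50 choice
selector.reward(op, improved=True)
```

In a full HSTLBO loop, `pick()` would decide whether the next generation runs an HS improvisation or a TLBO teaching/learning phase, and `reward()` would be called after evaluating the offspring.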
Optimizing heliostat positions with local search metaheuristics using a ray tracing optical model
Reinholz, Andreas; Husenbeth, Christof; Schwarzbözl, Peter; Buck, Reiner
2017-06-01
The life cycle costs of solar tower power plants are mainly determined by the investment costs of their construction. A significant part of these investment costs goes into the heliostat field. Therefore, an optimized placement of the heliostats that gains the maximal annual power production has a direct impact on the ratio of life cycle costs to revenue. We present a two-level local search method implemented in MATLAB utilizing the Monte Carlo ray tracing software STRAL [1] for the evaluation of the annual power output for a specific weighted annual time scheme. The algorithm was applied to a solar tower power plant (PS10) with 624 heliostats. Compared to the former work of Buck [2], we were able to improve both the runtime of the algorithm and the quality of the output solutions significantly. Using the same environment for both algorithms, we were able to reach Buck's best solution with a speed-up factor of about 20.
Saifullah Khalid
2016-09-01
Three conventional control techniques for extracting reference currents for shunt active power filters, namely constant instantaneous power control, sinusoidal current control, and the synchronous reference frame technique, have been optimized using Fuzzy Logic control and the Adaptive Tabu Search algorithm, and their performances have been compared. A critical comparison of the compensation ability of the different control strategies, based on THD and speed, is carried out, and suggestions are given for selecting the technique to be used. The simulated results using a MATLAB model are presented, and they clearly prove the value of the proposed control method for the aircraft shunt APF. The waveforms observed after the application of the filter keep the harmonics within limits, and the power quality is improved.
Anton Güntsch
2009-09-01
Today’s specimen and observation data portals lack a flexible mechanism able to link up thesaurus-enabled data sources, such as taxonomic checklist databases, and expand user queries to related terms, significantly enhancing result sets. The TOQE system (Thesaurus Optimized Query Expander) is a REST-like XML web service implemented in Python and designed for this purpose. Acting as an interface between portals and thesauri, TOQE allows the implementation of specialized portal systems with a set of thesauri supporting its specific focus. It is both easy to use for portal programmers and easy to configure for thesaurus database holders who want to expose their system as a service for query expansions. Currently, TOQE is used in four specimen and observation data portals. The documentation is available from http://search.biocase.org/toqe/.
An opposition-based harmony search algorithm for engineering optimization problems
Abhik Banerjee
2014-03-01
Harmony search (HS) is a derivative-free real parameter optimization algorithm. It draws inspiration from the musical improvisation process of searching for a perfect state of harmony. The proposed opposition-based HS (OHS) of the present work employs opposition-based learning for harmony memory initialization and also for generation jumping. The concept of the opposite number is utilized in OHS to improve the convergence rate of the HS algorithm. The potential of the proposed algorithm is assessed by means of an extensive comparative study of the numerical results on sixteen benchmark test functions. Additionally, the effectiveness of the proposed algorithm is tested for reactive power compensation of an autonomous power system. For real-time reactive power compensation of the studied model, Takagi-Sugeno fuzzy logic (TSFL) is employed. Time-domain simulation reveals that the proposed OHS-TSFL yields on-line, off-nominal model parameters, resulting in a real-time incremental change in the terminal voltage response profile.
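The opposite-number idea at the heart of OHS is simple: for a variable x on [a, b], its opposite is a + b − x, and opposition-based initialization keeps the fitter of each random/opposite pair. A minimal sketch (the sphere function stands in for the paper's benchmark objectives):

```python
import random

def opposite(x, lo, hi):
    """Opposite number of x on the interval [lo, hi]."""
    return lo + hi - x

def opposition_init(fitness, size, lo, hi, rng=random.uniform):
    """Build an initial harmony memory from random points and their opposites,
    keeping whichever member of each pair has the better (lower) fitness."""
    memory = []
    for _ in range(size):
        x = rng(lo, hi)
        xo = opposite(x, lo, hi)
        memory.append(min(x, xo, key=fitness))
    return memory

# Example: minimize f(x) = x^2 on [-5, 5]; opposites double the initial coverage.
hm = opposition_init(lambda x: x * x, size=10, lo=-5.0, hi=5.0)
```

The same trick drives "generation jumping": occasionally replacing the population with the fitter half of current solutions and their opposites, which is what speeds up convergence.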
Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer
Mauro Castelli
2015-01-01
Energy consumption forecasting (ECF) is an important policy issue in today’s economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce accurate forecasts also on unseen data.
Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)
2013-02-15
Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► The local search procedure was added to improve the result of the PSO algorithm. ► The results show that the obtained CRP is as good as that of the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more random than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in the previous work. In addition, a local search procedure was used to make improvements after PSO, by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin would satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, would be acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.
Braithwaite, Jason J.; Humphreys, Glyn W.; Hulleman, Johan; Watson, Derrick G.
2007-01-01
The authors report 4 experiments that examined color grouping and negative carryover effects in preview search via a probe detection task (J. J. Braithwaite, G. W. Humphreys, & J. Hodsoll, 2003). In Experiment 1, there was evidence of a negative color carryover from the preview to new items, using both search and probe detection measures. There…
Improving Search Strategies of Auditors – A Focus Group on Reflection Interventions
Fessl, Angela; Pammer, Viktoria; Wiese, Michael; Thalmann, Stefan
2017-01-01
Financial auditors routinely search internal as well as public knowledge bases as part of the auditing process. Efficient search strategies are crucial for knowledge workers in general and for auditors in particular. Modern search technology evolves quickly, and features beyond keyword search, like faceted search or visual overviews of knowledge bases such as graph visualisations, emerge. It is therefore desirable for auditors to learn about new innovations and to explore and experiment with such...
Ambush frequency should increase over time during optimal predator search for prey
Alpern, Steve; Fokkink, Robbert; Timmer, Marco; Casas, Jérôme
2011-01-01
We advance and apply the mathematical theory of search games to model the problem faced by a predator searching for prey. Two search modes are available: ambush and cruising search. Some species can adopt either mode, with their choice at a given time traditionally explained in terms of varying habitat and physiological conditions. We present an additional explanation of the observed predator alternation between these search modes, which is based on the dynamical nature of the search game the...
Dharmbir Prasad
2016-03-01
In this paper, the symbiotic organisms search (SOS) algorithm is proposed for the solution of the optimal power flow (OPF) problem of a power system equipped with flexible AC transmission systems (FACTS) devices. Inspired by the interactions between organisms in an ecosystem, SOS is a recent population-based algorithm which does not require any algorithm-specific control parameters, unlike other algorithms. The performance of the proposed SOS algorithm is tested on the modified IEEE-30 bus and IEEE-57 bus test systems incorporating two types of FACTS devices, namely the thyristor controlled series capacitor and the thyristor controlled phase shifter, at fixed locations. The OPF problem of the present work is formulated with four different objective functions, viz. (a) fuel cost minimization, (b) transmission active power loss minimization, (c) emission reduction, and (d) minimization of combined economic and environmental cost. The simulation results exhibit the potential of the proposed SOS algorithm and demonstrate its effectiveness for solving the OPF problem of a power system incorporating FACTS devices over the other evolutionary optimization techniques that surfaced in the recent state-of-the-art literature.
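SOS's parameter-free claim rests on its three nature-inspired phases (mutualism, commensalism, parasitism). The mutualism phase, sketched below in its generic textbook form for a minimization problem (not the paper's OPF-specific implementation), shows why no tuning parameters beyond population size are needed:

```python
import random

def mutualism_phase(pop, fitness, best, rng=random):
    """One SOS mutualism pass: each organism pairs with a random partner and
    both move toward the best solution, guided by their mutual vector."""
    n = len(pop)
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        bf1, bf2 = rng.randint(1, 2), rng.randint(1, 2)  # benefit factors
        mutual = [(a + b) / 2 for a, b in zip(pop[i], pop[j])]
        new_i = [x + rng.random() * (bx - m * bf1)
                 for x, bx, m in zip(pop[i], best, mutual)]
        new_j = [x + rng.random() * (bx - m * bf2)
                 for x, bx, m in zip(pop[j], best, mutual)]
        # Greedy selection: keep a new organism only if it improves.
        if fitness(new_i) < fitness(pop[i]):
            pop[i] = new_i
        if fitness(new_j) < fitness(pop[j]):
            pop[j] = new_j
    return pop

sphere = lambda v: sum(x * x for x in v)  # stand-in objective
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
mutualism_phase(pop, sphere, min(pop, key=sphere))
```

Because selection is greedy, the best fitness in the population can never get worse from one pass to the next.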
Francois, J.L.; Martin-del-Campo, C.; Francois, R.; Morales, L.B.
2003-01-01
An optimization procedure based on the tabu search (TS) method was developed for the design of radial enrichment and gadolinia distributions for boiling water reactor (BWR) fuel lattices. The procedure was coded in a computing system in which the optimization code uses the tabu search method to select potential solutions and the HELIOS code to evaluate them. The goal of the procedure is to search for an optimal fuel utilization, looking for a lattice with minimum average enrichment, with minimum deviation of reactivity targets and with a local power peaking factor (PPF) lower than a limit value. Time-dependent-depletion (TDD) effects were considered in the optimization process. The additive utility function method was used to convert the multiobjective optimization problem into a single objective problem. A strategy to reduce the computing time employed by the optimization was developed and is explained in this paper. An example is presented for a 10x10 fuel lattice with 10 different fuel compositions. The main contribution of this study is the development of a practical TDD optimization procedure for BWR fuel lattice design, using TS with a multiobjective function, and a strategy to economize computing time
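The additive utility function named in the abstract collapses the competing lattice objectives into one score for tabu search to maximize. A generic sketch of the scalarization (the weights and sub-utility shapes below are illustrative assumptions, not the study's calibrated values):

```python
def additive_utility(metrics, weights, utilities):
    """Scalarize a multiobjective design: U = sum_i w_i * u_i(metric_i)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * u(m) for w, u, m in zip(weights, utilities, metrics))

# Illustrative sub-utilities mapped to [0, 1]: lower average enrichment,
# smaller reactivity deviation, and a power peaking factor under the limit
# are all better.
u_enrich = lambda e: max(0.0, 1.0 - e / 5.0)        # avg enrichment (wt%)
u_react  = lambda d: max(0.0, 1.0 - abs(d) / 0.01)  # reactivity deviation
u_ppf    = lambda p: 1.0 if p <= 1.4 else 0.0       # hard PPF limit

score = additive_utility(
    metrics=[3.8, 0.002, 1.32],
    weights=[0.4, 0.4, 0.2],
    utilities=[u_enrich, u_react, u_ppf],
)
```

Tabu search then only ever compares single scalar scores between candidate lattices, which is what makes the multiobjective problem tractable for the metaheuristic.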
Tu, Chengjian; Sheng, Quanhu; Li, Jun; Ma, Danjun; Shen, Xiaomeng; Wang, Xue; Shyr, Yu; Yi, Zhengping; Qu, Jun
2015-11-06
The two key steps for analyzing proteomic data generated by high-resolution MS are database searching and postprocessing. While the two steps are interrelated, studies on their combinatory effects and the optimization of these procedures have not been adequately conducted. Here, we investigated the performance of three popular search engines (SEQUEST, Mascot, and MS Amanda) in conjunction with five filtering approaches, including respective score-based filtering, a group-based approach, local false discovery rate (LFDR), PeptideProphet, and Percolator. A total of eight data sets from various proteomes (e.g., E. coli, yeast, and human) produced by various instruments with high-accuracy survey scan (MS1) and high- or low-accuracy fragment ion scan (MS2) (LTQ-Orbitrap, Orbitrap-Velos, Orbitrap-Elite, Q-Exactive, Orbitrap-Fusion, and Q-TOF) were analyzed. It was found that combinations involving Percolator achieved markedly more peptide and protein identifications at the same FDR level than the other 12 combinations for all data sets. Among these, combinations of SEQUEST-Percolator and MS Amanda-Percolator provided slightly better performances for data sets with low-accuracy MS2 (ion trap or IT) and high-accuracy MS2 (Orbitrap or TOF), respectively, than did other methods. For approaches without Percolator, SEQUEST-group performs the best for data sets with MS2 produced by collision-induced dissociation (CID) and IT analysis; Mascot-LFDR gives more identifications for data sets generated by higher-energy collisional dissociation (HCD) and analyzed in Orbitrap (HCD-OT) and in Orbitrap Fusion (HCD-IT); MS Amanda-Group excels for the Q-TOF data set and the Orbitrap Velos HCD-OT data set. Therefore, if Percolator is not used, a specific combination should be applied for each type of data set. Moreover, a higher percentage of multiple-peptide proteins and lower variation of protein spectral counts were observed when analyzing technical replicates using Percolator.
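All five filtering approaches compared above ultimately threshold a score so that the estimated FDR stays below a cutoff. A minimal target-decoy sketch of that shared step (deliberately simplified; the compared tools estimate FDR in more sophisticated ways):

```python
def fdr_threshold(psms, max_fdr=0.01):
    """Find the lowest score threshold keeping estimated FDR <= max_fdr.

    psms: list of (score, is_decoy) pairs from a concatenated target-decoy
    database search. FDR is estimated as decoys/targets among the PSMs
    scoring at or above the threshold."""
    best = None
    for threshold, _ in sorted(psms, reverse=True):
        kept = [is_decoy for score, is_decoy in psms if score >= threshold]
        targets = kept.count(False)
        decoys = kept.count(True)
        if targets and decoys / targets <= max_fdr:
            best = threshold
    return best

# Toy example: high scores are mostly targets, low scores are mixed.
psms = [(9.1, False), (8.7, False), (8.2, False), (5.0, True),
        (4.9, False), (3.2, True), (3.0, False)]
```

Loosening `max_fdr` lowers the threshold and admits more identifications, which is exactly the trade-off the benchmarked engine/filter combinations navigate with different scoring models.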
HUBBLE SPACE TELESCOPE SNAPSHOT SEARCH FOR PLANETARY NEBULAE IN GLOBULAR CLUSTERS OF THE LOCAL GROUP
Bond, Howard E., E-mail: heb11@psu.edu [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States)
2015-04-15
Single stars in ancient globular clusters (GCs) are believed incapable of producing planetary nebulae (PNs), because their post-asymptotic-giant-branch evolutionary timescales are slower than the dissipation timescales for PNs. Nevertheless, four PNs are known in Galactic GCs. Their existence likely requires more exotic evolutionary channels, including stellar mergers and common-envelope binary interactions. I carried out a snapshot imaging search with the Hubble Space Telescope (HST) for PNs in bright Local Group GCs outside the Milky Way. I used a filter covering the 5007 Å nebular emission line of [O iii], and another one in the nearby continuum, to image 66 GCs. Inclusion of archival HST frames brought the total number of extragalactic GCs imaged at 5007 Å to 75, whose total luminosity slightly exceeds that of the entire Galactic GC system. I found no convincing PNs in these clusters, aside from one PN in a young M31 cluster misclassified as a GC, and two PNs at such large angular separations from an M31 GC that membership is doubtful. In a ground-based spectroscopic survey of 274 old GCs in M31, Jacoby et al. found three candidate PNs. My HST images of one of them suggest that the [O iii] emission actually arises from ambient interstellar medium rather than a PN; for the other two candidates, there are broadband archival UV HST images that show bright, blue point sources that are probably the PNs. In a literature search, I also identified five further PN candidates lying near old GCs in M31, for which follow-up observations are necessary to confirm their membership. The rates of incidence of PNs are similar, and small but nonzero, throughout the GCs of the Local Group.
A SYSTEMATIC SEARCH FOR X-RAY CAVITIES IN THE HOT GAS OF GALAXY GROUPS
Dong Ruobing; Rasmussen, Jesper; Mulchaey, John S.
2010-01-01
We have performed a systematic search for X-ray cavities in the hot gas of 51 galaxy groups with Chandra archival data. The cavities are identified based on two methods: subtracting an elliptical β-model fitted to the X-ray surface brightness, and performing unsharp masking. Thirteen groups in the sample (∼25%) are identified as clearly containing cavities, with another 13 systems showing tentative evidence for such structures. We find tight correlations between the radial and tangential radii of the cavities, and between their size and projected distance from the group center, in quantitative agreement with the case for more massive clusters. This suggests that similar physical processes are responsible for cavity evolution and disruption in systems covering a large range in total mass. We see no clear association between the detection of cavities and the current 1.4 GHz radio luminosity of the central brightest group galaxy, but there is a clear tendency for systems with a cool core to be more likely to harbor detectable cavities. To test the efficiency of the adopted cavity detection procedures, we employ a set of mock images designed to mimic typical Chandra data of our sample, and find that the model-fitting approach is generally more reliable than unsharp masking for recovering cavity properties. Finally, we find that the detectability of cavities is strongly influenced by a few factors, particularly the signal-to-noise ratio of the data, and that the real fraction of X-ray groups with prominent cavities could be substantially larger than the 25%-50% suggested by our analysis.
Ioannou, Lawrence M.; Travaglione, Benjamin C.
2006-01-01
We focus on determining the separability of an unknown bipartite quantum state ρ by invoking a sufficiently large subset of all possible entanglement witnesses given the expected value of each element of a set of mutually orthogonal observables. We review the concept of an entanglement witness from the geometrical point of view and use this geometry to show that the set of separable states is not a polytope and to characterize the class of entanglement witnesses (observables) that detect entangled states on opposite sides of the set of separable states. All this serves to motivate a classical algorithm which, given the expected values of a subset of an orthogonal basis of observables of an otherwise unknown quantum state, searches for an entanglement witness in the span of the subset of observables. The idea of such an algorithm, which is an efficient reduction of the quantum separability problem to a global optimization problem, was introduced by [Ioannou et al., Phys. Rev. A 70, 060303(R)], where it was shown to be an improvement on the naive approach for the quantum separability problem (exhaustive search for a decomposition of the given state into a convex combination of separable states). The last section of the paper discusses in more generality such algorithms, which, in our case, assume a subroutine that computes the global maximum of a real function of several variables. Despite this, we anticipate that such algorithms will perform sufficiently well on small instances that they will render a feasible test for separability in some cases of interest (e.g., in 3x3 dimensional systems)
Zhigang Lian
2010-01-01
The job-shop scheduling problem (JSSP) is a branch of production scheduling and is among the hardest combinatorial optimization problems. Many different approaches have been applied to optimize the JSSP, but some JSSP instances, even of moderate size, cannot be solved with guaranteed optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems, and rarely to optimize discrete problems such as the JSSP. Through research, I find that the OPSOA has a tendency to get stuck in a near-optimal solution, especially for medium- and large-size problems. The local and global search combined particle swarm optimization algorithm (LGSCPSOA) is used to solve the JSSP, where the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in the neighborhood population. A new coding method is used in the LGSCPSOA to optimize the JSSP, and it guarantees that all sequences are feasible solutions. Computational experiments are performed on three representative instances, and simulation shows that the LGSCPSOA is efficacious for the JSSP in minimizing makespan.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from research community because of rapid migration rate of Information Technology services to its domain. Advances in virtualization technology has made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on pay as you go fashion. Task scheduling is one the significant research challenges in cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions on local solution regions, hence, adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs) which reduced makespan and degree of imbalance among VMs. CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workload. Results of simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
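The SA component adds the classic Metropolis acceptance rule to SOS's otherwise greedy selection, which is what supplies the extra local exploration. A sketch of that rule (the cooling constant is an illustrative assumption, not the paper's setting):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis criterion: always accept improvements (delta <= 0 for a
    minimized makespan); accept worse moves with probability exp(-delta/T)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

def cool(temperature, alpha=0.95):
    """Geometric cooling schedule; alpha = 0.95 is an assumed constant."""
    return alpha * temperature

# Early (hot, T = 100): worse schedules are often kept, preserving exploration.
# Late (cold, T = 0.01): the same worse move is almost always rejected.
hot, cold = 100.0, 0.01
```

Embedded in SOS, the rule lets an occasionally worse task-to-VM mapping replace the current one while the temperature is high, then degrades gracefully to SOS's greedy behavior as the schedule cools.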
Electroencephalography Signal Grouping and Feature Classification Using Harmony Search for BCI
Tae-Ju Lee
2013-01-01
Full Text Available This paper presents a heuristic method for electroencephalography (EEG) grouping and feature classification using harmony search (HS) to improve the accuracy of the brain-computer interface (BCI) system. EEG, a noninvasive BCI method, uses many electrodes on the scalp, and the large number of electrodes makes the resulting analysis difficult. In addition, traditional EEG analysis cannot handle multiple stimuli. On the other hand, classification methods using the EEG signal have low accuracy. To solve these problems, we use a heuristic approach to reduce the complexity of the multichannel and classification problems. In this study, we build a group of stimuli using the HS algorithm. Then, the features from common spatial patterns are classified by the HS classifier. To validate the proposed method, we perform experiments using 64-channel EEG equipment. The subjects are presented with three kinds of stimuli: audio, visual, and motion. Each stimulus is applied alone or in combination with the others. The acquired signals are processed by the proposed method. The classification achieves an accuracy of approximately 63%. We conclude that the heuristic approach using the HS algorithm on the BCI is beneficial for EEG signal analysis.
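For readers unfamiliar with harmony search, here is a minimal sketch of its improvisation loop (harmony memory, memory-considering rate HMCR, pitch-adjusting rate PAR) on a toy objective. All names, bounds, and parameter values are illustrative assumptions, not the paper's classifier.

```python
import random

def hs_minimize(obj, dim=4, hm_size=10, iters=500,
                hmcr=0.9, par=0.3, bw=0.1, lo=-1.0, hi=1.0, seed=0):
    rng = random.Random(seed)
    # harmony memory: a pool of stored candidate solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # recall a stored value...
                v = hm[rng.randrange(hm_size)][d]
                if rng.random() < par:         # ...and maybe pitch-adjust it
                    v = min(hi, max(lo, v + rng.uniform(-bw, bw)))
            else:                              # or improvise a fresh value
                v = rng.uniform(lo, hi)
            new.append(v)
        worst = max(range(hm_size), key=lambda i: obj(hm[i]))
        if obj(new) < obj(hm[worst]):          # replace the worst harmony
            hm[worst] = new
    return min(hm, key=obj)

# toy objective: pull every component toward 0.5
best = hs_minimize(lambda x: sum((v - 0.5) ** 2 for v in x))
print([round(v, 2) for v in best])
```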
Search for the optimally suited cantilever type for high-frequency MFM
Koblischka, M R; Wei, J D; Kirsch, M; Lessel, M; Pfeifer, R; Brust, M; Hartmann, U; Richter, C; Sulzbach, T
2007-01-01
To optimize the performance of the high-frequency MFM (HF-MFM) technique [1-4], we performed a search for the best-suited cantilever type and magnetic material coating. Using a HF-MFM setup with hard disk writer poles as test samples, we carried out HF-MFM imaging at frequencies up to 2 GHz. For HF-MFM, it is an essential ingredient that the tip material can follow the fast switching of the high-frequency fields. In this contribution, we investigated six different types of cantilevers: (i) the 'standard' MFM tip (Nanoworld Pointprobe) with a 30 nm CoCr coating, (ii) an 'SSS' (Nanoworld SuperSharpSilicon(TM)) cantilever with a 10 nm CoCr coating, (iii) a (Ni,Zn)-ferrite-coated Pointprobe tip, (iv) a Ba3Co2Fe23O41 (BCFO)-coated Pointprobe tip, (v) a low-coercivity NiCo-alloy-coated tip, and (vi) a permalloy-coated tip.
Ammar Hussein Mutlag
2014-01-01
Full Text Available This paper presents an adaptive fuzzy logic controller (FLC) design technique for photovoltaic (PV) inverters using the differential search algorithm (DSA). This technique avoids the exhaustive traditional trial-and-error procedure for obtaining the membership functions (MFs) used in conventional FLCs. It is implemented during the inverter design phase by generating adaptive MFs based on the evaluation results of the objective function formulated by the DSA. In this work, the mean square error (MSE) of the inverter output voltage is used as the objective function. The DSA optimizes the MFs such that the inverter provides the lowest MSE for output voltage and improves the performance of the PV inverter output in terms of amplitude and frequency. The design procedure and accuracy of the optimum FLC are illustrated and investigated using simulations conducted for a 3 kW three-phase inverter in a MATLAB/Simulink environment. Results show that the proposed controller can successfully obtain the desired output when different linear and nonlinear loads are connected to the system. Furthermore, the inverter has reasonably low steady-state error and a fast response to reference variation.
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
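A toy version of the GMG-COP idea can be sketched as a local search that picks one image per scene, trading image quality (here a cloud score) against seasonal gaps between adjacent scenes. The data, cost weights, and move rule below are invented for illustration; they are not the paper's formulation.

```python
import random

def cost(choice, candidates, adjacency, w=1.0):
    # candidates[s] = list of (day_of_year, cloud_score) images for scene s
    c = sum(candidates[s][i][1] for s, i in choice.items())   # cloud penalty
    for a, b in adjacency:                                    # seasonal gaps
        da = candidates[a][choice[a]][0]
        db = candidates[b][choice[b]][0]
        c += w * min(abs(da - db), 365 - abs(da - db)) / 365  # circular days
    return c

def local_search(candidates, adjacency, iters=2000, seed=3):
    rng = random.Random(seed)
    choice = {s: 0 for s in candidates}       # start: first image everywhere
    best = cost(choice, candidates, adjacency)
    for _ in range(iters):
        s = rng.choice(list(candidates))      # re-pick one scene's image
        old = choice[s]
        choice[s] = rng.randrange(len(candidates[s]))
        new = cost(choice, candidates, adjacency)
        if new <= best:
            best = new                        # keep the move
        else:
            choice[s] = old                   # undo a worsening move
    return choice, best

rng = random.Random(7)
scenes = {s: [(rng.randrange(365), rng.random()) for _ in range(5)]
          for s in range(6)}
adj = [(i, i + 1) for i in range(5)]          # a strip of adjacent scenes
sol, c = local_search(scenes, adj)
print(round(c, 3))
```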
A class-based search for the in-core fuel management optimization of a pressurized water reactor
Alvarenga de Moura Meneses, Anderson; Rancoita, Paola; Schirru, Roberto; Gambardella, Luca Maria
2010-01-01
The In-Core Fuel Management Optimization (ICFMO) is a prominent problem in nuclear engineering, of high complexity, that has been studied for more than 40 years. Besides manual optimization and knowledge-based methods, optimization metaheuristics such as Genetic Algorithms, Ant Colony Optimization and Particle Swarm Optimization have yielded outstanding results for the ICFMO. In the present article, the Class-Based Search (CBS) is presented for application to the ICFMO. It is a novel metaheuristic approach that performs the search based on the main nuclear characteristics of the fuel assemblies, such as reactivity. The CBS is then compared to one of the state-of-the-art algorithms applied to the ICFMO, Particle Swarm Optimization. Experiments were performed for the optimization of the Angra 1 Nuclear Power Plant, located in the Southeast of Brazil. The CBS presented notable performance, providing Loading Patterns that yield a higher average of Effective Full Power Days in the simulation of Angra 1 NPP operation, according to our methodology.
Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@lmp.ufrj.b [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); Rancoita, Paola [IDSIA (Dalle Molle Institute for Artificial Intelligence), Galleria 2, 6982 Manno-Lugano, TI (Switzerland); Mathematics Department, Universita degli Studi di Milano (Italy); Schirru, Roberto [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); Gambardella, Luca Maria [IDSIA (Dalle Molle Institute for Artificial Intelligence), Galleria 2, 6982 Manno-Lugano, TI (Switzerland)
2010-11-15
Manungu Kiveni, Joseph [Syracuse Univ., NY (United States)
2012-12-01
This dissertation describes the results of a WIMP search using CDMS II data sets accumulated at the Soudan Underground Laboratory in Minnesota. Results from the original analysis of these data were published in 2009; two events were observed in the signal region with an expected leakage of 0.9 events. Further investigation revealed an issue with the ionization-pulse reconstruction algorithm, leading to a software upgrade and a subsequent reanalysis of the data. As part of the reanalysis, I performed an advanced discrimination technique to better distinguish (potential) signal events from backgrounds using a 5-dimensional chi-square method. This data-analysis technique combines the event information recorded for each WIMP-search event to derive a background-discrimination parameter capable of reducing the expected background to less than one event, while maintaining high efficiency for signal events. Furthermore, optimizing the cut positions of this 5-dimensional chi-square parameter for the 14 viable germanium detectors yields an improved expected sensitivity to WIMP interactions relative to previous CDMS results. This dissertation describes my improved (and optimized) discrimination technique and the results obtained from a blind application to the reanalyzed CDMS II WIMP-search data.
Homaifar, Abdollah; Esterline, Albert; Kimiaghalam, Bahram
2005-01-01
The Hybrid Projected Gradient-Evolutionary Search (HPGES) algorithm uses a specially designed evolutionary-based global search strategy to efficiently create candidate solutions in the solution space...
Zhaoyu Chen
2018-01-01
Full Text Available The network planning is a key factor that directly affects the performance of wireless networks. A distributed antenna system (DAS) is an effective strategy for network planning. This paper investigates antenna deployment in a DAS for high-speed railway communication networks and formulates an optimization problem, which is NP-hard, for achieving the optimal deployment of the antennas in the DAS. To solve this problem, a scheme based on an improved cuckoo search based on dimension cells (ICSDC) algorithm is proposed. ICSDC introduces the dimension-cell mechanism to avoid internal dimension interferences in order to improve the performance of the algorithm. Simulation results show that the proposed ICSDC-based scheme obtains a lower network cost compared with the uniform network planning method. Moreover, the ICSDC algorithm has better performance in terms of convergence rate and accuracy compared with the conventional cuckoo search algorithm, particle swarm optimization, and the firefly algorithm.
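For context, a hedged sketch of the conventional cuckoo search that ICSDC refines is given below, using Mantegna-style Lévy steps; the paper's dimension-cell mechanism is not reproduced here, and the objective and parameters are illustrative only.

```python
import math
import random

def levy_step(rng, beta=1.5):
    # Mantegna algorithm for a symmetric heavy-tailed (Levy-like) step
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(obj, dim=4, nests=15, iters=300, pa=0.25, seed=2):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(nests)]
    best = min(X, key=obj)
    for _ in range(iters):
        for i in range(nests):
            # global walk: Levy flight biased toward the current best nest
            cand = [x + 0.01 * levy_step(rng) * (x - b)
                    for x, b in zip(X[i], best)]
            j = rng.randrange(nests)
            if obj(cand) < obj(X[j]):   # replace a random nest if better
                X[j] = cand
        for i in range(nests):
            if rng.random() < pa:       # abandon a fraction of nests
                X[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(X + [best], key=obj)
    return best

best = cuckoo_search(lambda x: sum(v * v for v in x))
print(round(sum(v * v for v in best), 4))
```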
A search for the optimal duration of treatment with 6-mercaptopurine for ulcerative colitis.
Lobel, Efrat Z; Korelitz, Burton I; Xuereb, Mark A; Panagopoulos, Georgia
2004-03-01
6-mercaptopurine has proven to be effective in the treatment and maintenance of remission of ulcerative colitis (UC). The optimal duration of treatment with 6-MP is unknown. The intention of this study was to determine the best duration of treatment with 6-MP in terms of maintenance efficacy once remission has been achieved. We reviewed the records from the inflammatory bowel disease (IBD) center at Lenox Hill Hospital and one large IBD practice in New York City of 334 patients treated with 6-MP for UC. These patients were followed from 4 months to 28.7 yr. Sixty-one patients were treated with 6-MP for at least 6 months and had at least a 3-month disease-free interval off steroids while on the medication. These patients were divided into two groups: Group 1 continued 6-MP and group 2 discontinued the drug at various times for reasons other than relapse. Time to relapse was calculated for both groups. A Kaplan-Meier survival analysis was employed and differences between the two groups were analyzed using the log-rank test. The median time to relapse in group 2 was 24 wk and in group 1 was 58 wk (p products, dose of 6-MP during remission, duration of UC, and duration of treatment with 6-MP before remission was achieved. Discontinuation of treatment with 6-MP while UC is in remission leads to a higher relapse rate than maintenance on 6-MP. Therefore, we favor the indefinite treatment with 6-MP in most patients.
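The time-to-relapse comparison above rests on the Kaplan-Meier estimator. A minimal sketch of that estimator follows; the relapse times below are invented for illustration and are not the study's data.

```python
def kaplan_meier(times, events):
    """times: follow-up in weeks; events: 1 = relapse, 0 = censored."""
    pairs = sorted(zip(times, events))
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        # relapses observed exactly at time t
        deaths = sum(1 for tt, e in pairs if tt == t and e == 1)
        # patients still at risk (follow-up >= t)
        n_at_risk = sum(1 for tt, _ in pairs if tt >= t)
        if deaths:
            surv *= 1 - deaths / n_at_risk   # product-limit update
            curve.append((t, surv))
        i += sum(1 for tt, _ in pairs if tt == t)
    return curve

# hypothetical relapse times (weeks); 0 marks patients censored relapse-free
curve = kaplan_meier([12, 24, 24, 58, 70, 90], [1, 1, 0, 1, 0, 0])
print(curve)
```

Comparing two such curves (the 6-MP-continued and discontinued groups) with a log-rank test is what yields the study's significance statement.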
Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure
Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J. [Delft University of Technology, NL-2629 JB Delft (Netherlands)
1998-07-01
An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, which is a 2 MWth swimming-pool type research reactor. As a black-box evaluator, the 3-D nodal code SILWER, which up to now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is a part of PSI's ELCOS package and features optional additional thermal-hydraulic, control-rod and xenon-poisoning calculations. This allows for fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the in- and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)
Weisz, Daniel R. [Department of Astronomy, University of California at Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Holtzman, Jon [Department of Astronomy, New Mexico State University, Box 30001, 1320 Frenger Street, Las Cruces, NM 88003 (United States); Gilbert, Karoline M.; Dalcanton, Julianne J.; Williams, Benjamin F., E-mail: drw@ucsc.edu [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States)
2014-07-10
We search for signatures of reionization in the star formation histories (SFHs) of 38 Local Group dwarf galaxies (10^4 < M_* < 10^9 M_☉). The SFHs are derived from color-magnitude diagrams using archival Hubble Space Telescope/Wide Field Planetary Camera 2 imaging. Only five quenched galaxies (And V, And VI, And XIII, Leo IV, and Hercules) are consistent with forming the bulk of their stars before reionization, when full uncertainties are considered. Observations of 13 of the predicted 'true fossils' identified by Bovill and Ricotti show that only two (Hercules and Leo IV) indicate star formation quenched by reionization. However, both are within the virial radius of the Milky Way, and evidence of tidal disturbance complicates this interpretation. We argue that the late-time gas capture scenario posited by Ricotti for the low-mass, gas-rich, and star-forming fossil candidate Leo T is observationally indistinguishable from simple gas retention. Given the ambiguity between environmental effects and reionization, the best reionization fossil candidates are quenched low-mass field galaxies (e.g., KKR 25).
Guaraldi, Federica; Parasiliti-Caprino, Mirko; Goggi, Riccardo; Beccuti, Guglielmo; Grottoli, Silvia; Arvat, Emanuela; Ghizzoni, Lucia; Ghigo, Ezio; Giordano, Roberta; Gori, Davide
2014-12-01
The exponential growth of scientific literature available through electronic databases (namely PubMed) has increased the chance of finding interesting articles. At the same time, searching has become more complicated, time consuming, and at risk of missing important information. Therefore, optimized strategies have to be adopted to maximize searching impact. The aim of this study was to formulate efficient strings to search PubMed for etiologic associations between adrenal disorders (ADs) and other conditions. A comprehensive list of terms identifying endogenous conditions primarily affecting the adrenals was compiled. An ad hoc analysis was performed to find the best way to express each term in order to find the highest number of potentially pertinent articles in PubMed. A predefined number of retrieved abstracts were read to assess their association with ADs' etiology. A more sensitive string (providing the largest literature coverage) and a more specific string (including only those terms retrieving >40% of potentially pertinent articles) were formulated. Various searches were performed to assess the strings' ability to identify articles of interest in comparison with non-optimized literature searches. We formulated optimized, readily applicable tools for identifying literature that assesses etiologic associations in the field of ADs using PubMed, and demonstrated the advantages deriving from their application. A detailed description of the methodological process is also provided, so that this work can easily be translated to other fields of practice.
Wang, Zhe; Li, Yanzhong
2015-01-01
Highlights: • The first application of IMOCS for plate-fin heat exchanger design. • Irreversibility degrees of heat transfer and fluid friction are minimized. • Trade-off of efficiency, total cost and pumping power is achieved. • Both EGM and EDM methods have been compared in the optimization of PFHE. • This study has superiority over other single-objective optimization designs. - Abstract: This paper introduces and applies an improved multi-objective cuckoo search (IMOCS) algorithm, a novel meta-heuristic optimization algorithm based on cuckoo breeding behavior, for the multi-objective optimization design of plate-fin heat exchangers (PFHEs). A modified irreversibility degree of the PFHE is separated into heat-transfer and fluid-friction irreversibility degrees, which are adopted as two initial objective functions to be minimized simultaneously in order to narrow the search scope of the design. The maximization of efficiency and the minimization of pumping power and total annual cost are considered the final objective functions. Results obtained from a two-dimensional normalized Pareto-optimal frontier clearly demonstrate the trade-off between heat-transfer and fluid-friction irreversibility. Moreover, a three-dimensional Pareto-optimal frontier reveals a relationship between efficiency, total annual cost, and pumping power in the PFHE design. Three examples presented here further demonstrate that the presented method is able to obtain optimum solutions with higher accuracy, lower irreversibility, and fewer iterations as compared to previous methods and single-objective design approaches.
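The Pareto-optimal frontier mentioned above is built from a dominance test. A minimal sketch of that test, on invented sample points standing in for (heat-transfer, friction) irreversibility pairs, both minimized:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # keep the points that no other point dominates
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# illustrative (heat-transfer irreversibility, friction irreversibility) pairs
pts = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8), (0.6, 0.6), (0.9, 0.9)]
front = pareto_front(pts)
print(front)
```

Points such as (0.6, 0.6) drop out because (0.5, 0.5) is at least as good in both objectives; the survivors form the trade-off curve the abstract plots.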
Andy M Reynolds
2007-04-01
Full Text Available During their trajectories in still air, fruit flies (Drosophila melanogaster explore their landscape using a series of straight flight paths punctuated by rapid 90 degrees body-saccades [1]. Some saccades are triggered by visual expansion associated with collision avoidance. Yet many saccades are not triggered by visual cues, but rather appear spontaneously. Our analysis reveals that the control of these visually independent saccades and the flight intervals between them constitute an optimal scale-free active searching strategy. Two characteristics of mathematical optimality that are apparent during free-flight in Drosophila are inter-saccade interval lengths distributed according to an inverse square law, which does not vary across landscape scale, and 90 degrees saccade angles, which increase the likelihood that territory will be revisited and thereby reduce the likelihood that near-by targets will be missed. We also show that searching is intermittent, such that active searching phases randomly alternate with relocation phases. Behaviorally, this intermittency is reflected in frequently occurring short, slow speed inter-saccade intervals randomly alternating with rarer, longer, faster inter-saccade intervals. Searching patterns that scale similarly across orders of magnitude of length (i.e., scale-free have been revealed in animals as diverse as microzooplankton, bumblebees, albatrosses, and spider monkeys, but these do not appear to be optimised with respect to turning angle, whereas Drosophila free-flight search does. Also, intermittent searching patterns, such as those reported here for Drosophila, have been observed in foragers such as planktivorous fish and ground foraging birds. Our results with freely flying Drosophila may constitute the first reported example of searching behaviour that is both scale-free and intermittent.
M. Balasubbareddy
2015-12-01
Full Text Available A novel optimization algorithm is proposed to solve single and multi-objective optimization problems with generation fuel cost, emission, and total power losses as objectives. The proposed method is a hybridization of the conventional cuckoo search algorithm and arithmetic crossover operations. Thus, the non-linear, non-convex objective function can be solved under practical constraints. The effectiveness of the proposed algorithm is analyzed for various cases to illustrate the effect of practical constraints on the objectives' optimization. Two and three objective multi-objective optimization problems are formulated and solved using the proposed non-dominated sorting-based hybrid cuckoo search algorithm. The effectiveness of the proposed method in confining the Pareto front solutions in the solution region is analyzed. The results for single and multi-objective optimization problems are physically interpreted on standard test functions as well as the IEEE-30 bus test system with supporting numerical and graphical results and also validated against existing methods.
Wang, Yan; Huang, Song; Ji, Zhicheng
2017-07-01
This paper presents a hybrid particle swarm optimization and gravitational search algorithm based on a hybrid mutation strategy (HGSAPSO-M) to optimize economic dispatch (ED) including distributed generations (DGs), considering market-based energy pricing. A daily ED model was formulated, and a hybrid mutation strategy was adopted in HGSAPSO-M. The hybrid mutation strategy includes two mutation operators: chaotic mutation and Gaussian mutation. The proposed algorithm was tested on the IEEE 33-bus system, and the results show that the approach is effective for this problem.
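The two mutation operators named in the abstract can be sketched as below: a logistic-map "chaotic" perturbation and a Gaussian perturbation. The scaling factors and the logistic-map seed are assumptions for illustration, not the paper's settings.

```python
import random

def chaotic_mutate(x, lo, hi, z=0.7, steps=5):
    # iterate the logistic map z <- 4z(1-z), which is chaotic in (0, 1),
    # then use it as a deterministic perturbation in [-0.1, 0.1]*(hi-lo)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)
    return [min(hi, max(lo, v + (2 * z - 1) * 0.1 * (hi - lo))) for v in x]

def gaussian_mutate(x, lo, hi, rng, sigma=0.05):
    # add zero-mean Gaussian noise scaled to the variable range
    return [min(hi, max(lo, v + rng.gauss(0, sigma * (hi - lo)))) for v in x]

rng = random.Random(0)
x = [0.5, -0.2, 0.8]
print(chaotic_mutate(x, -1, 1))
print(gaussian_mutate(x, -1, 1, rng))
```

In a hybrid scheme, one operator is typically applied to some particles and the other to the rest, so the swarm mixes broad chaotic jumps with fine Gaussian refinement.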
Budilova, E. V.; Terekhin, A. T.; Chepurnov, S. A.
1994-09-01
A hypothetical neural scheme is proposed that ensures efficient decision making by an animal searching for food in a maze. Only the general structure of the network is fixed; its quantitative characteristics are found by numerical optimization that simulates the process of natural selection. Selection is aimed at maximization of the expected number of descendants, which is directly related to the energy stored during the reproductive cycle. The main parameters to be optimized are the increments of the interneuronal links and the working-memory constants.
Volpato, Enilze de Souza Nogueira; Betini, Marluci; Puga, Maria Eduarda; Agarwal, Arnav; Cataneo, Antônio José Maria; Oliveira, Luciane Dias de; Bazan, Rodrigo; Braz, Leandro Gobbo; Pereira, José Eduardo Guimarães; El Dib, Regina
2018-01-15
A high-quality electronic search is essential for ensuring accuracy and comprehensiveness among the records retrieved when conducting systematic reviews. Therefore, we aimed to identify the most efficient method for searching in both MEDLINE (through PubMed) and EMBASE, covering search terms with variant spellings, direct and indirect orders, and associations with MeSH and EMTREE terms (or lack thereof). This was an experimental study conducted at UNESP, Brazil. We selected and analyzed 37 search strategies that had specifically been developed for the field of anesthesiology. These search strategies were adapted in order to cover all potentially relevant search terms, with regard to variant spellings and direct and indirect orders, in the most efficient manner. When the strategies included variant spellings and direct and indirect orders, these adapted versions of the selected search strategies retrieved the same number of search results in MEDLINE (mean of 61.3%) and a higher number in EMBASE (mean of 63.9%) in the sample analyzed. The numbers of results retrieved through the searches analyzed here were not identical with and without associated use of MeSH and EMTREE terms. However, associating these terms from both controlled vocabularies retrieved a larger number of records than using either one of them alone. In view of these results, we recommend that the search terms used should include both preferred and non-preferred terms (i.e. variant spellings and direct/indirect order of the same term) and associated MeSH and EMTREE terms, in order to develop highly sensitive search strategies for systematic reviews.
Castillo M, J.A. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2003-07-01
The basic elements of the tabu search technique are presented, with emphasis on its advantages over traditional descent-based optimization methods. Some modifications that have been made to the technique over time to make it more robust are then outlined. Finally, some areas where the technique has been applied with successful results are described. (Author)
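The contrast with pure descent can be made concrete with a minimal tabu-search sketch: a short-term memory of recent points forbids immediate backtracking, so the search can climb out of a local minimum. The toy landscape and neighborhood below are illustrative.

```python
from collections import deque

def tabu_search(obj, start, neighbors, iters=100, tenure=5):
    current, best = start, start
    tabu = deque(maxlen=tenure)         # short-term memory of visited points
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break                       # neighborhood exhausted
        # best admissible neighbor, accepted even if it is worse than current
        current = min(cands, key=obj)
        tabu.append(current)
        if obj(current) < obj(best):
            best = current
    return best

# toy 1-D integer landscape: local minimum at x=2, global minimum at x=8
f = {0: 5, 1: 3, 2: 1, 3: 4, 4: 6, 5: 3, 6: 2, 7: 1, 8: 0, 9: 4}.__getitem__
best = tabu_search(f, 0,
                   lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 9])
print(best)  # escapes the local minimum at 2 and reaches 8
```

A plain descent from 0 would stop at x=2; the tabu list forces the walk past the uphill stretch at x=3..4 toward the global minimum.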
Shlesinger, Michael F
2009-01-01
There are a wide variety of searching problems, from molecules seeking receptor sites to predators seeking prey. The optimal search strategy can depend on constraints on time, energy, supplies or other variables. We discuss a number of cases and especially remark on the usefulness of Lévy walk search patterns when the targets of the search are scarce.
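A Lévy walk's step lengths follow a power-law tail P(l) ~ l^(-mu) for l >= l_min; mu = 2 gives the inverse-square law often cited as optimal for scarce, revisitable targets. The inverse-transform sampler below is a standard sketch, not taken from this paper.

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Draw n step lengths with a Pareto-type tail of exponent mu."""
    rng = random.Random(seed)
    # inverse CDF: P(L > l) = (l_min / l)^(mu - 1)  =>  l = l_min * u^(-1/(mu-1))
    return [l_min * (1 - rng.random()) ** (-1 / (mu - 1)) for _ in range(n)]

steps = levy_steps(10000)
# heavy tail: a few rare, very long relocations dominate the range
print(max(steps) / min(steps))
```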
Akanksha Mishra
2017-05-01
Full Text Available In a deregulated electricity market, it may at times become difficult to dispatch all the power that is scheduled to flow, due to congestion in transmission lines. An Interline Power Flow Controller (IPFC) can be used to reduce the system loss and the power flow in heavily loaded lines and to improve the stability and loadability of the system. This paper proposes a Disparity Line Utilization Factor (DLUF) for the optimal placement, and Gravitational Search algorithm-based optimal tuning, of an IPFC to control congestion in transmission lines. The DLUF ranks the transmission lines in terms of relative line congestion. The IPFC is accordingly placed in the most congested and the least congested line connected to the same bus. Optimal sizing of the IPFC is carried out using the Gravitational Search algorithm. A multi-objective function has been chosen for tuning the parameters of the IPFC. The proposed method is implemented on an IEEE 30-bus test system. Graphical representations have been included in the paper showing the reduction in the LUF of the transmission lines after the placement of the IPFC. A reduction in the active and reactive power losses of the system by about 6% is observed after an optimally tuned IPFC has been included in the power system. The effectiveness of the proposed tuning method is also shown through the reduction in the values of the objective functions.
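The placement rule can be sketched with a simple line utilization factor, LUF = MVA flow / MVA rating, pairing the most-loaded line with the least-loaded line sharing one of its buses. The line data and the exact pairing rule below are illustrative assumptions, not the paper's DLUF formulation.

```python
# line: (from_bus, to_bus, flow_MVA, rating_MVA) -- invented values
lines = {
    "L1": (1, 2, 95.0, 130.0),
    "L2": (1, 3, 40.0, 130.0),
    "L3": (2, 4, 60.0, 65.0),
    "L4": (2, 5, 20.0, 65.0),
}

# line utilization factor: fraction of the thermal rating in use
luf = {k: f / r for k, (_, _, f, r) in lines.items()}
ranked = sorted(luf, key=luf.get, reverse=True)
most = ranked[0]                        # most congested line

# least-congested line sharing a bus with the most-congested one,
# mirroring the "same bus" placement constraint in the abstract
shared = [k for k in lines if k != most
          and set(lines[k][:2]) & set(lines[most][:2])]
least = min(shared, key=luf.get)
print(most, least)
```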
The Method of Optimization of Hydropower Plant Performance for Use in Group Active Power Controller
Glazyrin G.V.
2017-04-01
Full Text Available The problem of optimizing hydropower plant performance is considered in this paper. A new method for calculating optimal load-sharing is proposed. The method is based on incremental water-flow curves representing the relationship between the per-unit increase of water flow and active power. The optimal load-sharing is obtained by solving the nonlinear equation governing the balance between total active power and the station power set point, with the same specific increase of water flow for all turbines. Unlike traditional optimization techniques, the solution of the equation is obtained without taking unit safe operating zones into account. Instead, if the calculated active power of a unit violates the permissible power range, load-sharing is recalculated for the remaining generating units. Thus, an optimal load-sharing algorithm suitable for digital control systems is developed. The proposed algorithm is implemented in the group active power controller at the Novosibirsk hydropower plant. An analysis of the operation of the group active power controller proves that the proposed method obtains optimal load-sharing at each control step with sufficient precision.
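The equal-incremental-flow condition can be sketched numerically: find the common specific water-flow increase (lambda) at which the units' powers sum to the plant set point. Quadratic flow curves q_i(P) = a_i P^2 + b_i P are an assumption for illustration (so dq/dP = 2 a_i P + b_i and P_i(lambda) = (lambda - b_i) / (2 a_i)); the paper's curves and solver may differ.

```python
def share_load(units, p_set, tol=1e-6):
    """units: list of (a, b) flow-curve coefficients per turbine.
    Returns per-unit active powers at the balanced incremental flow."""
    def total(lmb):
        # power of each unit at incremental flow lmb, floored at zero
        return sum(max(0.0, (lmb - b) / (2 * a)) for a, b in units)

    # bisection on the balance equation total(lambda) = p_set
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < p_set:
            lo = mid
        else:
            hi = mid
    return [max(0.0, (lo - b) / (2 * a)) for a, b in units]

powers = share_load([(0.01, 1.0), (0.02, 0.8)], p_set=100.0)
print([round(p, 2) for p in powers])
```

With these coefficients the first (flatter-curve) unit carries the larger share, as expected: equal marginal water cost, not equal power.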
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this
Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.
2017-09-01
Electric discharge machining (EDM) is one of the most widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, the selection of the optimal machining parameters that minimize the machining performance measure remains a challenging task for researchers. This paper presents an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed based on a regression approach, with four input parameters (pulse on time, peak current, servo voltage and servo speed) and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with the regression analysis, and it was found that HS gave a better result, yielding the minimum DA value.
Milligan, M. R.; Factor, T.
2001-01-01
This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute.
Baranowski, Z.; Canali, L.; Toebbicke, R.; Hrivnac, J.; Barberis, D.
2017-10-01
This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each of which consists of ∼100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats, including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. We also report on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface and its optimizations for data warehouse workloads and reports.
Baranowski, Zbigniew; The ATLAS collaboration
2016-01-01
This paper reports on the activities aimed at improving the architecture and performance of the ATLAS EventIndex implementation in Hadoop. The EventIndex contains tens of billions of event records, each consisting of ~100 bytes, all having the same probability of being searched or counted. Data formats represent one important area for optimizing the performance and storage footprint of applications based on Hadoop. This work reports on the production usage and on tests using several data formats, including Map Files, Apache Parquet, Avro, and various compression algorithms. The query engine also plays a critical role in the architecture. This paper also reports on the use of HBase for the EventIndex, focussing on the optimizations performed in production and on the scalability tests. Additional engines that have been tested include Cloudera Impala, in particular for its SQL interface and its optimizations for data warehouse workloads and reports.
Milligan, M. R., National Renewable Energy Laboratory; Factor, T., Iowa Wind Energy Institute
2001-09-21
This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute.
2013-07-17
..., Bisexual and Transgender (LGBT) People AGENCY: Office of Policy Development and Research, HUD. ACTION... Search Process for Lesbian, Gay, Bisexual and Transgender (LGBT) People. Description of the Need for... housing discrimination. As part of that research, the Department would like to learn more about the...
2011-02-02
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of Affirmative Determination Regarding Application for Reconsideration By application dated November 29, 2010, a worker and a state workforce official...
Optimization of Wind Farm Layout: A Refinement Method by Random Search
Feng, Ju; Shen, Wen Zhong
2013-01-01
Wind farm layout optimization seeks the optimal positions of wind turbines inside a wind farm, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. Most of the works in the literature divide the wind farm into cells in which turbi...
Feng, Ju; Shen, Wen Zhong; Xu, Chang
2016-01-01
A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximi...
Freeman, J.; Junk, T.; Kirby, M.; Oksuzian, Y.; Phillips, T. J.; Snider, F. D.; Trovato, M.; Vizan, J.; Yao, W. M.
2013-01-01
We present the development and validation of the Higgs Optimized b Identification Tagger (HOBIT), a multivariate b-jet identification algorithm optimized for Higgs boson searches at the CDF experiment at the Fermilab Tevatron. At collider experiments, b taggers allow one to distinguish particle jets containing B hadrons from other jets; these algorithms have been used for many years with great success at CDF. HOBIT has been designed specifically for use in searches for light Higgs bosons decaying via H → bb̄. This fact, combined with the extent to which HOBIT synthesizes and extends the best ideas of previous taggers, makes HOBIT unique among CDF b-tagging algorithms. Employing feed-forward neural network architectures, HOBIT provides an output value ranging from approximately -1 ("light-jet like") to 1 ("b-jet like"); this continuous output value has been tuned to provide maximum sensitivity in light Higgs boson search analyses. When tuned to the equivalent light-jet rejection rate, HOBIT tags 54% of b jets in simulated 120 GeV/c² Higgs boson events, compared to 39% for SecVtx, the most commonly used b tagger at CDF. We present features of the tagger as well as its characterization in the form of b-jet finding efficiencies and false (light-jet) tag rates.
Lin Chaung; Lin, Tung-Hsien
2012-01-01
Highlights: ► An automatic procedure was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. ► The method is based on a particle swarm optimization algorithm and local search. ► The design goal was to achieve the minimum local peaking factor. ► The number of fuel pins with Gd and the Gd concentration are fixed to reduce search complexity. ► In this study, three axial sections are designed and lattice performance is calculated using CASMO-4. - Abstract: The axial section of a fuel assembly in a boiling water reactor (BWR) consists of five or six different distributions; this requires a radial lattice design. In this study, an automatic procedure based on a particle swarm optimization (PSO) algorithm and local search was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. The design goals were to achieve the minimum local peaking factor (LPF), and to come as close as possible to the specified target average enrichment and target infinite multiplication factor (k∞), with the number of fuel pins with Gd and the Gd concentration fixed. In this study, three axial sections are designed, and lattice performance is calculated using CASMO-4. Finally, the neutron cross-section library of the designed lattice is established by CMSLINK; the core status during depletion, such as thermal limits, cold shutdown margin and cycle length, is then calculated using SIMULATE-3 in order to confirm that the lattice design satisfies the design requirements.
Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization
Wang Chun-Feng
2014-01-01
Full Text Available Artificial bee colony (ABC) is one of the most recent swarm-intelligence-based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still has a shortcoming in its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated using good-point-set theory rather than random selection. Secondly, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the PSO mechanism to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. Results show that our algorithm performs well.
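A minimal sketch of the hybrid idea: ABC bookkeeping (food sources, trial counters, scout restarts) with a PSO-style pull toward the global best in the candidate-generation step. The paper's good-point-set initialization and chaotic search operator are omitted, the employed and onlooker phases are merged for brevity, and all coefficients and the test function are illustrative assumptions.

```python
import random

def hybrid_abc(f, bounds, n_food=15, limit=20, iters=300, seed=7):
    """Minimise f with an ABC variant whose candidate step borrows the
    PSO attraction toward the global best (a sketch in the spirit of the
    paper, not its exact algorithm)."""
    rng = random.Random(seed)
    dim = len(bounds)
    foods = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_food)]
    trials = [0] * n_food
    gbest = min(foods, key=f)

    def candidate(i):
        k = rng.randrange(n_food)       # random peer food source
        j = rng.randrange(dim)          # perturb one dimension, as in ABC
        new = foods[i][:]
        # PSO-style move: peer difference plus a pull toward the global best.
        new[j] += (rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
                   + rng.uniform(0, 1.5) * (gbest[j] - foods[i][j]))
        lo, hi = bounds[j]
        new[j] = min(max(new[j], lo), hi)
        return new

    for _ in range(iters):
        for i in range(n_food):         # employed + onlooker phase (merged)
            new = candidate(i)
            if f(new) < f(foods[i]):
                foods[i], trials[i] = new, 0
            else:
                trials[i] += 1
        for i in range(n_food):         # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                trials[i] = 0
        gbest = min([gbest] + foods, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)   # standard benchmark function
best = hybrid_abc(sphere, [(-5, 5)] * 4)
```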
Martin M Gossner
Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions we suggest ethylene glycol as a suitable sampling solution when
Elad Segev
Full Text Available Finding optimal markers for microorganisms important in the medical, agricultural, environmental or ecological fields is of great importance. Thousands of complete microbial genomes now available allow us, for the first time, to exhaustively identify marker proteins for groups of microbial organisms. In this work, we model the biological task as the well-known mathematical "hitting set" problem, solving it based on both greedy and randomized approximation algorithms. We identify unique markers for 17 phenotypic and taxonomic microbial groups, including proteins related to the nitrite reductase enzyme as markers for the non-anammox nitrifying bacteria group, and two transcription regulation proteins, nusG and yhiF, as markers for the Archaea and Escherichia/Shigella taxonomic groups, respectively. Additionally, we identify marker proteins for three subtypes of pathogenic E. coli, which previously had no known optimal markers. Practically, depending on the completeness of the database this algorithm can be used for identification of marker genes for any microbial group, these marker genes may be prime candidates for the understanding of the genetic basis of the group's phenotype or to help discover novel functions which are uniquely shared among a group of microbes. We show that our method is both theoretically and practically efficient, while establishing an upper bound on its time complexity and approximation ratio; thus, it promises to remain efficient and permit the identification of marker proteins that are specific to phenotypic or taxonomic groups, even as more and more bacterial genomes are being sequenced.
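The greedy approximation mentioned in this record can be sketched in a few lines. The toy genome and marker data below are hypothetical (the protein names echo the abstract but the sets are invented); the routine itself is the textbook greedy ln(n)-approximation.

```python
def greedy_hitting_set(sets):
    """Greedy approximation for the hitting set problem: repeatedly pick
    the element that hits the most not-yet-hit sets."""
    remaining = [set(s) for s in sets if s]
    chosen = []
    while remaining:
        # Count how many remaining sets each candidate element would hit.
        counts = {}
        for s in remaining:
            for e in s:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        chosen.append(best)
        remaining = [s for s in remaining if best not in s]
    return chosen

# Toy marker-selection instance: each set lists the candidate marker
# proteins present in one genome; we want few markers hitting all genomes.
genomes = [{"nusG", "yhiF"}, {"nusG", "nirK"}, {"yhiF", "nirK"}, {"nirK"}]
markers = greedy_hitting_set(genomes)
```

The greedy choice gives the classical logarithmic approximation ratio the abstract alludes to; a randomized rounding variant would be a drop-in replacement for the selection step.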
Sweifach, Jay Stephen
2015-01-01
This article presents the results of a content analysis of MSW group work course syllabi in an effort to better understand the extent to which mutual aid and group conflict, two important dimensions of social group work, are included and featured as prominent elements in MSW-level group work instruction.
Nowcasting Unemployment Rates with Google Searches: Evidence from the Visegrad Group Countries
Pavlíček, J.; Krištoufek, Ladislav
2015-01-01
Vol. 10, No. 5 (2015), article No. e0127084. E-ISSN 1932-6203 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: Online searches * Google Trends * Unemployment Subject RIV: AH - Economics Impact factor: 3.057, year: 2015 http://library.utia.cas.cz/separaty/2015/E/kristoufek-0452321.pdf
Nicolas Rothen
Full Text Available BACKGROUND: Some studies, most of them case reports, suggest that synesthetes have an advantage in visual search and episodic memory tasks. The goal of this study was to examine this hypothesis in a group study. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we tested thirteen grapheme-color synesthetes and compared their performance on a visual search task and a memory test to an age-, handedness-, education-, and gender-matched control group. The results showed no significant group differences (all relevant ps > .50). For the visual search task, effect sizes indicated a small advantage for synesthetes (Cohen's d between .19 and .32). No such advantage was found for episodic memory (Cohen's d < .05). CONCLUSIONS/SIGNIFICANCE: The results indicate that synesthesia per se does not seem to lead to a strong performance advantage. Rather, the superior performance of synesthetes observed in some case-report studies may be due to individual differences, to a selection bias or to a strategic use of synesthesia as a mnemonic. In order to establish universal effects of synesthesia on cognition, single-case studies must be complemented by group studies.
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
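A bare-bones version of the optimizer described here, assuming a simple least-squares fitting objective in place of the paper's weighted Bayesian energy functional; the Lévy exponent, step scale and abandonment fraction are conventional defaults, not the authors' settings.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v)**(1 / beta)

def cuckoo_search(f, bounds, n=15, pa=0.25, iters=400, seed=3):
    """Minimise f with a bare-bones Cuckoo Search: Levy-flight moves scaled
    by the distance to the best nest, plus abandonment of the worst nests."""
    rng = random.Random(seed)
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    nests.sort(key=f)
    for _ in range(iters):
        best = nests[0]
        for i in range(n):
            new = [min(max(x + 0.01 * levy_step(rng) * (x - b), lo), hi)
                   for x, b, (lo, hi) in zip(nests[i], best, bounds)]
            if f(new) < f(nests[i]):        # greedy replacement
                nests[i] = new
        nests.sort(key=f)
        # Abandon the worst pa-fraction of nests and rebuild them at random.
        for i in range(int((1 - pa) * n), n):
            nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
        nests.sort(key=f)
    return nests[0]

# Toy fitting instance: recover (a, b) of y = a + b*x from sample points.
pts = [(x, 2.0 + 0.5 * x) for x in range(10)]
sse = lambda c: sum((y - (c[0] + c[1] * x))**2 for x, y in pts)
coef = cuckoo_search(sse, [(-5, 5), (-5, 5)])
```

As the abstract notes, the appeal of cuckoo search for this task is its tiny parameter count: essentially only the population size and the abandonment probability pa.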
Cant sawing log positioning in a sawmill: searching for an optimal ...
This algorithm was compared to the population-based incremental learning algorithm, the simulated annealing algorithm, and the particle swarm optimisation algorithm. The tentacle algorithm performed the best of all the algorithms evaluated in terms of the mean volume recovery obtained. However, exhaustive searches ...
Scaling Up Optimal Heuristic Search in Dec-POMDPs via Incremental Expansion (extended abstract)
Spaan, M.T.J.; Oliehoek, F.A.; Amato, C.
2011-01-01
We advance the state of the art in optimal solving of decentralized partially observable Markov decision processes (Dec-POMDPs), which provide a formal model for multiagent planning under uncertainty.
Yefimenko A. A.
2014-12-01
Full Text Available The authors present a method, an algorithm and a program designed to determine the optimal size of printed circuit boards (PCBs) for mechanical structures and different kinds of electronic equipment. The PCB filling factor is taken as the optimization criterion. The method allows one to quickly determine the dependence of the filling factor on the size of the PCB for various components.
Li, X H; Ji, J; Qian, S Y
2018-01-02
Objective: To analyze the resting energy expenditure and optimal energy supply in different age groups of critically ill children on mechanical ventilation in the pediatric intensive care unit (PICU). Methods: Patients on mechanical ventilation hospitalized in the PICU of Beijing Children's Hospital from March 2015 to March 2016 were enrolled prospectively. Resting energy expenditure was measured with the US MedGraphics company's critical care management (CCM) energy metabolism test system after mechanical ventilation. Patients were divided into three age groups: <3 years, 3-10 years and >10 years. The relationship between the measured and predictive resting energy expenditure was analyzed with correlation analysis, while the metabolism status and the optimal energy supply in the different age groups were analyzed with the chi-square test and analysis of variance. Results: A total of 102 patients were enrolled; the measured resting energy expenditure correlated with the predictive resting energy expenditure in all age groups (e.g., r=0.5 in the >10 years group). There were 40 cases in the <3 years group: 14 cases of low metabolism (35%), 14 of normal metabolism (35%) and 12 of high metabolism (30%); 45 cases in the 3-10 years group: 22 of low metabolism (49%), 19 of normal metabolism (42%) and 4 of high metabolism (9%); and 17 cases in the >10 years group: 12 of low metabolism (71%), 4 of normal metabolism (23%) and 1 of high metabolism (6%). Metabolism status differed significantly between the age groups (χ²=11.30), as did the optimal energy supply (F=46.57): (184±53) kJ/(kg⋅d) in the 3-10 years group and (120±30) kJ/(kg⋅d) in the >10 years group. Conclusion: The resting energy metabolism of critically ill children on mechanical ventilation is negatively related to age. The actual energy requirement should be calculated according to age.
Sabita Chaine
2015-05-01
Full Text Available This work presents a methodology adopted to tune the controller parameters of a superconducting magnetic energy storage (SMES) system in the automatic generation control (AGC) of a two-area thermal power system. The gains of the integral controllers of the AGC loop, the proportional controller of the SMES loop and the gains of the current feedback loop of the inductor in the SMES are optimized simultaneously in order to achieve the desired performance. The recently proposed Cuckoo Search Algorithm (CSA), an intelligent-technique-based algorithm, is applied for the optimization. Sensitivity and robustness of the tuned gains, tested at different operating conditions, prove the effectiveness of fast-acting energy storage devices like SMES in damping out oscillations in a power system when their controllers are properly tuned.
Rafael de Carvalho Miranda
2014-01-01
Full Text Available The development of discrete-event simulation software was one of the most successful interfaces between operational research and computation. As a result, research has been focused on the development of new methods and algorithms with the purpose of increasing simulation optimization efficiency and reliability. This study aims to define optimum variation intervals for each decision variable through a proposed approach which combines data envelopment analysis with Fuzzy logic (Fuzzy-DEA-BCC), seeking to improve the distinction of decision-making units in the face of uncertainty. In this study, Taguchi's orthogonal arrays were used to generate the necessary quantity of DMUs, and the output variables were generated by the simulation. Two study objects were utilized as examples of mono- and multiobjective problems. Results confirmed the reliability and applicability of the proposed method, as it enabled a significant reduction in search space and computational demand when compared to conventional simulation optimization techniques.
Tinggui Chen
2014-01-01
Full Text Available The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley, a highly eccentric ellipse, or complex multimodal landscapes. As a result, we propose an enhanced ABC algorithm called EABC, which introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results, tested on a suite of unimodal and multimodal benchmark functions, illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.
Behavioral responses in structured populations pave the way to group optimality.
Akçay, Erol; Van Cleve, Jeremy
2012-02-01
An unresolved controversy regarding social behaviors is exemplified when natural selection might lead to behaviors that maximize fitness at the social-group level but are costly at the individual level. Except for the special case of groups of clones, we do not have a general understanding of how and when group-optimal behaviors evolve, especially when the behaviors in question are flexible. To address this question, we develop a general model that integrates behavioral plasticity in social interactions with the action of natural selection in structured populations. We find that group-optimal behaviors can evolve, even without clonal groups, if individuals exhibit appropriate behavioral responses to each other's actions. The evolution of such behavioral responses, in turn, is predicated on the nature of the proximate behavioral mechanisms. We model a particular class of proximate mechanisms, prosocial preferences, and find that such preferences evolve to sustain maximum group benefit under certain levels of relatedness and certain ecological conditions. Thus, our model demonstrates the fundamental interplay between behavioral responses and relatedness in determining the course of social evolution. We also highlight the crucial role of proximate mechanisms such as prosocial preferences in the evolution of behavioral responses and in facilitating evolutionary transitions in individuality.
An optimized ultra-fine energy group structure for neutron transport calculations
Huria, Harish; Ouisloumen, Mohamed
2008-01-01
This paper describes an optimized energy group structure that was developed for neutron transport calculations in lattices using the Westinghouse lattice physics code PARAGON. The currently used 70-energy group structure results in significant discrepancies when the predictions are compared with those from the continuous energy Monte Carlo methods. The main source of the differences is the approximations employed in the resonance self-shielding methodology. This, in turn, leads to ambiguous adjustments in the resonance range cross-sections. The main goal of developing this group structure was to bypass the self-shielding methodology altogether thereby reducing the neutronic calculation errors. The proposed optimized energy mesh has 6064 points with 5877 points spanning the resonance range. The group boundaries in the resonance range were selected so that the micro group cross-sections matched reasonably well with those derived from reaction tallies of MCNP for a number of resonance absorbers of interest in reactor lattices. At the same time, however, the fast and thermal energy range boundaries were also adjusted to match the MCNP reaction rates in the relevant ranges. The resulting multi-group library was used to obtain eigenvalues for a wide variety of reactor lattice numerical benchmarks and also the Doppler reactivity defect benchmarks to establish its adequacy. (authors)
Xiaofeng Lv
2018-01-01
Full Text Available Sensor-data-based test selection optimization is the basis for designing a test scheme, which ensures that the system is tested under the constraints of conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed by considering the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model is minimizing the test cost under the constraints of FDR and FIR. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real complicated engineering systems. The experimental result verifies the effectiveness of the proposed method.
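A set-cover-style greedy heuristic (not the paper's chaotic discrete PSO) illustrates how a fault-test dependency matrix drives test selection toward full fault detection at low cost; the matrix and cost values below are toy assumptions.

```python
def select_tests(dep, costs):
    """Greedy test selection over a fault-test dependency matrix:
    dep[i][j] == 1 if test j can detect fault i. Repeatedly picks the
    test with the best (newly detected faults / cost) ratio until every
    fault is detected."""
    n_faults, n_tests = len(dep), len(costs)
    undetected = set(range(n_faults))
    chosen = []
    while undetected:
        def gain(j):
            return sum(dep[i][j] for i in undetected) / costs[j]
        j = max((j for j in range(n_tests) if j not in chosen), key=gain)
        if gain(j) == 0:
            break  # remaining faults are undetectable by any test
        chosen.append(j)
        undetected -= {i for i in undetected if dep[i][j]}
    return chosen

# Toy instance: 4 faults, 3 tests (rows = faults, columns = tests).
dep = [[1, 0, 0],
       [1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
tests = select_tests(dep, costs=[2.0, 1.0, 1.0])
```

Adding the paper's FIR/ambiguity constraint would amount to an extra term in `gain` rewarding tests that split ambiguity groups.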
Thair M. Al-Taiee
2013-05-01
Full Text Available To obtain optimal operating rules for storage reservoirs, large numbers of simulation and optimization models have been developed over the past several decades, varying significantly in their mechanisms and applications. Rule curves are guidelines for long-term reservoir operation. An efficient technique is required to find the optimal rule curves that can mitigate water shortage in long-term operation. A Genetic Algorithm (GA) technique, an optimization approach based on the mechanics of natural selection derived from the theory of natural evolution, was developed and applied to predict the daily rule curve of the Mosul regulating reservoir in Iraq. Recorded daily inflows, outflows and reservoir water levels for 19 years (1986-1990 and 1994-2007) were used in the developed model for assessing the optimal reservoir operation. The objective function is set to minimize the annual sum of squared deviations from the desired downstream release and the desired storage volume in the reservoir. The decision variables are the releases, storage volume, water level and outlet (demand) from the reservoir. The results of the GA model showed good agreement with the actual rule curve and the designed rating curve of the reservoir. The simulated results show that GA-derived policies are promising and competitive and can be effectively used for daily reservoir operation, in addition to rational monthly operation and the prediction of reservoir rating curves.
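A tiny real-coded GA with a squared-deviation objective in the spirit of this abstract: releases should track a demand target while end-of-horizon storage stays near a target, via a simple mass balance. The inflow series, demand, storage values and GA settings are invented for illustration, not Mosul reservoir data.

```python
import random

def ga_minimise(f, bounds, pop=30, gens=150, pc=0.9, pm=0.1, seed=5):
    """Tiny real-coded GA (tournament selection, blend crossover,
    Gaussian mutation, elitism) for minimisation over a box."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        Q = [min(P, key=f)]                        # elitism: keep the best
        while len(Q) < pop:
            a, b = (min(rng.sample(P, 3), key=f) for _ in range(2))
            child = a[:]
            if rng.random() < pc:                  # blend crossover
                w = rng.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for j, (lo, hi) in enumerate(bounds):  # Gaussian mutation
                if rng.random() < pm:
                    child[j] += rng.gauss(0, 0.1 * (hi - lo))
                child[j] = min(max(child[j], lo), hi)
            Q.append(child)
        P = Q
    return min(P, key=f)

# Toy rule-curve objective: releases over 6 periods should track a demand
# of 10 units while end storage (via mass balance) stays near 50 units.
inflows, demand, s0, s_target = [12, 8, 15, 9, 11, 7], 10.0, 45.0, 50.0
def cost(releases):
    s, c = s0, 0.0
    for q_in, r in zip(inflows, releases):
        s += q_in - r                  # mass balance: storage update
        c += (r - demand)**2           # deviation from desired release
    return c + (s - s_target)**2       # deviation from desired storage

best = ga_minimise(cost, [(0.0, 20.0)] * 6)
```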
Mahdad, Belkacem; Srairi, K.
2015-01-01
Highlights: • A generalized optimal security power system planning strategy for blackout risk prevention is proposed. • A Grey Wolf Optimizer dynamically coordinated with a Pattern Search algorithm is proposed. • A useful optimized database is dynamically generated, considering margin loading stability under severe faults. • The robustness and feasibility of the proposed strategy are validated on the standard IEEE 30-bus system. • The proposed planning strategy will be useful for power system protection coordination and control. - Abstract: Developing a flexible and reliable power system planning strategy under critical situations is of great importance to experts and industry to minimize the probability of blackout occurrence. This paper introduces the first stage of this practical strategy through the application of a Grey Wolf Optimizer coordinated with a pattern search algorithm for solving the security smart grid power system management under critical situations. The main objective of this proposed planning strategy is to protect the practical power system against blackout due to the appearance of faults in generating units or important transmission lines. In the first stage the system is pushed to its margin stability limit, and the critical loads to shed are selected using a voltage stability index. In the second stage the generator control variables and the reactive power of shunt and dynamic compensators are adjusted, in coordination with minimizing the active and reactive power at critical loads, to maintain the system in a secure state and ensure service continuity. The feasibility and efficiency of the proposed strategy are demonstrated on the IEEE 30-bus test system. Results are promising and prove the practical efficiency of the proposed strategy to ensure system security under critical situations
Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias
2013-06-26
There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. To investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, P<.001). The restricted accessibility to good quality resources on the Internet and the frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open
Lange, Johannes
2014-01-01
The purpose of this thesis is to improve the photon selection of the CMS Single-Photon search for Supersymmetry by using multivariate analyses. The Single-Photon search aims to find Supersymmetry (SUSY) in data taken by the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider, located at the research center CERN. SUSY is an extension of the standard model of particle physics. The search is designed for a general gauge mediation scenario, which describes gauge-mediated SUSY breaking. The analysis uses final states with jets, at least one photon and missing transverse energy. A data-driven prediction of the multijet background is performed for the analysis. For this purpose, photon candidates have to be classified into two selections. In this thesis the usage of multivariate analyses for the photon candidate classification is studied. The methods used are Fisher Discriminant, Boosted Decision Trees and Artificial Neural Networks. Their performance is evaluated with respect to different aspects impor…
Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.
2010-01-01
Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.
Harris, Adam J L; de Molière, Laura; Soh, Melinda; Hahn, Ulrike
2017-01-01
One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events: they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a 'severity effect' whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method.
GROUP-BUYING ONLINE AUCTION AND OPTIMAL INVENTORY POLICY IN UNCERTAIN MARKET
Jian CHEN; Yunhui LIU; Xiping SONG
2004-01-01
In this paper we consider a group-buying online auction (GBA) model for a monopolistic manufacturer selling novel products in an uncertain market. First, we introduce the bidder's dominant strategy, after which we jointly optimize the GBA price curve and the production volume. Finally, we compare the GBA with the traditional posted-pricing mechanism and find that the GBA is likely to be advantageous over posted pricing in appropriate market environments.
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen for their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
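The long-tailed perturbation idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' code: a pure perturb-and-accept loop stands in for full MBH (which would also run a local optimizer after each hop), and NumPy's standard Cauchy sampler supplies the heavy-tailed draws.

```python
import numpy as np

def monotonic_basin_hopping(f, x0, n_hops=5000, scale=0.1, seed=0):
    """Minimal MBH-style search: perturb the incumbent with long-tailed
    Cauchy steps and keep a candidate only if it improves the best value.
    (A full MBH would also run a local optimizer after each hop.)"""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = f(best_x)
    for _ in range(n_hops):
        # Cauchy draws are mostly small local moves, with occasional
        # very large jumps that can escape the current basin.
        cand = best_x + scale * rng.standard_cauchy(best_x.shape)
        f_cand = f(cand)
        if f_cand < best_f:  # "monotonic": never accept a worse point
            best_x, best_f = cand, f_cand
    return best_x, best_f

def rastrigin(x):  # multimodal test function, global minimum 0 at the origin
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

x_best, f_best = monotonic_basin_hopping(rastrigin, [3.0, -2.5])
```

Swapping `rng.standard_cauchy` for a uniform draw of the same scale makes the basin-escaping jumps vanish, which is exactly the contrast the abstract describes.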
Akemi Gálvez
2014-01-01
This work addresses data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check its performance, the approach has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs well, solving our minimization problem in a remarkably straightforward way.
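A bare-bones cuckoo search looks roughly as follows. This is a sketch under stated assumptions, not the paper's implementation: Lévy-distributed steps come from Mantegna's algorithm, the demo objective is a simple sphere function, and the elitism line and the 0.01 step factor are common conventions rather than values taken from the source.

```python
import math
import numpy as np

def levy_step(rng, dim, beta=1.5):
    """Mantegna's algorithm for approximately Levy-distributed step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=15, n_iter=300, pa=0.25, seed=0):
    """Bare-bones cuckoo search: Levy flights around the best nest, plus
    abandonment of a random fraction pa of nests each generation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            # Levy flight biased by the distance to the current best nest
            cand = np.clip(nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        abandon = rng.random(n_nests) < pa
        abandon[fit.argmin()] = False        # elitism: never abandon the best
        if abandon.any():
            nests[abandon] = rng.uniform(lo, hi, (abandon.sum(), dim))
            fit[abandon] = np.array([f(n) for n in nests[abandon]])
    i_best = fit.argmin()
    return nests[i_best], float(fit[i_best])

sphere = lambda x: float(np.sum(np.square(x)))
x_best, f_best = cuckoo_search(sphere, [-5.0, -5.0], [5.0, 5.0])
```

The "only two parameters" claim in the abstract refers to the nest count and the abandonment fraction `pa`; everything else above is bookkeeping.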
Optimizing the exercise prescription for depression: The search for biomarkers of response
Medina, J.L.; Jacquart, J.; Smits, J.A.J.
2015-01-01
There is growing support for the efficacy of exercise interventions for the treatment of individuals who present with mild-to-moderate depression. The variability in treatment response across studies and individuals suggests that the efficacy of exercise for depression will be most optimal when
CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization
Borges, Pedro Manuel F. C.
2000-01-01
This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...
Chen, C.-L.
2005-01-01
With the restructuring of the power industry, competitive bidding for energy and ancillary services is increasingly recognized as an important part of electricity markets. It is desirable to optimize not only the generator's bid prices for energy and for providing ancillary services but also the transmission congestion costs. In this paper, a hybrid approach combining sequential dispatch with a direct search method is developed to deal with the multi-product and multi-area electricity market dispatch problem. The hybrid direct search method (HDSM) incorporates sequential dispatch into the direct search method to facilitate economic sharing of generation and reserve across areas and to minimize the total market cost in a multi-area competitive electricity market. The effects of tie-line congestion and area spinning reserve requirements are also consistently reflected in the marginal price of each area. Numerical experiments are included to clarify the various constraints in the market cost analysis and to provide valuable information for market participants in a pool-oriented electricity market.
Korovin, Iakov S.; Tkachenko, Maxim G.
2018-03-01
In this paper we present a heuristic approach that improves the efficiency of methods used to design efficient water distribution network architectures. The essence of the approach is a search-space reduction procedure that limits the range of available pipe diameters for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow-rate limitation on each edge of the network. The first boundary scenario provides the most uniform distribution of flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small time cost. This approach was used to modify a genetic algorithm (GA). The proposed GA provides a variable number of variants of each gene, according to the number of diameters in the list, taking flow restrictions into account. The proposed approach was applied to a well-known test network, the Hanoi water distribution network [1], and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and more evident convergence in comparison with the classical version of the GA.
DAHIYA, P.
2015-05-01
This paper presents the application of a hybrid opposition-based disruption operator in the gravitational search algorithm (DOGSA) to solve the automatic generation control (AGC) problem of a four-area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition-based learning, which enhances the speed of convergence, and the disruption operator, which has the ability to further explore and exploit the search space of the standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving complex optimization problems. This paper addresses the design and performance analysis of DOGSA-based proportional integral derivative (PID) and fractional-order proportional integral derivative (FOPID) controllers for the automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition-learning-based GSA (OGSA) and disruption-based GSA (DGSA). A sensitivity analysis is also carried out to study the robustness of the DOGSA-tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, and time constants of governor and turbine. Further, the approaches are extended to a more realistic power system model by considering physical constraints such as the thermal turbine generation rate constraint, speed governor dead band and time delay.
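Opposition-based learning, one of the two ingredients combined here, can be sketched in a few lines. The helper below is hypothetical (the bounds `lo`/`hi`, population size `n` and the sphere demo are illustration choices, not taken from the paper): for each random candidate x, its "opposite" lo + hi − x is also evaluated, and the fitter half of the combined pool survives.

```python
import numpy as np

def opposition_init(f, lo, hi, n, seed=0):
    """Opposition-based initialization: for each random candidate x,
    also evaluate its 'opposite' lo + hi - x, then keep the fittest
    n of the 2n points as the starting population."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, lo.size))
    pool = np.vstack([x, lo + hi - x])   # candidates and their opposites
    fit = np.array([f(p) for p in pool])
    keep = np.argsort(fit)[:n]           # fittest half survives
    return pool[keep], fit[keep]

sphere = lambda p: float(np.sum(p**2))
pop, fits = opposition_init(sphere, [-10.0, -10.0], [10.0, 10.0], n=20)
```

Because the pool always contains each original candidate alongside its opposite, the kept population can never start worse than a plain random initialization of the same size, which is the convergence-speed advantage the abstract attributes to this component.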
Karaba Adam
2016-01-01
Steam cracking is an energy-intensive, large-scale process which transforms a wide range of hydrocarbon feedstocks into petrochemical products. The dependence of product yields on feedstock composition and reaction conditions has been successfully described by mathematical models, which are very useful tools for the optimization of cracker operation. The remaining problem is to formulate an objective function for such an optimization. A quantitative criterion based on process economics is proposed in this paper. A previously developed and verified semi-mechanistic model of industrial steam cracking is utilized as a supporting tool for the economic evaluation of selected gasoline feedstocks. The economic criterion is established as the difference between the value of products obtained by cracking the studied feedstock under given conditions and the value of products obtained by cracking a reference feedstock under reference conditions. As an example of the method's use, optimal reaction conditions were found for each of the selected feedstocks. The potential benefit of cracking feedstocks individually or in groups, in contrast to cracking at the middle of the optima, is evaluated and also compared to cracking under usual conditions.
Maskell Douglas L
2009-05-01
Background: The Smith-Waterman algorithm is one of the most widely used tools for searching biological sequence databases due to its high sensitivity. Unfortunately, the Smith-Waterman algorithm is computationally demanding, which is further compounded by the exponential growth of sequence databases. The recent emergence of many-core architectures, and their associated programming interfaces, provides an opportunity to accelerate sequence database searches using commonly available and inexpensive hardware. Findings: Our CUDASW++ implementation (benchmarked on a single-GPU NVIDIA GeForce GTX 280 graphics card and a dual-GPU GeForce GTX 295 graphics card) provides a significant performance improvement compared to other publicly available implementations, such as SWPS3, CBESW, SW-CUDA, and NCBI-BLAST. CUDASW++ supports query sequences of length up to 59K; for query sequences ranging in length from 144 to 5,478 in Swiss-Prot release 56.6, the single-GPU version achieves an average performance of 9.509 GCUPS (lowest 9.039, highest 9.660 GCUPS), and the dual-GPU version achieves an average performance of 14.484 GCUPS (lowest 10.660, highest 16.087 GCUPS). Conclusion: CUDASW++ is publicly available open-source software. It provides a significant performance improvement for Smith-Waterman-based protein sequence database searches by fully exploiting the compute capability of commonly used, CUDA-enabled low-cost GPUs.
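The core recurrence being accelerated here fits in a few lines. This sketch uses a linear gap penalty and returns only the best score; CUDASW++ itself uses affine gaps and massive GPU parallelism, so this shows the algorithm, not the implementation.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Plain O(len(a)*len(b)) Smith-Waterman local-alignment score with a
    linear gap penalty, keeping only two DP rows in memory."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            score = max(
                0,                                                # local floor
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,                                    # gap in b
                curr[j - 1] + gap,                                # gap in a
            )
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

print(smith_waterman_score("GATTACA", "GCATGCU"))
```

The `max(0, …)` floor is what makes the alignment local: a poorly matching region simply resets the score rather than dragging the whole alignment down, which is why Smith-Waterman is so sensitive (and so expensive) compared to heuristic tools like BLAST.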
Muhammad Sulaiman
2018-01-01
This paper presents the solution of directional overcurrent relay (DOCR) problems using Simulated Annealing based Symbiotic Organism Search (SASOS). The objective function of the problem is to minimize the sum of the operating times of all primary relays. The DOCR problem is nonlinear and highly constrained, with two types of decision variables, namely the time dial settings (TDS) and plug settings (PS). In this paper, three models of the problem are considered: the IEEE 3-bus, 4-bus, and 6-bus systems, respectively. We have applied SASOS to solve the problem, and the obtained results are compared with other algorithms available in the literature.
Radha, J.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
A group-arrival feedback retrial queue with k optional stages of service and an orbital search policy is studied. If an arriving group of customers finds the server free, one member of the group enters the first stage of service and the rest of the group join the orbit. After completing the i-th stage of service, the customer under service may choose the (i+1)-th stage with probability θ_i, may rejoin the orbit as a feedback customer with probability p_i, or may leave the system with probability q_i, where q_i = 1 − p_i − θ_i for i = 1, 2, …, k−1, and q_k = 1 − p_k. The busy server may break down due to the arrival of negative customers, and the service channel then fails for a short interval of time. On completion of a service or repair, the server searches for a customer in the orbit (if any) with probability α or remains idle with probability 1−α. Using the supplementary variable method, the steady-state probability generating function for the system size and some system performance measures are discussed.
Energy aware swarm optimization with intercluster search for wireless sensor network.
Thilagavathi, Shanmugasundaram; Geetha, Bhavani Gnanasambandan
2015-01-01
Wireless sensor networks (WSNs) are emerging as a low cost popular solution for many real-world challenges. The low cost ensures deployment of large sensor arrays to perform military and civilian tasks. Generally, WSNs are power constrained due to their unique deployment method, which makes replacement of the battery source difficult. Challenges in WSN include a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovery of the optimal number of clusters and cluster heads (CH). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS.
Optimal path planning for a mobile robot using cuckoo search algorithm
Mohanty, Prases K.; Parhi, Dayal R.
2016-03-01
The shortest/optimal path planning is essential for the efficient operation of autonomous vehicles. In this article, a new nature-inspired meta-heuristic algorithm has been applied to mobile robot path planning in an unknown or partially known environment populated by a variety of static obstacles. This meta-heuristic algorithm is based on the Lévy flight behaviour and brood-parasitic behaviour of cuckoos. A new objective function has been formulated over the robot, the target and the obstacles, which satisfies the conditions of obstacle avoidance and the target-seeking behaviour of the robots present in the terrain. Depending upon the objective function value of each nest (cuckoo) in the swarm, the robot avoids obstacles and proceeds towards the target. A smooth optimal trajectory is framed with this algorithm as the robot reaches its goal. Some simulation and experimental results are presented at the end of the paper to show the effectiveness of the proposed navigational controller.
Searching for optimal integer solutions to set partitioning problems using column generation
Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael
2007-01-01
We describe a new approach to produce integer-feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re…
Cécile Bordier
2017-08-01
Neuroimaging data can be represented as networks of nodes and edges that capture the topological organization of brain connectivity. Graph theory provides a general and powerful framework to study these networks and their structure at various scales. By way of example, community detection methods have been widely applied to investigate the modular structure of many natural networks, including brain functional connectivity networks. Sparsification procedures are often applied to remove the weakest edges, which are the most affected by experimental noise, and to reduce the density of the graph, thus making it theoretically and computationally more tractable. However, weak links may also contain significant structural information, and procedures to identify the optimal tradeoff are the subject of active research. Here, we explore the use of percolation analysis, a method grounded in statistical physics, to identify the optimal sparsification threshold for community detection in brain connectivity networks. By using synthetic networks endowed with a ground-truth modular structure and realistic topological features typical of human brain functional connectivity networks, we show that percolation analysis can be applied to identify the optimal sparsification threshold that maximizes information on the networks' community structure. We validate this approach using three different community detection methods widely applied to the analysis of brain connectivity networks: Newman's modularity, InfoMap and Asymptotical Surprise. Importantly, we test the effects of noise and data variability, which are critical factors in determining the optimal threshold. This data-driven method should prove particularly useful in the analysis of the community structure of brain networks in populations characterized by different connectivity strengths, such as patients and controls.
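The percolation idea can be sketched concretely for a weighted connectivity matrix: add edges from strongest to weakest and record the weight at which the network first becomes one connected component. Thresholding any higher than that weight fragments the graph. The union-find sketch below is a minimal illustration of this principle, not the authors' pipeline (which applies it to noisy functional connectivity and couples it to community detection).

```python
import numpy as np

def percolation_threshold(W):
    """Return the largest threshold t such that keeping only edges with
    weight >= t still leaves the graph fully connected."""
    n = W.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    order = np.argsort(-W[rows, cols])       # strongest edges first
    parent = list(range(n))

    def find(x):                             # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for k in order:
        i, j = rows[k], cols[k]
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == 1:              # graph just became connected
                return float(W[i, j])
    return None                              # graph never fully connects

# 4-node toy network: a chain with weights 0.9, 0.8, 0.7 plus a weak shortcut
W = np.zeros((4, 4))
for (i, j), w in {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.7, (0, 3): 0.5}.items():
    W[i, j] = W[j, i] = w
t = percolation_threshold(W)  # 0.7: the weakest edge needed for connectivity
```

Sparsifying at exactly this threshold removes as much noise-prone weight as possible while keeping every node reachable, which is the tradeoff the abstract describes.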
Searching for optimal mitigation geometries for laser resistant multilayer high reflector coatings
Qiu, S. R.; Wolfe, J. E.; Monterrosa, A. M.; Feit, M. D.; Pistor, T. V.; Stolz, C. J.
2011-02-11
Growing laser damage sites on multilayer high-reflector coatings can limit mirror performance. One strategy to improve laser damage resistance is to replace the growing damage sites with pre-designed benign mitigation structures. By mitigating the weakest site on the optic, the large-aperture mirror will have a laser resistance comparable to the intrinsic value of the multilayer coating. To determine the optimal mitigation geometry, the finite-difference time-domain (FDTD) method was used to quantify the electric-field intensification within the multilayer in the presence of different conical pits. We find that the field intensification induced by the mitigation pit is strongly dependent on the polarization and the angle of incidence (AOI) of the incoming wave; the optimal mitigation conical pit geometry is therefore application specific. Furthermore, our simulation also illustrates an alternative means of achieving an optimal mitigation structure by matching the cone angle of the structure with the AOI of the incoming wave, except for p-polarized waves at incident angles between 30° and 45°.
Fei Wang
2017-11-01
The optimal dispatching model for a stand-alone microgrid (MG) is of great importance to its operational reliability and economy. This paper aims at addressing the difficulties in improving operational economy and maintaining the power balance under uncertain load demand and renewable generation, which can be even worse in abnormal conditions such as storms or abnormally low or high temperatures. A new two-time-scale multi-objective optimization model, including day-ahead cursory scheduling and real-time scheduling for finer adjustments, is proposed to optimize the operational cost, load shedding compensation and environmental benefit of a stand-alone MG through controllable load (CL) and multiple distributed generations (DGs). The main novelty of the proposed model is that the synergetic response of CL and the energy storage system (ESS) in real-time scheduling quickly offsets the operational uncertainty, and the improved dispatch strategy for combined cooling, heating and power (CCHP) enhances system economy while comfort is guaranteed. An improved algorithm, the Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy (SIP-CO-PSO-ERS) algorithm, with strong searching capability and fast convergence speed, is presented to deal with the problem brought by the increased errors between actual renewable generation and load and their prior predictions. Four typical scenarios are designed according to combinations of day type (workday or weekend) and weather category (sunny or rainy) to verify the performance of the presented dispatch strategy. The simulation results show that the proposed two-time-scale model and SIP-CO-PSO-ERS algorithm exhibit better adaptability, convergence speed and search ability than conventional methods for stand-alone MG operation.
Double-Group Particle Swarm Optimization and Its Application in Remote Sensing Image Segmentation.
Shen, Liang; Huang, Xiaotao; Fan, Chongyi
2018-05-01
Particle Swarm Optimization (PSO) is a well-known meta-heuristic. It has been widely used in both research and engineering fields. However, the original PSO generally suffers from premature convergence, especially in multimodal problems. In this paper, we propose a double-group PSO (DG-PSO) algorithm to improve the performance. DG-PSO uses a double-group based evolution framework. The individuals are divided into two groups: an advantaged group and a disadvantaged group. The advantaged group works according to the original PSO, while two new strategies are developed for the disadvantaged group. The proposed algorithm is firstly evaluated by comparing it with the other five popular PSO variants and two state-of-the-art meta-heuristics on various benchmark functions. The results demonstrate that DG-PSO shows a remarkable performance in terms of accuracy and stability. Then, we apply DG-PSO to multilevel thresholding for remote sensing image segmentation. The results show that the proposed algorithm outperforms five other popular algorithms in meta-heuristic-based multilevel thresholding, which verifies the effectiveness of the proposed algorithm.
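The baseline that DG-PSO modifies, global-best PSO, fits in a short sketch. The parameter values below (inertia w, acceleration coefficients c1, c2) are common defaults and the sphere objective is a stand-in, not values from the paper; DG-PSO would additionally split this population into advantaged and disadvantaged groups with their own update rules.

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO: each particle is pulled toward its own
    best position (pbest) and the swarm's best position (gbest)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
        x = np.clip(x + v, lo, hi)                                 # position update
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

sphere = lambda p: float(np.sum(p**2))
x_best, f_best = pso(sphere, [-5.0, -5.0, -5.0], [5.0, 5.0, 5.0])
```

The premature-convergence problem the abstract targets is visible here: once every particle's pbest collapses near one gbest, the attraction terms shrink together and the swarm stops exploring, which is what a second, differently updated group is meant to counteract.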
Fast three-dimensional core optimization based on modified one-group model
Freire, Fernando S. (ELETROBRAS Termonuclear S.A. - ELETRONUCLEAR, Rio de Janeiro, RJ, Brazil); Martinez, Aquilino S.; Silva, Fernando C. da (COPPE/UFRJ, Rio de Janeiro, RJ, Brazil)
2009-07-01
The optimization of any nuclear reactor core is an extremely complex process that consumes a large amount of computer time. Fortunately, the nuclear designer can rely on a variety of methodologies able to approximate the analysis of each available core loading pattern. Two-dimensional codes are usually used to analyze the loading scheme. However, when particular axial effects are present in the core, two-dimensional analysis cannot produce good results and three-dimensional analysis may be required. This paper presents the major advantages of using the modified one-group diffusion theory coupled with a buckling correction model in the optimization process. The results of the proposed model are very accurate when compared to benchmark results obtained from detailed calculations using three-dimensional nodal codes.
Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group
Lianbo Deng
2014-01-01
This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using a simulated annealing algorithm. In the algorithm, the supplier of each subsidiary was selected according to the evaluation result for independent procurement. Finally, the effect of different parameters on the coordinated procurement strategy was analysed. The results showed that the coordinated strategy can clearly save procurement costs; that the strategy appears to be more cooperative when the quality requirement is less strict; and that the coordination costs have a strong effect on the coordinated procurement strategy.
In Search of Theory: The Study of "Ethnic Groups" in Developmental Psychology.
Gjerde, Per F.; Onishi, Miyoko
2000-01-01
Discusses the conceptual status and uses of ethnic groups in developmental psychology. Discusses problems with the primordialist position and the influence of nationalism in defining culture. Argues that culture and ethnicity as shared and located within a bounded population is an increasingly outmoded notion. Maintains that developmental…
Branker, Anthony Daniel John
2010-01-01
What would happen if college students involved in jazz small group performance were given the opportunity to be musically independent and self-directed while working in their own collaborative space? What sorts of things would they experience? What kind of learning space would they create for themselves? The purpose of this study was to…
Gonggui Chen
2017-01-01
The optimal power flow (OPF) is a well-known and significant optimization tool for the secure and economic operation of power systems, and the OPF problem is a complex nonlinear, nondifferentiable programming problem. This paper therefore proposes a Gbest-guided cuckoo search algorithm with a feedback control strategy and a constraint domination rule, named FCGCS, for solving the OPF problem and obtaining the optimal solution. The FCGCS algorithm is guided by the global best solution to strengthen its exploitation ability. The feedback control strategy dynamically regulates the control parameters according to specific feedback values measured during the simulation, and the constraint domination rule efficiently handles inequality constraints on state variables, which is superior to the traditional penalty function method. The performance of the FCGCS algorithm is tested and validated on the IEEE 30-bus and IEEE 57-bus example systems, and simulation results are compared with those of methods recently reported in the literature. The comparison indicates that FCGCS can provide high-quality feasible solutions for different OPF problems.
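The gbest-guided move at the core of such a cuckoo search variant can be sketched as follows. This is a generic illustration on a toy sphere objective, not the paper's FCGCS (the feedback control strategy and constraint domination rule are omitted); all names and parameter values here are assumptions.

```python
import math
import random

def sphere(x):
    # Toy objective standing in for an OPF cost function (illustration only)
    return sum(xi * xi for xi in x)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def gbest_guided_cuckoo(dim=5, n_nests=15, iters=200, pa=0.25, seed=1):
    random.seed(seed)
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=sphere)
    f0 = sphere(best)                      # initial best, for comparison
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Gbest-guided move: Levy flight biased toward the global best
            cand = [x + 0.5 * levy_step() * (b - x) for x, b in zip(nest, best)]
            if sphere(cand) < sphere(nest):    # greedy replacement
                nests[i] = cand
        # A fraction pa of nests is abandoned and rebuilt at random
        for i in range(n_nests):
            if random.random() < pa:
                nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=sphere)  # keep the best found so far
    return f0, sphere(best)
```

Because candidates are accepted greedily and the incumbent best is carried along, the best objective value is monotonically non-increasing over iterations.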
Alderdice, Fiona; Gargan, Phyl; McCall, Emma; Franck, Linda
2018-01-30
Online resources are a source of information for parents of premature babies when their baby is discharged from hospital. The aim was to explore what topics parents deemed important after returning home from hospital with their premature baby and to evaluate the quality of existing websites that provide information for parents post-discharge. In stage 1, 23 parents living in Northern Ireland participated in three focus groups and shared their information and support needs following the discharge of their infant(s). In stage 2, a World Wide Web (WWW) search was conducted using the Google, Yahoo and Bing search engines. Websites meeting pre-specified inclusion criteria were reviewed using two website assessment tools and by calculating a readability score. Website content was compared to the topics identified by parents in the focus groups. Five overarching topics were identified across the three focus groups: life at home after neonatal care, taking care of our family, taking care of our premature baby, baby's growth and development, and help with getting support and advice. Twenty-nine sites were identified that met the systematic web search inclusion criteria. Fifteen (52%) covered all five topics identified by parents to some extent and 9 (31%) provided current, accurate and relevant information based on the assessment criteria. Parents reported the need for information and support post-discharge from hospital. This was not always available to them, and relevant online resources were of varying quality. Listening to parents' needs and preferences can facilitate the development of high-quality, evidence-based, parent-centred resources. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.
The Search for Biosignatures on Mars: Using Predictive Geology to Optimize Exploration Targets
Oehler, Dorothy Z.; Allen, Carlton C.
2011-01-01
Predicting geologic context from satellite data is a method used on Earth for exploration in areas with limited ground truth. The method can be used to predict facies likely to contain organic-rich shales. Such shales concentrate and preserve organics and are major repositories of organic biosignatures on Earth [1]. Since current surface conditions on Mars are unfavorable for development of abundant life or for preservation of organic remains of past life, the chances are low of encountering organics in surface samples. Thus, focusing martian exploration on sites predicted to contain organic-rich shales would optimize the chances of discovering evidence of life, if it ever existed on that planet.
Optimal Alignment of Search and Rescue Response Posture with Historical Incident Occurrence
2014-04-01
Defence Research and Development Canada Scientific Report DRDC-RDDC-2014-R12, April 2014. © Her Majesty the Queen in Right of Canada. The report supported decision makers and was used as input to the 2014 CJOC SAR Directive; it also documents the optimization of RP30 (30-minute response posture) schedules in the Victoria and Halifax search and rescue regions.
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are a stochastic method used in multivariable designs, while the deterministic method considered uses the gradient method, which applies the sensitivity of the objective function. Each technique has benefits and drawbacks. This paper describes the characteristics of the two techniques and then evaluates an approach in which the two methods are used together. Finally, the results of a comparison are described, obtained by applying each method to electromagnetic devices. (Author)
Amaral, Larissa S; Azevedo, Eduardo B; Perussi, Janice R
2018-02-27
Antimicrobial photodynamic inactivation (a-PDI) is based on the oxidative destruction of biological molecules by reactive oxygen species generated by the photo-excitation of a photosensitive molecule. When a-PDI is performed along with mathematical models, the optimal conditions for maximum inactivation are easily found. Optimization is usually done with a univariate approach, which demands a large number of experiments and is time- and money-consuming, whereas experimental designs allow a multivariate analysis of the experimental parameters. This paper presents the use of response surface methodology to improve the search for the best conditions to reduce E. coli survival levels by a-PDI using methylene blue (MB) and toluidine blue (TB) as photosensitizers and white light. The goal was achieved by analyzing the effects and interactions of the three main parameters involved in the process: incubation time (IT), photosensitizer concentration (CPS), and light dose (LD). The optimization procedure began with a full 2^3 factorial design, followed by a central composite design, from which the optimal conditions were estimated. For MB, CPS was the most important parameter, followed by LD and IT, whereas for TB the main parameter was LD, followed by CPS and IT. Using the estimated optimal conditions for inactivation, MB was able to inactivate 99.999999% CFU mL^-1 of E. coli with an IT of 28 min, an LD of 31 J cm^-2, and a CPS of 32 μmol L^-1, while TB required 18 min, 39 J cm^-2, and 37 μmol L^-1. The feasibility of using response surface methodology with a-PDI was demonstrated, enabling enhanced photoinactivation efficiency and fast results with a minimal number of experiments. Copyright © 2018. Published by Elsevier B.V.
Coelho, Leandro dos Santos; Mariani, Viviana Cocco
2009-01-01
Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm driven by the simulation of a social-psychological metaphor instead of the survival of the fittest individual. Based on chaotic systems theory, this paper proposes a novel chaotic PSO combined with an implicit filtering (IF) local search method to solve economic dispatch problems. Since chaotic maps enjoy certainty, ergodicity, and stochastic properties, the proposed PSO introduces chaos using Henon map sequences, which increases its convergence rate and resulting precision. The chaotic PSO approach is used to produce good potential solutions, and IF is used to fine-tune the final PSO solution. The hybrid methodology is validated on a test system consisting of 13 thermal units whose incremental fuel cost function takes valve-point loading effects into account. Simulation results are promising and show the effectiveness of the proposed approach.
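The Henon-map sequence that drives such a chaotic PSO can be sketched as below, with the chaotic values standing in for the inertia weight. This is a generic illustration on a toy quadratic objective, not the paper's dispatch model or its implicit-filtering stage; the names and constants are assumptions.

```python
import random

def henon_sequence(n, x=0.1, y=0.1, a=1.4, b=0.3):
    # Henon map: x' = 1 - a*x^2 + y, y' = b*x; x stays roughly within
    # [-1.5, 1.5] on the attractor, so (x + 2) / 4 rescales into (0, 1).
    seq = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        seq.append((x + 2.0) / 4.0)
    return seq

def chaotic_pso(dim=3, swarm=10, iters=100, seed=2):
    fit = lambda p: sum(v * v for v in p)   # toy objective (illustration only)
    random.seed(seed)
    w = henon_sequence(iters)               # chaotic inertia weights
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fit)
    f0 = fit(gbest)                         # initial best, for comparison
    for t in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Standard PSO velocity update with a chaotic inertia weight
                vel[i][d] = (w[t] * vel[i][d]
                             + 2.0 * random.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fit(pos[i]) < fit(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fit)
    return f0, fit(gbest)
```

The ergodicity of the chaotic sequence is what replaces a fixed or linearly decaying inertia schedule; a local search such as IF would then polish the returned gbest.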
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2015-12-01
The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are used to measure the discriminative capability of a band subset. Finally, the band subset with the smallest number of spectral bands that covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results reveal the superiority of the proposed method and indicate that it can considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.
Parallel approach on sorting of genes in search of optimal solution.
Kumar, Pranav; Sahoo, G
2018-05-01
An important tool for comparative genome analysis is the rearrangement event, which can transform one given genome into another. We propose an algorithm for finding a minimum sequence of fissions and fusions and show a transformation example converting a source genome into a target genome. The proposed algorithm operates on a circular sequence, i.e. a "cycle graph", in place of a mapping, and its main concept is based on an optimal treatment of the permutation. The sorting steps are performed in constant running time by representing the permutation in the form of cycles. In biological instances, it has been observed that transpositions occur at about half the frequency of reversals. In this paper we do not deal with reversals; instead, we address rearrangement by fission and fusion as well as transposition. Copyright © 2017 Elsevier Inc. All rights reserved.
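The cycle representation underlying such algorithms can be sketched as below: a permutation is decomposed into disjoint cycles, the structure on which rearrangement distances are typically bounded. This is a generic illustration of the cycle idea, not the paper's fission/fusion bookkeeping; the classical fact used here is that a permutation of n elements with c cycles needs exactly n - c pairwise swaps to sort.

```python
def permutation_cycles(perm):
    """Decompose a 0-indexed permutation into its disjoint cycles."""
    seen = [False] * len(perm)
    cycles = []
    for start in range(len(perm)):
        if not seen[start]:
            cycle, i = [], start
            while not seen[i]:
                seen[i] = True
                cycle.append(i)
                i = perm[i]          # follow the permutation to close the cycle
            cycles.append(cycle)
    return cycles

def min_swaps_to_sort(perm):
    # n elements in c cycles need exactly n - c pairwise swaps to sort,
    # a lower-bound argument of the same flavor as cycle-graph bounds
    # in genome rearrangement.
    return len(perm) - len(permutation_cycles(perm))
```

For perm = [1, 2, 0, 4, 3] the cycles are [0, 1, 2] and [3, 4], so three swaps suffice (two for the 3-cycle, one for the 2-cycle).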
Perrin, Maxine; Robillard, Manon; Roy-Charland, Annie
2017-12-01
This study examined eye movements during a visual search task as well as cognitive abilities within three age groups. The aim was to explore scanning patterns across symbol grids and to better understand the impact of symbol location in AAC displays on speed and accuracy of symbol selection. For the study, 60 students were asked to locate a series of symbols on 16 cell grids. The EyeLink 1000 was used to measure eye movements, accuracy, and response time. Accuracy was high across all cells. Participants had faster response times, longer fixations, and more frequent fixations on symbols located in the middle of the grid. Group comparisons revealed significant differences for accuracy and reaction times. The Leiter-R was used to evaluate cognitive abilities. Sustained attention and cognitive flexibility scores predicted the participants' reaction time and accuracy in symbol selection. Findings suggest that symbol location within AAC devices and individuals' cognitive abilities influence the speed and accuracy of retrieving symbols.
Nutan Saha
2017-06-01
This paper presents a control scheme for simultaneously controlling the speed of a Switched Reluctance Motor (SRM) and minimizing its torque ripple, employing a Hybrid Many Optimizing Liaison Gravitational Search Algorithm (Hybrid MOLGSA) technique. The control mechanism includes two control loops, an outer loop for speed control and an inner-loop current controller, with intelligent selection of the turn-on and turn-off angles for a 60 kW, 3-phase 6/8 SRM. The torque ripple coefficient and the ISE of speed and current are reduced by 12.81%, 38.60%, and 16.74%, respectively, by the Hybrid MOLGSA algorithm compared to the Gravitational Search Algorithm (GSA). It is also observed that the settling times of the controller, using the parameter values that yield the best torque ripple and integral square error of speed and current, are reduced by 51.25%, 58.04%, and 59.375%, respectively, by the proposed Hybrid MOLGSA algorithm compared to the GSA algorithm.
Xuejun Chen
2015-01-01
Support vector regression (SVR) and neural networks (NN) are both tools from the artificial intelligence field that have been successfully exploited to solve various problems, especially time series forecasting. However, traditional SVR and NN cannot accurately describe intricate time series characterized by high volatility, nonstationarity, and nonlinearity, such as wind speed and electricity price series. This study proposes an ensemble approach based on 5-3 Hanning filter (5-3H) and wavelet denoising (WD) techniques, in conjunction with SVR and NN models tuned by artificial intelligence optimization. To confirm the validity of the proposed model, two applicative case studies are conducted: wind speed series from Gansu Province in China and electricity prices from New South Wales in Australia. The computational results reveal that cuckoo search (CS) outperforms both PSO and GA with respect to convergence and global search capacity, and that the proposed CS-based hybrid model is effective and feasible in generating more reliable and skillful forecasts.
Searching for the optimal stimulus eliciting auditory brainstem responses in humans
Fobel, Oliver; Dau, Torsten
2004-01-01
-chirp, was based on estimates of human basilar membrane (BM) group delays derived from stimulus-frequency otoacoustic emissions (SFOAE) at a sound pressure level of 40 dB [Shera and Guinan, in Recent Developments in Auditory Mechanics (2000)]. The other chirp, referred to as the A-chirp, was derived from latency...
Zumsteg, Zachary S; Chen, Zinan; Howard, Lauren E; Amling, Christopher L; Aronson, William J; Cooperberg, Matthew R; Kane, Christopher J; Terris, Martha K; Spratt, Daniel E; Sandler, Howard M; Freedland, Stephen J
2017-12-01
Prostate cancer is a heterogeneous disease, and risk stratification systems have been proposed to guide treatment decisions. However, significant heterogeneity remains for those with unfavorable-risk disease. This study included 3335 patients undergoing radical prostatectomy without adjuvant radiotherapy in the SEARCH database. High-risk patients were dichotomized into standard and very high-risk (VHR) groups based on primary Gleason pattern, percentage of positive biopsy cores (PPBC), number of NCCN high-risk factors, and stage T3b-T4 disease. Similarly, intermediate-risk prostate cancer was separated into favorable and unfavorable groups based on primary Gleason pattern, PPBC, and number of NCCN intermediate-risk factors. Median follow-up was 78 months. Patients with VHR prostate cancer had significantly worse PSA relapse-free survival (PSA-RFS, P < 0.001), distant metastasis (DM, P = 0.004), and prostate cancer-specific mortality (PCSM, P = 0.015) in comparison to standard high-risk (SHR) patients in multivariable analyses. By contrast, there was no significant difference in PSA-RFS, DM, or PCSM between SHR and unfavorable intermediate-risk (UIR) patients. Therefore, we propose a novel risk stratification system: Group 1 (low-risk), Group 2 (favorable intermediate-risk), Group 3 (UIR and SHR), and Group 4 (VHR). The c-index of this new grouping was 0.683 for PSA-RFS and 0.800 for metastases, compared to NCCN-risk groups which yield 0.666 for PSA-RFS and 0.764 for metastases. Patients classified as VHR have markedly increased rates of PSA relapse, DM, and PCSM in comparison to SHR patients, whereas UIR and SHR patients have similar prognosis. Novel therapeutic strategies are needed for patients with VHR, likely involving multimodality therapy. © 2017 Wiley Periodicals, Inc.
Colombo, Cinzia; Mosconi, Paola; Confalonieri, Paolo; Baroni, Isabella; Traversa, Silvia; Hill, Sophie J; Synnot, Anneliese J; Oprandi, Nadia; Filippini, Graziella
2014-07-24
Multiple sclerosis (MS) patients and their family members increasingly seek health information on the Internet. There has been little exploration of how MS patients integrate health information with their needs, preferences, and values for decision making. The INtegrating and Deriving Evidence, Experiences, and Preferences (IN-DEEP) project is a collaboration between Italian and Australian researchers and MS patients, aimed to make high-quality evidence accessible and meaningful to MS patients and families, developing a Web-based resource of evidence-based information starting from their information needs. The objective of this study was to analyze MS patients and their family members' experience about the Web-based health information, to evaluate how they asses this information, and how they integrate health information with personal values. We organized 6 focus groups, 3 with MS patients and 3 with family members, in the Northern, Central, and Southern parts of Italy (April-June 2011). They included 40 MS patients aged between 18 and 60, diagnosed as having MS at least 3 months earlier, and 20 family members aged 18 and over, being relatives of a person with at least a 3-months MS diagnosis. The focus groups were audio-recorded and transcribed verbatim (Atlas software, V 6.0). Data were analyzed from a conceptual point of view through a coding system. An online forum was hosted by the Italian MS society on its Web platform to widen the collection of information. Nine questions were posted covering searching behavior, use of Web-based information, truthfulness of Web information. At the end, posts were downloaded and transcribed. Information needs covered a comprehensive communication of diagnosis, prognosis, and adverse events of treatments, MS causes or risk factors, new drugs, practical, and lifestyle-related information. The Internet is considered useful by MS patients, however, at the beginning or in a later stage of the disease a refusal to actively search
Borojovich, Eitan J C; Münster, Meshulam; Rafailov, Gennady; Porat, Ze'ev
2010-07-01
Precipitation of struvite (MgNH4PO4) is a known process for purification of wastewater from high concentrations of ammonium. The optimal conditions for precipitation are basic pH (around 9) and sufficient concentrations of magnesium and phosphate ions. In this work, we accomplished efficient precipitation of ammonium from concentrated industrial waste stream by using magnesium oxide (MgO) both as a source of magnesium ions and as a base. Best results were obtained with technical-grade MgO, which provided 99% removal of ammonium. Moreover, ammonium removal occurred already at pH 7, and the residual ammonium concentration (50 mg/L) remained constant upon addition of more MgO without rising again, as occurs with sodium hydroxide (NaOH). This process may have two other advantages; it also can be relevant for the problem of uncontrolled precipitation of struvite in the supernatant of anaerobic sludge treatment plants, and the precipitate can be used as a fertilizer.
Sensorimotor modulation of mood and depression: In search of an optimal mode of stimulation
RESIT eCANBEYLI
2013-07-01
Depression involves a dysfunction in an affective fronto-limbic circuitry including the prefrontal cortices, several limbic structures including the cingulate cortex, the amygdala and the hippocampus, as well as the basal ganglia. A major emphasis of research on the etiology and treatment of mood disorders has been to assess the impact of centrally generated (top-down) processes impacting the affective fronto-limbic circuitry. The present review shows that peripheral (bottom-up) unipolar stimulation via the visual and auditory modalities, as well as by physical exercise, modulates mood and depressive symptoms in humans and animals and activates the same central affective neurocircuitry involved in depression. It is proposed that the amygdala serves as a gateway, articulating mood-regulatory sensorimotor stimulation with the central affective circuitry by emotionally labeling such events and mediating their storage in long-term memory. Since both amelioration and aggravation of mood are shown to be possible by unipolar stimulation, the review suggests that a psychophysical assessment of mood modulation by multi-modal stimulation may uncover mood-ameliorative synergisms and serve as adjunctive treatment for depression. Thus, this integrative review not only emphasizes the relevance of investigating the optimal levels of mood-regulatory sensorimotor stimulation, but also provides a conceptual springboard for related future research.
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
A search for extragalactic pulsars in the local group galaxies IC 10 and Barnard’s galaxy
Al Noori, H; Roberts, M S E; Champion, D; McLaughlin, M; Ransom, Scott; Ray, P S
2017-01-01
As of today, more than 2500 pulsars have been found, nearly all in the Milky Way, with the exception of ∼28 pulsars in the Small and Large Magellanic Clouds. However, there have been few published attempts to search for pulsars deeper in our Galactic neighborhood. Two of the more promising Local Group galaxies are IC 10 and NGC 6822 (also known as Barnard's Galaxy), due to their relatively high star formation rates and their proximity to our galaxy. IC 10, in particular, holds promise as it is the closest starburst galaxy to us and harbors an unusually high number of Wolf-Rayet stars, implying the presence of many neutron stars. We observed IC 10 and NGC 6822 at 820 MHz with the Green Bank Telescope for ∼15 and 5 hours, respectively, and place a strong upper limit of 0.1 mJy on pulsars in either of the two galaxies. We also performed single pulse searches of both galaxies, with no firm detections. (paper)
Julie-Éléonore Maisonhaute
2011-12-01
Many studies of agricultural landscapes demonstrate that numerous arthropod species are influenced by landscape structure. In particular, non-crop areas and landscape diversity are often associated with a higher abundance and diversity of natural enemies in fields. Numerous studies have focused on the influence of landscape structure on ground beetles, spiders and ladybeetles, but few on other natural enemies or different functional groups. Thus, the objective of the present study was to determine the influence of landscape structure on the functional groups of aphidophagous guilds, i.e., active-searching predators, furtive predators and parasitoids. Natural enemies were sampled on milkweed infested with aphids, growing along the borders of ditches adjacent to cornfields. Sampling occurred weekly from June to September in 2006 and 2007 in the region of Lanaudière (Quebec, Canada). The landscapes within radii of 200 and 500 m around each site were analyzed. The abundance, richness and species composition (based on functional groups) of natural enemies were related to landscape structure. The results indicated that landscape structure explained up to 21.6% of the variation in natural enemy assemblage and confirm the positive effects of non-crop areas and landscape diversity. A lower influence of landscape structure on species composition was observed (6.4 to 8.8%), and it varied greatly among the functional groups. Coccinellidae and furtive predators were the groups most influenced by landscape structure. In conclusion, the influence of landscape varied greatly among the different species of the same functional group.
Niki, Yuichiro; Ogawa, Mikako; Makiura, Rie; Magata, Yasuhiro; Kojima, Chie
2015-11-01
The detection of the sentinel lymph node (SLN), the first lymph node draining tumor cells, is important in cancer diagnosis and therapy. Dendrimers are synthetic macromolecules with highly controllable structures and are potent multifunctional imaging agents. In this study, 12 types of dendrimer of different generations (G2, G4, G6, and G8) and different terminal groups (amino, carboxyl, and acetyl) were prepared to determine the optimal dendrimer structure for SLN imaging. Radiolabeled dendrimers were intradermally administered to the right footpads of rats. All G2 dendrimers accumulated predominantly in the kidney. Amino-terminal, acetyl-terminal, and carboxyl-terminal dendrimers of generation G4 or higher were mostly located at the injection site, in the blood, and in the SLN, respectively. The carboxyl-terminal dendrimers were largely unrecognized by macrophages and T-cells in the SLN. Finally, SLN detection was successfully performed by single photon emission computed tomography imaging using carboxyl-terminal dendrimers of generation G4 or higher. The early detection of tumor cells in the sentinel draining lymph nodes is of utmost importance in determining cancer prognosis and devising treatment. In this article, the authors investigated various formulations of dendrimers to determine the optimal one for tumor detection. The data generated from this study would help clinicians fight cancer in the near future. Copyright © 2015 Elsevier Inc. All rights reserved.
U.A. Bukov
2013-06-01
Purpose: to identify the effectiveness of innovative approaches in the physical education teaching process for special medical group students. The study involved 15 boys aged 13-14 years. The lessons included exercises consisting of elements of Pilates, yoga and static body-oriented therapy. The proposed program of physical exercises was performed by students in the main part of the lesson and took up to 80% of its time. Increases were found in the functionality of the skeletal muscles, the adaptive capacity of the cardio-respiratory system, and the health and strength of the nervous system, along with optimization of anthropometric indices and improved spinal mobility. A high degree of efficiency of the innovation, as a general preventive and therapeutic intervention in the learning process, was identified. It is proposed to use modern methods of prevention and correction in the educational process.
Study and analysis of how to optimize marketing & sales of a b-to-b company for the target group
Kauppila, Kasper
2014-01-01
This thesis is a case study of Green Fortune Plantwall Oy. The objective is to study and analyze how to optimize the current marketing and sales strategy of a small b-to-b company for its target group (i.e. architects and interior designers).
Amsler, Felix; Willenberg, Torsten; Blättler, Werner
2009-09-01
In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCTs) comparing compression systems based on stockings (MCS) with diverse bandages. RCTs were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. Eight RCTs (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178 per study, mean age 61 years, 56% women). Analyzed were 688 ulcerated legs, present for 1 week to 9 years, sizing 1 to 210 cm². The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and the bandage group in one. Data on the pressure exerted by stockings and bandages were reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was greater with stockings than with bandages (62.7% vs 46.6%), and no study found bandages better than MCS. Pain was assessed in three studies (219 patients), revealing an important advantage of stockings. Overall, compression with stockings is superior to compression with bandages, has a positive impact on pain, and is easier to use.
Jatzeck, Bernhard Michael
2000-10-01
The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTG's), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta, Victoria, British Columbia, and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power/area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTG's, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was 6873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost 2913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a battery constraint being violated. Additional testing indicated that increasing the generator
The Role of Internal Audit in Optimization of Corporate Governance at the Groups of Companies
Ionel BOSTAN
2010-02-01
categories of users. The role of internal audit of the company, considering the influences of management control, assumes primary importance in the corporate governance sphere. This is also the reason why, in the second part of the paper, the authors propose a model of optimal risk management in listed and unlisted companies, based on a model of optimal corporate governance at the level of groups of enterprises, focusing on the fundamental role of the audit.
Offensive Strategy in the 2D Soccer Simulation League Using Multi-Group Ant Colony Optimization
Shengbing Chen
2016-02-01
The 2D soccer simulation league is one of the best test beds for research in artificial intelligence (AI). It has achieved great successes in the domain of multi-agent cooperation and machine learning. However, the problem of an integral offensive strategy has not been solved because of the dynamic and unpredictable nature of the environment. In this paper, we present a novel offensive strategy based on multi-group ant colony optimization (MACO-OS). The strategy uses the pheromone evaporation mechanism to compute the preference value of each attack action in different environments, and saves the success rate and preference values in an attack information tree in the background. The decision module of the attacker then selects the best attack action according to the preference value. The MACO-OS approach has been successfully implemented in our 2D soccer simulation team in RoboCup competitions. The experimental results indicate that the agents developed with this strategy, along with related techniques, delivered outstanding performances.
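The pheromone-evaporation bookkeeping described above can be sketched as a preference-table update. This is a minimal illustration of the mechanism, not the team's actual attack information tree; the action names, evaporation rate, and reward scaling are assumptions.

```python
def update_preference(pref, action, reward, rho=0.1):
    """Evaporate every preference value, then deposit reward on the action taken.

    pref   -- dict mapping attack action -> preference value
    action -- the action just executed
    reward -- deposit for the chosen action (assumed scaled to [0, 1])
    rho    -- evaporation rate in (0, 1)
    """
    for a in pref:
        pref[a] *= (1.0 - rho)      # all preferences decay over time
    pref[action] += reward          # successful actions are reinforced
    return pref

def best_action(pref):
    # The decision module picks the attack action with the highest preference.
    return max(pref, key=pref.get)

# Example: a successful 'shoot' overtakes the previously preferred 'pass'
prefs = {"pass": 1.0, "dribble": 0.4, "shoot": 0.5}
update_preference(prefs, "shoot", 0.8)
```

Evaporation keeps stale preferences from dominating, so the table tracks which attack actions currently work in the observed environment.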
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various kinds of information (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
An improved search for elementary particles with fractional electric charge
Lee, E.R.
1996-08-01
The SLAC Quark Search Group has demonstrated successful operation of a low-cost, high-mass-throughput Millikan apparatus designed to search for fractionally charged particles. About six million silicone oil drops were measured with no evidence of fractional charges. A second experiment is under construction with 100 times greater throughput, which will utilize optimized search fluids.
Martin del Campo M, C.; Francois L, J.L. [Facultad de Ingenieria, UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico); Palomera P, M.A. [Facultad de Ingenieria, UNAM, Posgrado en Ingenieria en Computacion, Circuito exterior s/n, Ciudad Universitaria, Mexico, D.F. (Mexico)]. e-mail: cmcm@fi-b.unam.mx
2004-07-01
The advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and to reduce the time used in the optimization process. An algorithm was programmed to create the first solution. Also, to diversify the generation of random numbers required by the TS technique, the Makoto Matsumoto function was used, obtaining excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described and an application example is presented for the design of a cell of 10x10 fuel bars with 10 different enrichment compositions and gadolinium content. (Author)
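The system described above couples TS to the HELIOS simulator; as a self-contained illustration of the underlying Tabu Search loop (tabu list, best-admissible move, aspiration criterion), the sketch below minimizes a toy objective over binary patterns. The objective and tenure are assumptions, not the cell-design model:

```python
import random

def tabu_search(f, n, iters=100, tenure=5, seed=0):
    """Minimize f over length-n binary vectors with a simple Tabu Search:
    evaluate all single-bit flips, take the best admissible one, and forbid
    reversing a flip for `tenure` iterations (unless it beats the global best)."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = current[:], f(current)
    tabu = {}  # position -> last iteration at which the move is still tabu
    for it in range(iters):
        moves = []
        for i in range(n):
            cand = current[:]
            cand[i] ^= 1
            val = f(cand)
            # aspiration: allow a tabu move if it beats the global best
            if tabu.get(i, -1) < it or val < best_val:
                moves.append((val, i, cand))
        val, i, cand = min(moves)      # best admissible move, even if worsening
        current = cand
        tabu[i] = it + tenure          # forbid reversing this flip for a while
        if val < best_val:
            best, best_val = cand[:], val
    return best, best_val

# toy objective: Hamming distance to a hidden target pattern
target = [1, 0, 1, 1, 0, 0, 1, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
sol, val = tabu_search(f, len(target))
```

Accepting the best admissible move even when it worsens the objective is what lets TS escape local optima, while the tabu list prevents immediate cycling back.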
Z. Masomi Zohrabad
2016-12-01
Full Text Available Power networks continue to grow following the annual growth of energy demand. As constructing new energy generation facilities bears a high cost, minimizing power grid losses becomes essential to permit low-cost energy transmission over larger distances and additional areas. This study models an optimization problem for an IEEE 30-bus power grid using a Tabu search algorithm based on an improved hybrid Harmony Search (HS) method to reduce overall grid losses. The proposed algorithm is applied to find the best location for the installation of a Unified Power Flow Controller (UPFC). The results obtained from installation of the UPFC in the grid are presented.
Salmon, BP
2012-07-01
Full Text Available A SEARCH ALGORITHM TO META... the spectral bands separately and introduced a meta-optimization method for the EKF that will be called the Bias Variance Equilibrium Point (BVEP) in this paper. The objective of this paper is to introduce an unsupervised search algorithm called the Bias...
Jie-Sheng Wang
2015-01-01
Full Text Available For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has slow convergence velocity and easily falls into local optima. So in this paper, the velocity vector and position vector of GSA are adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process.
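The velocity adjustment the abstract describes (GSA's gravitational acceleration combined with a PSO-style pull toward the global best) can be sketched as follows; the constants, the gravity decay schedule, and the sphere test function are assumptions, not the paper's flotation model:

```python
import math
import random

def psogsa_step(positions, velocities, fitness, gbest, t, T,
                w=0.6, c1=0.5, c2=1.5, G0=1.0, rng=random):
    """One in-place step of a PSO-adjusted GSA (minimization): gravitational
    acceleration from mass-weighted neighbors plus a pull toward gbest."""
    n, dim = len(positions), len(positions[0])
    G = G0 * math.exp(-20 * t / T)                 # decaying gravity constant
    worst, best = max(fitness), min(fitness)
    masses = [(worst - f) / (worst - best + 1e-12) + 1e-12 for f in fitness]
    total = sum(masses)
    masses = [m / total for m in masses]
    for i in range(n):
        acc = [0.0] * dim
        for j in range(n):
            if i == j:
                continue
            dist = math.dist(positions[i], positions[j]) + 1e-12
            for d in range(dim):
                acc[d] += rng.random() * G * masses[j] * (
                    positions[j][d] - positions[i][d]) / dist
        for d in range(dim):
            # GSA acceleration term plus PSO-style attraction to the global best
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * rng.random() * acc[d]
                                + c2 * rng.random() * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]

# usage on a toy sphere function
rng = random.Random(3)
pos = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in range(20)]
sphere = lambda x: sum(v * v for v in x)
start_best = min(sphere(p) for p in pos)
for t in range(60):
    fit = [sphere(p) for p in pos]
    g = min(pos, key=sphere)
    psogsa_step(pos, vel, fit, list(g), t, 60, rng=rng)
best = min(sphere(p) for p in pos)
```

The gravity term dominates early for exploration; as G decays, the gbest term takes over and accelerates convergence, which is the motivation the abstract gives for the hybrid.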
Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review
Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin
2015-12-01
A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables, the e+e- annihilation ratio R(e+e-) and the decay width Γ(H → bb̄), up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on
Jie-sheng Wang
2014-01-01
Full Text Available For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy.
Ruggeri, G.; Ergin, G.; Modini, R. L.; Takahama, S.
2013-12-01
The SOAS campaign was conducted from June 1 to July 15 of 2013 in order to understand the relationship between biogenic and anthropogenic emissions in the South East US [1,2]. In this study, the organic and inorganic composition of submicron aerosol at the Centreville SEARCH site was measured by Fourier Transform Infrared Spectroscopy (FTIR) and the Ambient Ion Monitor (AIM; URG Corporation), whereas the aerosol water content was measured with a Dry Ambient Aerosol Size Spectrometer (DAASS) [3]. Organic functional group analysis was performed on PM1 aerosol selected by cyclone and collected on teflon filters with a time resolution of 4-12 hours, using one inlet heated to 50 °C and the other operated either at ambient temperature or 70 °C [4]. The AIM measured both condensed and gas phase composition with a time resolution of 1 hour, providing partitioning behavior of inorganic species such as NH3/NH4+, HNO3/NO3-. These measurements collectively permit calculation of pure-component vapor pressures of candidate organic compounds and activity coefficients of interacting components in the condensed phase, using models such as SIMPOL.1 [5], E-AIM [6], and AIOMFAC [7]. From these results, the water content of the aerosol is predicted, and a comparison between modeled and measured partitioning of inorganic compounds and water vapor is discussed, in addition to organic aerosol volatility prediction based on functional group analysis. [1] Goldstein, A.H., et al., Biogenic carbon and anthropogenic pollutants combine to form a cooling haze over the southeastern United States. Proceedings of the National Academy of Sciences of the United States of America, 2009. 106(22), 8835-8840. [2] Carlton, A.G., Turpin, B.J., 2013. Particle partitioning potential of organic compounds is highest in the Eastern US and driven by anthropogenic water. Atmospheric Chemistry and Physics Discussions 13, 12743-12770. [3] Khlystov, A., Stanier, C.O., Takahama, S., Pandis, S.N., 2005. Water content of ambient
Spirin, N.V.; Kuznetsov, M.; Kiseleva, Y.; Spirin, Y.V.; Izhutov, P.A.
2015-01-01
Sorting tuples by an attribute value is a common search scenario and many search engines support such capabilities, e.g. price-based sorting in e-commerce, time-based sorting on a job or social media website. However, sorting purely by the attribute value might lead to poor user experience because
Ayse T. Daloglu; Musa Artar; Korhan Ozgan; Ali İ. Karakas
2018-01-01
Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from the literature is investigated according to four different bracing types for the cases with/without soil-structure interaction. X, V, Z, and eccentric V-shaped...
Hummel, Hans; Geerts, Walter; Slootmaker, Aad; Kuipers, Derek; Westera, Wim
2014-01-01
Serious games can facilitate workplace learning, for instance when collaboration on solving professional problems is involved. The optimal structure in collaboration scripts for such games has appeared to be a key success factor. Free collaboration does not systematically produce effective
Webb, G.A.M.
1989-01-01
In 1984 the International Commission on Radiological Protection established a task group to report on the optimization of protection. This paper outlines the current state of work of the task group, with particular emphasis on the development of various techniques to assist with optimization analyses. It is shown that these quantitative techniques fit within the concept of optimization as a structured approach to problems, and that the appropriate technique depends on the level of complexity of the problem. This approach is illustrated by applying a range of different techniques to the same example problem. Finally, some comments are made on the application of the procedure, noting the importance of identifying responsibilities, from those of individuals to those of competent authorities.
Enticott, Joanne; Buck, Kimberly; Shawyer, Frances
2018-03-01
There is a lack of information on how to execute effective searches of the grey literature on refugee and asylum seeker groups for inclusion in systematic reviews. High-quality government reports and other grey literature relevant to refugees may not always be identified in conventional literature searches. During the process of conducting a recent systematic review, we developed a novel strategy for systematically searching international refugee and asylum seeker-related grey literature. The approach targets governmental health departments and statistical agencies, who have considerable access to refugee and asylum seeker populations for research purposes but typically do not publish findings in academic forums. Compared to a conventional grey literature search strategy, our novel technique yielded an eightfold increase in relevant high-quality grey sources that provided valuable content in informing our review. Incorporating a search of the grey literature into systematic reviews of refugee and asylum seeker research is essential to providing a more complete view of the evidence. Our novel strategy offers a practical and feasible method of conducting systematic grey literature searches that may be adaptable to a range of research questions, contexts, and resource constraints. Copyright © 2017 John Wiley & Sons, Ltd.
Chengfen Zhang
2015-01-01
Full Text Available Dry-type air-core reactors are now widely applied in electrical power distribution systems, for which optimization design is a crucial issue. In the optimization design problem of the dry-type air-core reactor, the objectives of minimizing the production cost and minimizing the operation cost are both important. In this paper, a multi-objective optimization model is established considering simultaneously the two objectives of minimizing the production cost and minimizing the operation cost. To solve the multi-objective optimization problem, a memetic evolutionary algorithm is proposed, which combines the elitist nondominated sorting genetic algorithm version II (NSGA-II) with a local search strategy based on the covariance matrix adaptation evolution strategy (CMA-ES). NSGA-II can provide the decision maker with flexible choices among the different trade-off solutions, while the local search strategy, applied in each generation to a given number of nondominated individuals randomly selected from the current population, can accelerate the convergence speed. Furthermore, an external archive is maintained in the proposed algorithm to increase the evolutionary efficiency. The proposed algorithm is tested on a dry-type air-core reactor made of rectangular cross-section litz wire. Simulation results show that the proposed algorithm has high efficiency and it converges to a better Pareto front.
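The nondominated sorting at the heart of NSGA-II rests on the Pareto-dominance test; a minimal sketch for two cost objectives (both minimized), with illustrative design points rather than actual reactor designs:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better
    in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Return the first (best) Pareto front: points dominated by no other."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# hypothetical (production cost, operation cost) pairs
designs = [(10.0, 8.0), (12.0, 5.0), (11.0, 9.0), (9.0, 9.5), (13.0, 4.0)]
front = nondominated_front(designs)
```

Here (11.0, 9.0) is excluded because (10.0, 8.0) is better in both objectives; the remaining points are mutual trade-offs, which is exactly the set NSGA-II presents to the decision maker.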
Oostvogels, R; Uniken Venema, S M; de Witte, M; Raymakers, R; Kuball, J; Kröger, N; Minnema, M C
2017-09-01
Allogeneic stem cell transplantation (allo-SCT) has the potential to induce sustained remissions in patients with multiple myeloma (MM). Currently, allo-SCT is primarily performed in high-risk MM patients, most often in the setting of early relapse after first-line therapy with autologous SCT. However, the implementation of allo-SCT for MM is jeopardized by high treatment-related mortality (TRM) rates as well as high relapse rates. In this systematic review, we aimed to identify a safe allo-SCT strategy that has optimal 1-year results regarding mortality, relapse and severe GvHD, creating opportunities for post-transplantation strategies to maintain remissions in the high-risk group of relapsed MM patients. Eleven studies were included. Median PFS ranged from 5.2 to 36.8 months and OS from 13.0 to 63.0 months. The relapse-related mortality at 1 year varied between 0 and 50% and TRM between 8 and 40%. The lowest GvHD incidences were reported for conditioning regimens with T-cell depletion using ATG or graft CD34+ selection. Similar strategies could lay the foundation for a post-transplant immune platform; this should be further evaluated in prospective clinical trials.
Kalderstam, Jonas; Edén, Patrik; Ohlsson, Mattias
2015-01-01
We investigate a new method to place patients into risk groups in censored survival data. Properties such as median survival time, and end survival rate, are implicitly improved by optimizing the area under the survival curve. Artificial neural networks (ANN) are trained to either maximize or minimize this area using a genetic algorithm, and combined into an ensemble to predict one of low, intermediate, or high risk groups. Estimated patient risk can influence treatment choices, and is important for study stratification. A common approach is to sort the patients according to a prognostic index and then group them along the quartile limits. The Cox proportional hazards model (Cox) is one example of this approach. Another method of doing risk grouping is recursive partitioning (Rpart), which constructs a decision tree where each branch point maximizes the statistical separation between the groups. ANN, Cox, and Rpart are compared on five publicly available data sets with varying properties. Cross-validation, as well as separate test sets, are used to validate the models. Results on the test sets show comparable performance, except for the smallest data set where Rpart's predicted risk groups turn out to be inverted, an example of crossing survival curves. Cross-validation shows that all three models exhibit crossing of some survival curves on this small data set but that the ANN model manages the best separation of groups in terms of median survival time before such crossings. The conclusion is that optimizing the area under the survival curve is a viable approach to identify risk groups. Training ANNs to optimize this area combines two key strengths from both prognostic indices and Rpart. First, a desired minimum group size can be specified, as for a prognostic index. Second, the ability to utilize non-linear effects among the covariates, which Rpart is also able to do.
Hummel, Hans; Geerts, Walter; Slootmaker, Aad; Kuipers, Derek; Westera, Wim
2014-01-01
The optimal structure in collaboration scripts for serious games has appeared to be a key success factor. In this study we compare a ‘high- structured’ and ‘low-structured’ version of a mastership game where teachers-in-training discuss solutions on classroom dilemmas. We collected data on the
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-01-01
In the field of computer-aided mammographic mass detection, many different features and classifiers have been tested. Frequently, the relevant features and optimal topology for the artificial neural network (ANN)-based approaches at the classification stage are unknown, and thus determined by trial-and-error experiments. In this study, we analyzed a classifier that evolves ANNs using genetic algorithms (GAs), which combines feature selection with the learning task. The classifier named "Phased Searching with NEAT in a Time-Scaled Framework" was analyzed using a dataset with 800 malignant and 800 normal tissue regions in a 10-fold cross-validation framework. The classification performance measured by the area under a receiver operating characteristic (ROC) curve was 0.856 ± 0.029. The result was also compared with four other well-established classifiers that include fixed-topology ANNs, support vector machines (SVMs), linear discriminant analysis (LDA), and bagged decision trees. The results show that Phased Searching outperformed the LDA and bagged decision tree classifiers, and was only significantly outperformed by SVM. Furthermore, the Phased Searching method required fewer features and discarded superfluous structure or topology, thus incurring lower feature computation and training/validation time requirements. Analyses performed on the network complexities evolved by Phased Searching indicate that it can evolve optimal network topologies based on its complexification and simplification parameter selection process. From the results, the study also concluded that the three classifiers (SVM, fixed-topology ANN, and Phased Searching with NeuroEvolution of Augmenting Topologies (NEAT) in a Time-Scaled Framework) are performing comparably well in our mammographic mass detection scheme.
Martin del Campo, C.; Francois, J.L. [Laboratorio de Analisis en Ingenieria de Reactores Nucleares, FI-UNAM, Paseo Cuauhnahuac 8532, Jiutepec, Morelos (Mexico)
2003-07-01
The development of an algorithm for the axial optimization of boiling water reactor (BWR) fuel is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization stages advance, the fineness of the evaluation of the investigated designs is increased. The algorithm consists of three stages: in the first, Genetic Algorithms are used, and in the following two, Tabu Search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), having as constraints the average enrichment obtained for the best design in the first stage and the other restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of the fuel assemblies. (Author)
A Direct Algorithm Maple Package of One-Dimensional Optimal System for Group Invariant Solutions
Zhang, Lin; Han, Zhong; Chen, Yong
2018-01-01
To construct the one-dimensional optimal system of a finite dimensional Lie algebra automatically, we develop a new Maple package, One Optimal System. Meanwhile, we propose a new method to calculate the adjoint transformation matrix and find all the invariants of the Lie algebra in spite of the Killing form, checking possible constraints of each classification. Besides, a new conception called the invariance set is introduced. Moreover, this Maple package is proved to be more efficient and precise than previous approaches by applying it to some classic examples. Supported by the Global Change Research Program of China under Grant No. 2015CB95390, National Natural Science Foundation of China under Grant Nos. 11675054 and 11435005, and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in
Optimal Control of a Wind Farm Group Using the WindEx System
Piotr Kacejko
2014-09-01
Full Text Available The aim of this paper is to present achievements obtained in implementing the framework project N R01 0021 06 in the Power System Department of Lublin University of Technology. The result of the work was "A system of optimal wind farm power control in the conditions of limited transmission capabilities of power networks", one of whose two main modules is a state estimator. The featured wind farm control system was integrated with the WindEx SCADA dispatcher system using the WebSVC service.
M. AKBARI
2013-12-01
Full Text Available Energy group structure has a significant effect on the results of multigroup transport calculations. UO2–PuO2 (MOX) is a recently developed fuel which consumes recycled plutonium. For such fuel, which contains various resonant nuclides, the selection of the energy group structure is more crucial compared to UO2 fuels. In this paper, in order to improve the accuracy of the integral results in MOX thermal lattices calculated by the WIMSD-5B code, a swarm intelligence method is employed to optimize the energy group structure of the WIMS library. In this process, the NJOY code system is used to generate the 69-group cross sections of the WIMS code for the specified energy structure. In addition, the multiplication factor and spectral indices are compared against the results of the continuous-energy MCNP-4C code for evaluating the energy group structure. Calculations performed on four different types of H2O-moderated UO2–PuO2 (MOX) lattices show that the optimized energy structure obtains more accurate results in comparison with the WIMS original structure.
Optimization of multi-group cross sections for fast reactor analysis
Chin, M. R.; Manalo, K. L.; Edgar, C. A.; Paul, J. N.; Molinar, M. P.; Redd, E. M.; Yi, C.; Sjoden, G. E.
2013-01-01
The selection of the number of broad energy groups, collapsed broad energy group boundaries, and their associated evaluation into collapsed macroscopic cross sections from a general 238-group ENDF/B-VII library dramatically impacted the k eigenvalue for fast reactor analysis. An analysis was undertaken to assess the minimum number of energy groups that would preserve problem physics; this involved studies using the 3D deterministic transport parallel code PENTRAN, the 2D deterministic transport code SCALE6.1, the Monte Carlo based MCNP5 code, and the YGROUP cross section collapsing tool on a spatially discretized MOX fuel pin comprised of 21% PuO2-UO2 with sodium coolant. The various cases resulted in a few hundred pcm difference between the 238 multi-group reference, cross sections rendered by the YGROUP tool using various reaction- and adjoint-weighted schemes, and a reference continuous energy MCNP case. Particular emphasis was placed on the higher energies characteristic of fission neutrons in a fast spectrum; adjoint computations were performed to determine the average per-group adjoint fission importance for the MOX fuel pin. This study concluded that at least 10 energy groups for neutron transport calculations are required to accurately predict the eigenvalue for a fast reactor system to within 250 pcm of the 238-group case. In addition, the cross section collapsing/weighting scheme within YGROUP that provided a collapsed library rendering eigenvalues closest to the reference was the contribution-collapsed, reaction-rate-weighted scheme. A brief analysis on homogenization of the MOX fuel pin is also provided, although more work is in progress in this area. (authors)
Salmon
2012-07-01
Full Text Available IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22-27 July 2012. A search algorithm to meta-optimize the parameters for an extended Kalman filter to improve classification on hyper-temporal images. B.P. Salmon, ...
Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-08-27
The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
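The readability scores used alongside the WQA tool follow standard formulas: Flesch Reading Ease = 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word), and Flesch-Kincaid Grade Level = 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. A small sketch using a rough vowel-group syllable heuristic (real implementations use dictionary lookups, so treat the counts as approximate):

```python
import re

def count_syllables(word):
    """Approximate syllables as the number of contiguous vowel groups."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences        # words per sentence
    spw = syllables / len(words)        # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

fre, fkgl = readability("The cat sat on the mat. It was happy there.")
```

Short sentences of mostly one-syllable words score high on Reading Ease (easy) and low on Grade Level, which is the direction the study associates with better-performing health information websites.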
Report of the advisory group meeting on optimal use of accelerator-based neutron generators
1998-01-01
During the past 20 to 25 years, the IAEA has provided a number of laboratories in the developing member states with neutron generators. These neutron generators were originally supplied for the primary purpose of neutron activation analysis. In order to promote the optimal use of these machines, a meeting was held in 1996, resulting in a technical document manual for the upgrading and troubleshooting of neutron generators. The present meeting is a follow-up to that earlier meeting. There are several reasons why some neutron generators are not fully utilized. These include lack of infrastructure, such as an appropriate shielded building and loss of adequately trained technical and academic personnel. Much of the equipment is old and lacking spare parts, and in a few cases there is a critical lack of locally available knowledge and experience in accelerator technology. The report contains recommendations for dealing with these obstacles
Aspelund, Audun
2012-01-01
Process Synthesis (PS) is a term used to describe a class of general and systematic methods for the conceptual design of processing plants and energy systems. The term also refers to the development of the process flowsheet (structure or topology), the selection of unit operations and the determination of the most important operating conditions. In this thesis an attempt is made to characterize some of the most common methodologies in a PS pyramid and discuss their advantages and disadvantages as well as where in the design phase they could be used most efficiently. The thesis shows how design tools have been developed for subambient processes by combining and expanding PS methods such as Heuristic Rules, sequential modular Process Simulations, Pinch Analysis, Exergy Analysis, Mathematical Programming using Deterministic Optimization methods and optimization using Stochastic Optimization methods. The most important contributions to the process design community are three new methodologies that include the pressure as an important variable in heat exchanger network synthesis (HENS). The methodologies have been used to develop a novel and efficient energy chain based on stranded natural gas, including power production with carbon capture and sequestration (CCS). This Liquefied Energy Chain consists of an offshore process, a combined gas carrier and an onshore process. This energy chain is capable of efficiently exploiting resources that cannot be utilized economically today, with minor CO2 emissions. Finally, a new Stochastic Optimization approach based on Tabu Search (TS), the Nelder-Mead or Downhill Simplex Method (NMDS) and the sequential process simulator HYSYS is used to search for better solutions for the Liquefied Energy Chain with respect to minimum cost or maximum profit. (au)
Chen Wang
2016-01-01
Full Text Available Power systems could be at risk when a power-grid collapse accident occurs. As a clean and renewable resource, wind energy plays an increasingly vital role in reducing air pollution, and wind power generation has become an important way to produce electrical power. Therefore, accurate wind power and wind speed forecasting are needed. In this research, a novel short-term wind speed forecasting portfolio is proposed using the following three procedures: (I) data preprocessing: apart from the regular normalization preprocessing, the data are preprocessed through empirical mode decomposition (EMD), which reduces the effect of noise on the wind speed data; (II) artificially intelligent parameter optimization: the unknown parameters in the support vector machine (SVM) model are optimized by the cuckoo search (CS) algorithm; (III) parameter optimization approach modification: an improved parameter optimization approach, called the SDCS model, based on the CS algorithm and the steepest descent (SD) method is proposed. The comparison results show that the simple and effective portfolio EMD-SDCS-SVM produces promising predictions and has better performance than the individual forecasting components, with very small root mean squared errors and mean absolute percentage errors.
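The cuckoo search step used in procedure (II) can be illustrated with a minimal sketch. This is a generic CS loop on a toy sphere function, not the authors' SDCS implementation; the step-size factor, population size, and test function are illustrative assumptions.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-distributed step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n=15, pa=0.25, iters=500, seed=1):
    """Minimal cuckoo search: Levy-flight proposals plus abandonment of a
    fraction pa of nests each generation (the best nest is never abandoned)."""
    rng = random.Random(seed)
    clip = lambda x: [min(hi, max(lo, v)) for v, (lo, hi) in zip(x, bounds)]
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        ib = min(range(n), key=fit.__getitem__)        # current best nest
        i = rng.randrange(n)
        # Levy flight biased toward the best nest
        cand = clip([x + 0.1 * levy_step(rng) * (x - b)
                     for x, b in zip(nests[i], nests[ib])])
        j = rng.randrange(n)                           # replace a random nest if better
        fc = f(cand)
        if fc < fit[j]:
            nests[j], fit[j] = cand, fc
        for k in range(n):                             # abandon poor nests
            if k != ib and rng.random() < pa:
                nests[k] = [rng.uniform(lo, hi) for lo, hi in bounds]
                fit[k] = f(nests[k])
    ib = min(range(n), key=fit.__getitem__)
    return nests[ib], fit[ib]

best_x, best_f = cuckoo_search(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

In the SDCS variant described above, such a global CS phase would be combined with local steepest-descent refinement of the SVM hyperparameters.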
Washington, J., E-mail: jwashing@gmail.com; King, J., E-mail: kingjc@mines.edu
2017-01-15
Highlights: • We model a modified AP1000 fuel assembly in SCALE 6.1. • We couple the NEWT module of SCALE to the MOGA module of DAKOTA. • Transmutation is optimized based on choice of coating and fuel. • Greatest transmutation achieved with PuZrO2-MgO fuel pins coated with Lu2O3. - Abstract: The average nuclear power plant produces twenty metric tons of used nuclear fuel per year, which contains approximately 95 wt% uranium, 1 wt% plutonium, and 4 wt% fission products and transuranic elements. Fast reactors are the preferred option for the transmutation of plutonium and minor actinides; however, an optimistic deployment time of at least 20 years indicates a need for a near-term solution. Previous simulation work demonstrated the potential to transmute transuranic elements in a modified light water reactor fuel pin. This study optimizes a quarter-assembly containing target fuels coated with spectral shift absorbers for the transmutation of plutonium and minor actinides in light water reactors. The spectral shift absorber coating on the target fuel pin tunes the neutron energy spectrum experienced by the target fuel. A coupled model developed using the NEWT module from SCALE 6.1 and a genetic algorithm module from the DAKOTA optimization toolbox provided performance data for the burnup of the target fuel pins in the present study. The optimization with the coupled NEWT/DAKOTA model proceeded in three stages. The first stage optimized a single target fuel pin per quarter-assembly adjacent to the central instrumentation channel. The second stage evaluated a variety of quarter-assemblies with multiple target fuel pins from the first stage, and the third stage re-optimized the pins in the optimal second-stage quarter-assembly. An 8 wt% PuZrO2-MgO inert matrix fuel pin with a 1.44 mm radius and a 0.06 mm Lu2O3 coating in a five-target-fuel-pin-per-quarter-assembly configuration represents the optimal combination for the
Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt
2017-09-01
A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free, real-parameter optimization algorithm that draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization to generate new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to compare their efficiency in terms of final accuracy, convergence speed and robustness.
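A minimal sketch of the baseline HS loop (harmony memory, memory consideration, pitch adjustment, worst-member replacement) may clarify what the MHS variant modifies. This is generic HS on a toy sphere function, not the paper's MHS; all parameter values are illustrative defaults.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimise f over box bounds with basic Harmony Search.
    hms: harmony memory size; hmcr: harmony memory considering rate;
    par: pitch adjusting rate (the standard HS control parameters)."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # draw value from memory
                val = memory[rng.randrange(hms)][d]
                if rng.random() < par:                 # small pitch adjustment
                    val += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                      # random re-initialisation
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        worst = max(range(hms), key=scores.__getitem__)
        s = f(new)
        if s < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = harmony_search(sphere, [(-5.0, 5.0)] * 3)
```

The GA/PSO-inspired modification described in the abstract would change how `new` is generated, e.g. by crossover between memory members or by biasing toward the best harmony.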
Strother, Stephen C.; Conte, Stephen La; Hansen, Lars Kai
2004-01-01
We argue that published results demonstrate that new insights into human brain function may be obscured by poor and/or limited choices in the data-processing pipeline, and review the work on performance metrics for optimizing pipelines: prediction, reproducibility, and related empirical Receiver Operating Characteristic (ROC) analyses… temporal detrending, and between-subject alignment) in a group analysis of BOLD-fMRI scans from 16 subjects performing a block-design, parametric-static-force task. Large-scale brain networks were detected using a multivariate linear discriminant analysis (canonical variates analysis, CVA) that was tuned… of baseline scans have constant, equal means, and this assumption was assessed with prediction metrics. Higher-order polynomial warps compared to affine alignment had only a minor impact on the performance metrics. We found that both prediction and reproducibility metrics were required for optimizing…
Lazzarini, Albert; Reilly, Kaice; Whitcomb, Stan; Bose, Sukanta; Fritschel, Peter; McHugh, Martin; Whelan, John T.; Regimbau, Tania; Romano, Joseph D.; Whiting, Bernard F.
2004-01-01
This article derives an optimal (i.e., unbiased, minimum variance) estimator for the pseudodetector strain for a pair of colocated gravitational wave interferometers (such as the pair of LIGO interferometers at its Hanford Observatory), allowing for possible instrumental correlations between the two detectors. The technique is robust and does not involve any assumptions or approximations regarding the relative strength of gravitational wave signals in the Hanford pair with respect to other sources of correlated instrumental or environmental noise. An expression is given for the effective power spectral density of the combined noise in the pseudodetector. This can then be introduced into the standard optimal Wiener filter used to cross-correlate detector data streams in order to obtain an optimal estimate of the stochastic gravitational wave background. In addition, a dual to the optimal estimate of strain is derived. This dual is constructed to contain no gravitational wave signature and can thus be used as an 'off-source' measurement to test algorithms used in the 'on-source' observation
Ming Chen
2015-11-01
Full Text Available In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of criteria and experts. This paper presents two Min-Max models to optimize the point estimates of the weights. Since each expert generally holds a uniform viewpoint on the importance (weighted value) of each criterion when ranking the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one over all the alternatives, such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of experts such that the obtained overall evaluation for each alternative reflects the perspectives of as many experts as possible. Thus, the objective function in the second model is to minimize the maximum variation between the actual vector of evaluations and the ideal one over all the experts, such that the optimal weights reduce the difference among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus of this paper is the development of an efficient algorithm for the optimal weights. Some applications are employed to show the significance of the models and the algorithm. From the numerical results, it is clear that the developed Min-Max models solve MCGDM problems, including those with incomplete score matrices, more effectively than the methods available in the literature. Specifically, by the proposed method, (1) the evaluation uniformity of each expert on the same criteria is guaranteed; (2) the overall evaluation for each alternative reflects the judgements of as many experts as possible; (3) the highest discrimination degree of the alternatives is obtained.
Kaberi Dasgupta
Full Text Available OBJECTIVE: We performed a qualitative study among women within 5 years of a Gestational Diabetes (GDM) diagnosis. Our aim was to identify the key elements that would enhance participation in a type 2 diabetes (DM2) prevention program. RESEARCH DESIGN AND METHODS: Potential participants received up to three invitation letters from their GDM physician. Four focus groups were held. Discussants were invited to comment on potential facilitators/barriers to participation and were probed on attitudes towards meal replacement and Internet/social media tools. Recurring themes were identified through qualitative content analysis of discussion transcripts. RESULTS: Among the 1,201 contacted and 79 eligible/interested, 29 women attended a focus group discussion. More than half of discussants were overweight/obese, and less than half were physically active. For DM2 prevention, a strong need for social support to achieve changes in dietary and physical activity habits was expressed. In this regard, face-to-face interactions with peers and professionals were preferred, with adjunctive roles for Internet/social media. Further, direct participation of partners/spouses in a DM2 prevention program was viewed as important to enhance support for behavioural change at home. Discussants highlighted work and child-related responsibilities as potential barriers to participation, and emphasized the importance of childcare support to allow attendance. Meal replacements were viewed with little interest, with concerns that their use would provide a poor example of eating behaviour to children. CONCLUSIONS: Among women within 5 years of a GDM diagnosis who participated in a focus group discussion, participation in a DM2 prevention program would be enhanced by face-to-face interactions with professionals and peers, provision of childcare support, and inclusion of spouses/partners.
Dasgupta, Kaberi; Da Costa, Deborah; Pillay, Sabrina; De Civita, Mirella; Gougeon, Réjeanne; Leong, Aaron; Bacon, Simon; Stotland, Stephen; Chetty, V Tony; Garfield, Natasha; Majdan, Agnieszka; Meltzer, Sara
2013-01-01
We performed a qualitative study among women within 5 years of Gestational Diabetes (GDM) diagnosis. Our aim was to identify the key elements that would enhance participation in a type 2 diabetes (DM2) prevention program. Potential participants received up to three invitation letters from their GDM physician. Four focus groups were held. Discussants were invited to comment on potential facilitators/barriers to participation and were probed on attitudes towards meal replacement and Internet/social media tools. Recurring themes were identified through qualitative content analysis of discussion transcripts. Among the 1,201 contacted and 79 eligible/interested, 29 women attended a focus group discussion. More than half of discussants were overweight/obese, and less than half were physically active. For DM2 prevention, a strong need for social support to achieve changes in dietary and physical activity habits was expressed. In this regard, face-to-face interactions with peers and professionals were preferred, with adjunctive roles for Internet/social media. Further, direct participation of partners/spouses in a DM2 prevention program was viewed as important to enhance support for behavioural change at home. Discussants highlighted work and child-related responsibilities as potential barriers to participation, and emphasized the importance of childcare support to allow attendance. Meal replacements were viewed with little interest, with concerns that their use would provide a poor example of eating behaviour to children. Among women within 5 years of a GDM diagnosis who participated in a focus group discussion, participation in a DM2 prevention program would be enhanced by face-to-face interactions with professionals and peers, provision of childcare support, and inclusion of spouses/partners.
Alperet, Derrick Johnston; Lim, Wei-Yen; Mok-Kwee Heng, Derrick; Ma, Stefan; van Dam, Rob M
2016-10-01
To identify optimal anthropometric measures and cutoffs to identify undiagnosed diabetes mellitus (UDM) in three major Asian ethnic groups (Chinese, Malays, and Asian-Indians). Cross-sectional data were analyzed from 14,815 ethnic Chinese, Malay, and Asian-Indian participants of the Singapore National Health Surveys, which included anthropometric measures and an oral glucose tolerance test. Receiver operating characteristic curve analyses were used with calculation of the area under the curve (AUC) to evaluate the performance of body mass index (BMI), waist circumference (WC), waist-to-hip ratio (WHR), and waist-to-height ratio (WHTR) for the identification of UDM. BMI performed significantly worse (AUC for men = 0.70; AUC for women = 0.75) than abdominal measures, whereas WHTR (AUC for men = 0.76; AUC for women = 0.79) was among the best performing measures in both sexes and all ethnic groups. Anthropometric measures performed better in Chinese than in Asian-Indian participants for the identification of UDM. A WHTR cutoff of 0.52 appeared optimal with a sensitivity of 76% in men and 73% in women and a specificity of 63% in men and 70% in women. Although ethnic differences were observed in the performance of anthropometric measures for the identification of UDM, abdominal adiposity measures generally performed better than BMI, and WHTR performed best in all Asian ethnic groups. © 2016 The Obesity Society.
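The ROC machinery behind such cutoff selection can be sketched in a few lines. The Youden-index rule below is one common criterion for an "optimal" cutoff and is an assumption here, since the abstract does not state which criterion was used; the toy scores and labels are illustrative.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(scores, labels):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

auc = roc_auc([0.3, 0.5, 0.4, 0.8], [0, 0, 1, 1])        # -> 0.75
cut = youden_cutoff([0.40, 0.45, 0.55, 0.60], [0, 0, 1, 1])  # -> 0.55
```

Applied to WHTR values with UDM status as the label, this procedure yields a cutoff of the kind reported (0.52) together with the sensitivity/specificity at that threshold.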
Albert, Andreas; Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio; De Roeck,Albert; Doglioni, Caterina; DuPree, Tristan; Fairbairn, Malcolm; Genest, Marie-Hélène; Gori, Stefania; Gustavino, Giuliano; Hahn, Kristian; Haisch, Ulrich; Harris, Philip C.; Hayden, Dan; Ippolito, Valerio; John, Isabelle; Kahlhoefer, Felix; Kulkarni, Suchita; Landsberg, Greg; Lowette, Steven; Mawatari, Kentarou; Riotto, Antonio; Shepherd, William; Tait, Tim M.P.; Tolley, Emma; Tunney, Patrick; Zaldivar, Bryan; Zinser, Markus
Weakly-coupled TeV-scale particles may mediate the interactions between normal matter and dark matter. If so, the LHC would produce dark matter through these mediators, leading to the familiar "mono-X" search signatures, but the mediators would also produce signals without missing momentum via the same vertices involved in their production. This document from the LHC Dark Matter Working Group suggests how to compare searches for these two types of signals in case of vector and axial-vector mediators, based on a workshop that took place on September 19/20, 2016 and subsequent discussions. These suggestions include how to extend the spin-1 mediated simplified models already in widespread use to include lepton couplings. This document also provides analytic calculations of the relic density in the simplified models and reports an issue that arose when ATLAS and CMS first began to use preliminary numerical calculations of the dark matter relic density in these models.
Optimization of measurement geometries used by the C.I.R. 'Gamma Spectrometry' working group
Escarieux, M.
1979-01-01
The choice of measurement geometry is closely tied to the objective sought in gamma quantitative analysis which consists in identifying the radionuclides present in a sample and in determining the voluminal quantities. The too low efficiency of the detector and the levels of activity sought make it necessary to place the sample in contact with the casing of the detector and select a sample geometry suited to the measurement. In point of fact this choice is often determined by other criteria, availability of the container for example, and this leads the laboratories taking part in the 'Gamma Spectrometry' Working Group of the Comite d'Instrumentation de Radioprotection to adopt joint gamma measurement geometries
Zhang, Chu; Zhou, Jianzhong; Li, Chaoshun; Fu, Wenlong; Peng, Tian
2017-01-01
Highlights: • A novel hybrid approach is proposed for wind speed forecasting. • The variational mode decomposition (VMD) is optimized to decompose the original wind speed series. • The input matrix and parameters of ELM are optimized simultaneously by using a hybrid BSA. • Results show that OVMD-HBSA-ELM achieves better performance in terms of prediction accuracy. - Abstract: Reliable wind speed forecasting is essential for wind power integration in a wind power generation system. The purpose of this paper is to develop a novel hybrid model for short-term wind speed forecasting and demonstrate its efficiency. In the proposed model, a compound structure of extreme learning machine (ELM) based on feature selection and parameter optimization using a hybrid backtracking search algorithm (HBSA) is employed as the predictor. The real-valued BSA (RBSA) is exploited to search for the optimal combination of weights and bias of the ELM, while the binary-valued BSA (BBSA) is exploited as a feature selection method applied to the candidate inputs predefined by partial autocorrelation function (PACF) values to reconstruct the input matrix. Due to the volatility and randomness of the wind speed signal, an optimized variational mode decomposition (OVMD) is employed to eliminate the redundant noise. The parameters of the proposed OVMD are determined according to the center frequencies of the decomposed modes and the residual evaluation index (REI). The wind speed signal is decomposed into a few modes via OVMD. The aggregation of the forecasting results of these modes constructs the final forecasting result of the proposed model. The proposed hybrid model has been applied to mean half-hour wind speed observation data from two wind farms in Inner Mongolia, China, and 10-min wind speed data from the Sotavento Galicia wind farm are studied as an additional case. Parallel experiments have been designed to compare with the proposed model. Results obtained from this study indicate that the
Ferrari, A; Rubbia, A; Rubbia, C; Sala, P R
2002-01-01
In this paper, we perform a systematic study of particle production and neutrino yields for different incident proton energies Ep and baselines L, with the aim of optimizing the parameters of a neutrino beam for the investigation of θ13-driven neutrino oscillations in the Δm² range allowed by Superkamiokande results. We study the neutrino energy spectra in the 'relevant' region of the first maximum of the oscillation at a given baseline L. We find that to each baseline L corresponds an 'optimal' proton energy Ep which minimizes the required integrated proton intensity needed to observe a fixed number of oscillated events. In addition, we find that the neutrino event rate in the relevant region scales approximately linearly with the proton energy. Hence, baselines L and proton energies Ep can be adjusted and the performance for neutrino oscillation searches will remain approximately unchanged provided that the product of the proton energy times the number of protons on target remains constant. We apply these ideas to the specific cases of 2.2, 4.4, 20, 50 and 400 GeV protons. We simulate focusing systems that are designed to best capture the secondary pions of the 'optimal' energy. We compute the expected sensitivities to sin²2θ13 for the various configurations by assuming the existence of new-generation accelerators able to deliver integrated proton intensities on target times the proton energy of the order of O(5×10²³) GeV×pot/year
Ping Jiang
2014-01-01
Full Text Available With rapid economic growth, electricity demand is clearly increasing. It is difficult to store electricity for future use; thus, the electricity demand forecast, especially the electricity consumption forecast, is crucial for planning and operating a power system. Due to various unstable factors, it is challenging to forecast electricity consumption. Therefore, it is necessary to establish new models for accurate forecasts. This study proposes a hybrid model, which includes data selection, an abnormality analysis, a feasibility test, and an optimized grey model to forecast electricity consumption. First, the original electricity consumption data are selected to construct different schemes (Scheme 1: short-term selection and Scheme 2: long-term selection); next, the iterative algorithm (IA) and cuckoo search (CS) algorithm are employed to select the best parameter of GM(1,1). The forecasted day is then divided into several smooth parts because the grey model is highly accurate in the smooth rise and drop phases; thus, the best scheme for each part is determined using the grey correlation coefficient. Finally, the experimental results indicate that the GM(1,1) optimized using CS has the highest forecasting accuracy compared with the GM(1,1), the GM(1,1) optimized using the IA, and the autoregressive integrated moving average (ARIMA) model.
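The GM(1,1) grey model that the IA and CS tune can be sketched as follows. This is the textbook least-squares formulation applied to an illustrative geometric series, not the paper's electricity data or its CS-optimized variant.

```python
import math

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey forecast. x0: raw positive series.
    Returns the fitted series extended by `steps` out-of-sample predictions."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]               # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    # Least-squares estimate of development coefficient a and grey input b
    # from the grey differential equation x0[k] = -a*z[k] + b.
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(x0[1:]), sum(v * y for v, y in zip(z, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function of the whitened equation, then inverse AGO.
    pred = [x0[0]]
    for k in range(1, n + steps):
        x1k = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1p = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        pred.append(x1k - x1p)
    return pred

# 10% geometric growth: the next value should be close to 133.1 * 1.1 = 146.41
pred = gm11_forecast([100.0, 110.0, 121.0, 133.1], steps=1)
```

In the hybrid model above, CS (or the IA) would perturb this baseline, e.g. by searching over the background coefficient instead of fixing it at 0.5, to minimize in-sample forecast error.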
Using selective chromogenic plates to optimize isolation of group B Streptococcus in pregnant women
Romano Mattei
2014-03-01
Full Text Available Group B Streptococcus (GBS) remains the leading cause of severe bacterial infections (sepsis, meningitis, pneumonia) in neonates. We compared the detection of GBS from recto-vaginal swabs on blood agar and two chromogenic media and evaluated their antibiotic susceptibility. A total of 1351 swabs were taken from pregnant women at 35-37 weeks of gestation. Following enrichment in Todd Hewitt broth + nalidixic acid and colistin, the samples were plated on Columbia CNA agar (CNA), chromID Strepto B agar (STRB) and Granada Agar (GRAN), respectively. GBS was found in 22.4% of recto-vaginal swabs from pregnant women. Sensitivity, specificity, positive and negative predictive values of GBS detection were 88%, 88%, 81% and 96% for CNA; 99%, 97%, 90% and 99% for STRB; and 94%, 99%, 98% and 99% for GRAN. Cohen's kappa concordances for CNA, STRB and GRAN were 0.68, 0.92 and 0.96, respectively. All isolates were susceptible to penicillin, whereas resistance rates to erythromycin and clindamycin were 40% and 42%, respectively. To conclude, selective broth enrichment combined with chromogenic plates is recommended for GBS screening in pregnant women.
Xian Chunyu; Zhang Zongyao
2003-01-01
The expert knowledge library for the Daya Bay and Qinshan Phase II NPPs has been established based on expert knowledge, and a heuristic search for reload core loading patterns is performed. The in-core fuel management code system INCORE, which has been used in engineering design, is employed for the neutron calculations, and loading patterns are evaluated using cycle length and the core radial power peaking factor. The developed SEDRIO/INCORE system has been applied to cycle 4 of unit 2 of the Daya Bay NPP and cycle 4 of Qinshan Phase II. The application demonstrated that the loading patterns obtained by the SEDRIO/INCORE system are much better than the reference ones in terms of both the radial power peak and the cycle length
IJzendoorn, van M.H.; Bakermans-Kranenburg, M.J.
1996-01-01
This meta-analysis on 33 studies, including more than 2,000 Adult Attachment Interview (AAI) classifications, presents distributions of AAI classifications in samples of nonclinical fathers and mothers, in adolescents, in samples from different cultures, and in clinical groups. Fathers, adolescents,
Khakyzadeh, Vahid; Zolfigol, Mohammad Ali; Derakhshan-Panah, Fatemeh; Jafarian, Majid; Miri, Mir Vahid; Gilandoust, Maryam
2018-01-04
The aim of this work is to introduce, model, and optimize a new non-acid-catalyzed system for direct N=N-C bond formation. By reacting naphthols or phenol with anilines in the presence of sodium nitrite as the nitrosonium (NO+) source and triethylammonium acetate (TEAA), an N=N-C group can be formed in non-acid media. Modeling and optimization of the reaction conditions were investigated by the response surface method. Sodium nitrite, TEAA, and water were chosen as variables, and the reaction yield was monitored. Analysis of variance indicates that a second-order polynomial model with an F value of 35.7, a P value of 0.0001, and a regression coefficient of 0.93 is able to predict the response. Based on the model, the optimum process conditions were 2.2 mmol sodium nitrite, 2.2 mL of TEAA, and 0.5 mL of water at room temperature. A quadratic (second-order) polynomial model, by analysis of variance, was able to predict the response for direct N=N-C group formation. Predicted response values were in good agreement with the experimental values. Electrochemistry studies were done to introduce new Michael acceptor moieties. Broad scope, high yields, short reaction time, and mild conditions are some advantages of the presented method.
Tian, Hao; Yuan, Xiaohui; Ji, Bin; Chen, Zhihuan
2014-01-01
Highlights: • An improved non-dominated sorting gravitational search algorithm (NSGSA-CM) is proposed. • NSGSA-CM is used to solve the problem of short-term multi-objective hydrothermal scheduling. • We enhance the search capability of NSGSA-CM by chaotic mutation. • New strategies are devised to handle various constraints in NSGSA-CM. • We obtain better compromise solutions with less fuel cost and emissions. - Abstract: This paper proposes a non-dominated sorting gravitational search algorithm with chaotic mutation (NSGSA-CM) to solve the short-term economic/environmental hydrothermal scheduling (SEEHTS) problem. The SEEHTS problem is formulated as a multi-objective optimization problem with many equality and inequality constraints. By introducing the concepts of non-dominated sorting and crowding distance, NSGSA-CM can optimize the two objectives of fuel cost and pollutant emission simultaneously and obtain a set of Pareto optimal solutions in one trial. In order to improve the performance of NSGSA-CM, the paper introduces particle memory and population social information into the velocity update process, and a chaotic mutation is adopted to prevent premature convergence. Furthermore, NSGSA-CM utilizes an elitism strategy which selects the better solutions in the parent and offspring populations, based on their non-domination rank and crowding distance, to form new generations. When dealing with the constraints of the SEEHTS, new strategies without penalty factors are proposed. To handle the water dynamic balance and system load balance constraints, this paper uses a combined strategy which first distributes the violation evenly across the decision variables and then assigns the remaining violation randomly. Meanwhile, a new symmetrical adjustment strategy, which modifies the discharges at the current and a later interval without breaking the water dynamic balance, is adopted to handle the reservoir storage constraints. To test the performance of the proposed NSGSA
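The fast non-dominated sorting at the heart of such multi-objective algorithms (popularized by NSGA-II) can be sketched as follows; the five 2-objective points are illustrative, and the sketch assumes minimization of both objectives.

```python
def non_dominated_sort(points):
    """NSGA-II style fast non-dominated sorting for minimization.
    Returns a list of Pareto fronts, each a list of indices into `points`."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    n = len(points)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    count = [0] * n                        # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts = [[i for i in range(n) if count[i] == 0]]  # rank-0 front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:               # peel off the current front
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# (fuel cost, emission) pairs: the first three are mutually non-dominated
fronts = non_dominated_sort([(1, 4), (2, 2), (4, 1), (3, 3), (5, 5)])
```

In NSGSA-CM, ranks from this sort, combined with crowding distance as a tie-breaker, drive the elitist selection between parent and offspring populations.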
Vinoth, S.; Kanimozhi, G.; Kumar, Harish; Srinadhu, E. S.; Satyanarayana, N.
2017-12-01
In the present investigation, the recently developed, simple, robust, and powerful metaheuristic symbiotic organism search (SOS) algorithm was used for simulating the J-V characteristics and optimizing the internal parameters of dye-sensitized solar cells (DSSCs) fabricated using electrospun 1-D mesoporous TiO2 nanofibers as photoanode. The efficiency (η = 5.80%) of the DSSC made up of TiO2 nanofibers as photoanode is found to be ~21.59% higher than the efficiency (η = 4.77%) of the DSSC made up of TiO2 nanoparticles as photoanode. The observed high efficiency can be attributed to high dye loading as well as high electron transport in the mesoporous 1-D TiO2 nanofibers. Further, the validity and advantage of the SOS algorithm are verified by simulating the J-V characteristics of the DSSC with the Lambert W function.
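The Lambert W route to the J-V curve mentioned above can be sketched with the standard single-diode model, whose implicit equation has a closed-form solution in terms of W. This is a generic illustration, not the paper's implementation; the parameter values in the usage note are invented, and the Lambert W evaluation here is a simple Newton iteration rather than a library call.

```python
import math

def lambert_w(x, tol=1e-12):
    # Principal branch via Newton iteration; adequate for the x >= 0 used here.
    w = math.log1p(x)  # reasonable initial guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def diode_current(v, i_ph, i_0, n_vt, r_s, r_sh):
    """Explicit current of the single-diode model via the Lambert W function.

    Solves I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    in closed form: i_ph photocurrent, i_0 saturation current,
    n_vt = ideality factor times thermal voltage, r_s series and
    r_sh shunt resistance.
    """
    a = r_s * r_sh * i_0 / (n_vt * (r_s + r_sh))
    arg = a * math.exp(r_sh * (r_s * (i_ph + i_0) + v) / (n_vt * (r_s + r_sh)))
    return (r_sh * (i_ph + i_0) - v) / (r_s + r_sh) - (n_vt / r_s) * lambert_w(arg)
```

Sweeping `v` over the operating range then yields a simulated J-V curve that an optimizer such as SOS could fit to measured data by adjusting the five parameters.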
Neumann, M E
1999-01-01
The goals are simple: improve the well-being of the dialysis patient and reduce hospitalizations. The tools are diverse: ultrapure dialysate; on-line blood monitoring; biocompatible membranes; no reuse; daily, in-center dialysis and possibly nocturnal dialysis at home. Reimbursement: full-risk capitation, with Medicare and commercial payor rates varying on a patient-by-patient basis. Create an incubator with approximately 1,000 end-stage renal disease patients, treated at both capitated payment-exclusive dialysis units and mingled in at traditional fee-for-service clinics. Establish a team of nurses and renal care staff to direct the care plan, and put the program in place. After the first year, analyze the data and see if the end--hopefully, improved outcomes and resulting reduced hospitalizations--justifies the means--the higher cost for "optimal technologies."
Arora, Richa; Behera, Shuvashish; Sharma, Nilesh K; Kumar, Sachin
2015-01-01
The progressive rise in the energy crisis, together with greenhouse gas (GHG) emissions, is serving as the driving force for bioethanol production from renewable resources. Current bioethanol research focuses on lignocellulosic feedstocks, as these are abundantly available, renewable, sustainable and exhibit no competition between crops for food and fuel. However, the technologies in use have some drawbacks, including incapability of pentose fermentation, reduced tolerance to the products formed, costly processes, etc. Therefore, the present study was carried out with the objective of isolating hexose- and pentose-fermenting thermophilic/thermotolerant ethanologens with acceptable product yield. Two thermotolerant isolates, NIRE-K1 and NIRE-K3, were screened for fermenting both glucose and xylose and identified as Kluyveromyces marxianus NIRE-K1 and K. marxianus NIRE-K3. After optimization using Face-centered Central Composite Design (FCCD), the optimal growth temperature and pH were found to be 45.17°C and 5.49 for K. marxianus NIRE-K1 and 45.41°C and 5.24 for K. marxianus NIRE-K3. Further, batch fermentations were carried out under the optimized conditions, where K. marxianus NIRE-K3 was found to be superior to K. marxianus NIRE-K1. The ethanol yield (Yx/s), sugar-to-ethanol conversion rate (%), microbial biomass concentration (X) and volumetric product productivity (Qp) obtained by K. marxianus NIRE-K3 were 9.3%, 9.55%, 14.63% and 31.94% higher, respectively, than those of K. marxianus NIRE-K1. This study revealed the promising potential of both screened thermotolerant isolates for bioethanol production.
Arijit Saha
2018-01-01
Full Text Available The analysis of shallow foundations subjected to seismic loading has been an important area of research for civil engineers. This paper presents an upper-bound solution for the bearing capacity of a shallow strip footing considering composite failure mechanisms by the pseudodynamic approach. A recently developed hybrid symbiosis organisms search (HSOS) algorithm has been used to solve this problem. In the HSOS method, the exploration capability of SQI and the exploitation potential of SOS are combined to increase the robustness of the algorithm. This combination can improve the searching capability of the algorithm for attaining the global optimum. Numerical analysis is also done using the dynamic modules of PLAXIS-8.6v for validation of this analytical solution. The results obtained from the present analysis using HSOS are thoroughly compared with the existing available literature and also with other optimization techniques. The significance of the present methodology for analyzing the bearing capacity is discussed, and the acceptability of the HSOS technique for solving such engineering problems is justified.
Kenneth Edgar Hernandez-Ruiz
2016-01-01
Full Text Available Modular production and component commonality are two widely used strategies in the manufacturing industry to meet customers' growing needs for customized products. Using these strategies, companies can enhance their performance to achieve optimal safety stock levels. Despite the importance of safety stocks in business competition, little attention has been paid to ways of reducing them without affecting customer service levels. This paper develops a mathematical model to reduce safety stock levels in organizations that employ modular production. To construct the model, we take advantage of the benefits of aggregate inventories, standardization of components, component commonality, and the Group Technology philosophy with regard to stock levels. The model is tested through the simulation of three years of operation of two modular product systems. For each system, we calculated and compared the safety stock levels for two cases: (1) under the presence of component commonality alone and (2) under the presence of both component commonality and the Group Technology philosophy. The results show a reduction in safety stock levels when component commonality is linked with the Group Technology philosophy. The paper presents a discussion of the implications of each case, features of the model, and suggestions for future research.
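The abstract does not reproduce the model itself, but the risk-pooling effect that drives safety stock reduction under component commonality can be sketched as follows. This assumes independent, normally distributed product demands; the service factor `Z` and all numbers are illustrative, not from the paper.

```python
import math

Z = 1.645  # service factor for roughly a 95% cycle service level (assumed)

def safety_stock_separate(demand_sds):
    """Total safety stock when each product holds its own component stock."""
    return sum(Z * sd for sd in demand_sds)

def safety_stock_pooled(demand_sds):
    """Safety stock for one common component shared across products.

    With independent demands the variances add, so the pooled standard
    deviation is the root of the sum of squares, which is never larger
    than the sum of the individual standard deviations.
    """
    return Z * math.sqrt(sum(sd * sd for sd in demand_sds))
```

For two products with demand standard deviations 30 and 40, the pooled stock is Z*50 rather than Z*70, a reduction the paper's model compounds further via Group Technology grouping.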
Kirshen, P. H.; Hecht, J. S.; Vogel, R. M.
2015-12-01
Prescribing long-term urban floodplain management plans under the deep uncertainty of climate change is a challenging endeavor. To address this, we have implemented and tested with stakeholders a parsimonious multi-stage mixed integer programming (MIP) model that identifies the optimal time period(s) for implementing publicly and privately financed adaptation measures. Publicly funded measures include reach-scale flood barriers, flood insurance, and buyout programs to encourage property owners in flood-prone areas to retreat from the floodplain. Measures privately funded by property owners consist of property-scale floodproofing options, such as raising building foundations, as well as investments in flood insurance or retreat from flood-prone areas. The objective function minimizes the sum of flood control and damage costs in all planning stages for different property types during floods of different severities. There are constraints over time for flow mass balances, construction of flood management alternatives and their cumulative implementation, budget allocations, and binary decisions. Damages are adjusted for flood control investments. In recognition of the deep uncertainty of GCM-derived climate change scenarios, we employ the minimax regret criterion to identify adaptation portfolios robust to different climate change trajectories. As an example, we identify publicly and privately funded adaptation measures for a stylized community based on the estuarine community of Exeter, New Hampshire, USA. We explore the sensitivity of recommended portfolios to different ranges of climate change, costs associated with economies of scale and flexible infrastructure design, and different municipal budget constraints.
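The minimax regret criterion used above has a compact form once portfolio costs per scenario are available. The sketch below assumes precomputed costs (in practice each entry would come from solving the MIP under that scenario); the portfolio names are invented for illustration.

```python
def minimax_regret(costs):
    """Pick the portfolio whose worst-case regret across scenarios is smallest.

    `costs[p][s]` is the total cost of portfolio p under climate scenario s.
    The regret of p in s is costs[p][s] minus the best achievable cost in s.
    """
    n_scen = len(next(iter(costs.values())))
    # Best achievable cost in each scenario, over all portfolios.
    best = [min(c[s] for c in costs.values()) for s in range(n_scen)]
    # Worst-case (maximum) regret of each portfolio.
    regret = {p: max(c[s] - best[s] for s in range(n_scen))
              for p, c in costs.items()}
    return min(regret, key=regret.get)
```

A cheap-now portfolio that performs terribly under a severe-change scenario accumulates large regret there and is rejected in favor of a portfolio that is acceptable under every trajectory.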
H. Shayeghi
2016-12-01
Full Text Available Microgrids are a new opportunity to reduce the total costs of power generation and to supply energy demands through small-scale power plants such as wind sources, photovoltaic panels, battery banks, fuel cells, etc. As in any power system, unexpected faults or load shifting in a microgrid (MG) lead to frequency oscillations. Hence, this paper employs an adaptive fuzzy P-PID controller for frequency control of a microgrid and a modified multi-objective Chaotic Gravitational Search Algorithm (CGSA) to find the optimal setting parameters of the proposed controller. To provide a robust controller design, two non-commensurable objective functions are formulated in the eigenvalue domain and the time domain, and the multi-objective CGSA is used to solve them. Moreover, a fuzzy decision method is applied to extract the best Pareto front. The proposed controller is tested on an MG system under different loading conditions with wind turbine generators, a photovoltaic system, flywheel energy storage, battery storage, a diesel generator and an electrolyzer. The simulation results reveal that the proposed controller is more stable in comparison with the classical and other types of fuzzy controllers.
Oshmarin, D.; Sevodina, N.; Iurlov, M.; Iurlova, N.
2017-06-01
In this paper, with the aim of providing passive control of structural vibrations, a new approach is proposed for selecting the optimal parameters of external electric shunt circuits connected to piezoelectric elements located on the surface of a structure. The approach is based on the mathematical formulation of the natural vibration problem. The results of the solution of this problem are the complex eigenfrequencies, whose real part represents the vibration frequency and whose imaginary part corresponds to the damping ratio, characterizing the rate of damping. A criterion for the search for optimal parameters of the external passive shunt circuits, which can provide the system with the desired dissipative properties, has been derived based on an analysis of the responses of the real and imaginary parts of different complex eigenfrequencies to changes in the values of the parameters of the electric circuit. The efficiency of this approach has been verified on the natural vibration problem of a rigidly clamped plate and a semi-cylindrical shell, which is solved for series- and parallel-connected external resonant R-L circuits (consisting of resistive and inductive elements). It has been shown that at lower (more energy-intensive) frequencies, a series-connected external circuit has the advantage of providing lower values of the circuit parameters, which renders it more attractive in terms of practical applications.
Sri Kuning Retno Dewandini
2016-07-01
Full Text Available Leadership is an important aspect of group success. The leader of a group acts as a determinant of that success; without a leader, a group runs without direction or purpose. However, the success of a group is not determined by the leader alone: it also depends on the followers and on a supportive environment. The Gisik Pranaji farmer group in Bugel village, Panjatan, Kulon Progo Regency, has been able to survive due to the role of the farmer group. To sustain the group, support is needed from all parties involved in it, including all member farmers and the surrounding environment.
Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda
2017-01-01
Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.
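The decreasing-Pa schedule described above can be sketched as follows. The abstract only states that Pa starts at a maximum and decreases to a minimum over the iterations; the linear form and the numeric bounds here are assumptions.

```python
def pa_schedule(iteration, max_iter, pa_max=0.5, pa_min=0.05):
    """Linearly decrease the abandonment fraction Pa from pa_max to pa_min.

    A high Pa early on replaces many nests and favours exploration; a low
    Pa late in the run preserves good nests and favours exploitation.
    `iteration` runs from 0 to max_iter - 1.
    """
    frac = iteration / float(max_iter - 1) if max_iter > 1 else 1.0
    return pa_max - (pa_max - pa_min) * frac
```

Inside a cuckoo search loop, `pa_schedule(t, T)` would replace the fixed Pa when deciding which fraction of the n nests to abandon at iteration t.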
Sousa, Sergio H.G. de; Madeira, Marcelo G. [Halliburton Servicos Ltda., Rio de Janeiro, RJ (Brazil)
2008-07-01
In the classical operations research arena, there is a notion that the search for optimized solutions in continuous solution spaces is easier than in discrete solution spaces, even when the latter is a subset of the former. In the upstream oil industry, there is additional complexity in optimization problems because there usually are no analytical expressions for the objective function, which requires some form of simulation in order to be evaluated. Thus, the use of metaheuristic optimizers like scatter search, tabu search and genetic algorithms is common. In this metaheuristic context, there are advantages in transforming continuous solution spaces into equivalent discrete ones; the goal of doing so is usually to speed up the search for optimized solutions. However, these advantages can be masked when the problem has restrictions formed by linear combinations of its decision variables. In order to study these aspects of metaheuristic optimization, two optimization problems are proposed and solved with both continuous and discrete solution spaces: assisted history matching and injection rates optimization. Both cases operate on a model of the Wytch Farm onshore oil field located in England. (author)
Complete local search with memory
Ghosh, D.; Sierksma, G.
2000-01-01
Neighborhood search heuristics like local search and its variants are some of the most popular approaches to solve discrete optimization problems of moderate to large size. Apart from tabu search, most of these heuristics are memoryless. In this paper we introduce a new neighborhood search heuristic
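A minimal way to add memory to an otherwise memoryless neighborhood search is to record every visited solution and forbid revisits. This sketch is a generic illustration of that idea, not the specific heuristic of the paper, whose details are not given in the abstract.

```python
def local_search_with_memory(start, neighbours, cost, max_steps=1000):
    """Greedy local search that never revisits a solution.

    `neighbours(x)` returns candidate moves from x; `cost(x)` is minimized.
    Visited solutions are stored in a set, so solutions must be hashable.
    """
    current, visited = start, {start}
    for _ in range(max_steps):
        # Only consider moves to solutions not seen before.
        candidates = [n for n in neighbours(current) if n not in visited]
        if not candidates:
            break
        best = min(candidates, key=cost)
        if cost(best) >= cost(current):
            break  # no unvisited improving neighbour: stop at a local optimum
        current = best
        visited.add(current)
    return current
```

On a one-dimensional integer problem with neighbours x-1 and x+1, the memory prevents the search from oscillating back over ground it has already covered.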
Xiaomin Xu
2015-11-01
Full Text Available The uncertainty and irregularity of wind power generation are caused by the intermittency and randomness of wind resources. Such volatility brings severe challenges to the wind power grid. Ultrashort-term and short-term wind power forecasting with high prediction accuracy has great significance for reducing wind power curtailment, optimizing the conventional power generation plan, adjusting maintenance schedules and developing real-time monitoring systems. Therefore, accurate forecasting of wind power generation is important in electric load forecasting. The echo state network (ESN) is a recurrent neural network composed of input, hidden and output layers. It approximates nonlinear systems well and achieves good results in nonlinear chaotic time series forecasting. Besides, the ESN is simpler and less computationally demanding than traditional neural network training, which provides more accurate training results. To address the disadvantages of the standard ESN, this paper makes some improvements: by combining the complementary advantages of particle swarm optimization and tabu search, the generalization of the ESN is improved. To verify the validity and applicability of this method, case studies of multi-time-scale forecasting of wind power output are carried out, reconstructing the chaotic time series of actual wind power generation data in a certain region to predict wind power generation. Meanwhile, the influence of seasonal factors on wind power is taken into consideration. Compared with the classical ESN and the conventional Back Propagation (BP) neural network, the results verify the superiority of the proposed method.
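The core of an ESN is a fixed random reservoir whose state is updated by a tanh recurrence, with only the readout trained. The sketch below shows just the reservoir construction and state update; the spectral-radius rescaling uses a rough power-iteration estimate, and all sizes and constants are illustrative rather than taken from the paper.

```python
import math
import random

def make_reservoir(n, spectral_radius=0.9, density=0.2, seed=1):
    """Random sparse reservoir matrix, rescaled toward a target spectral radius.

    The dominant eigenvalue magnitude is approximated by power iteration,
    a rough stand-in for a proper eigendecomposition.
    """
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) if rng.random() < density else 0.0
          for _ in range(n)] for _ in range(n)]
    v = [1.0] * n
    for _ in range(100):
        v2 = [sum(w[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v2)) or 1.0
        v = [x / norm for x in v2]
    scale = spectral_radius / norm
    return [[scale * w[i][j] for j in range(n)] for i in range(n)]

def step(w, w_in, state, u):
    """One ESN state update: x' = tanh(W x + W_in u) for scalar input u."""
    n = len(state)
    return [math.tanh(sum(w[i][j] * state[j] for j in range(n)) + w_in[i] * u)
            for i in range(n)]
```

Training then amounts to collecting reservoir states over the input series and fitting a linear readout, which is the part the paper tunes with PSO and tabu search.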
M. K. Sharbatdar
2016-11-01
Full Text Available Appropriate planning and scheduling to reach project goals in the most economical way is a basic issue of project management. In each project, the project manager must determine the activities required to implement the project and select the best option for each activity, so that the lowest final cost and time of the project are achieved. Given the number of activities and the implementation options for each, the selection usually has no unique solution; rather, there is a set of solutions that are not preferred over one another, known as Pareto solutions. On the other hand, in some actual projects there are activities whose implementation options depend on the implementation of a prerequisite activity and cannot use all implementation options; in some cases, even the implementation or non-implementation of some activities depends on the implementation of the prerequisite activity. Such projects can be called conditional projects. Much research has been conducted on obtaining the Pareto solution set using different methods and algorithms, but none of that work considers the time-cost optimization of conditional projects. Thus, in the present study the concept of a conditional network is defined along with some practical examples; then an appropriate way to represent these networks and a suitable time-cost formulation for them are presented. Finally, for some instances of conditional activity networks, conditional project time-cost optimization is conducted multi-objectively using well-known meta-heuristic algorithms, such as the multi-objective genetic algorithm, the multi-objective particle swarm algorithm and the multi-objective charged system search algorithm.
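The Pareto set of time-cost trade-offs described above can be illustrated on a toy project. This sketch assumes activities in series (so time and cost are plain sums); a real project would evaluate the activity network's critical path, and the option values below are invented.

```python
from itertools import product

def pareto_front(points):
    """Non-dominated (time, cost) pairs: keep p unless some other point is
    no worse in both coordinates and different from p."""
    pts = sorted(set(points))
    return [p for p in pts
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in pts)]

def project_points(activity_options):
    """Enumerate one implementation option per activity.

    Each option is a (time, cost) pair; with activities in series the
    project outcome is the coordinate-wise sum.
    """
    return [(sum(t for t, _ in combo), sum(c for _, c in combo))
            for combo in product(*activity_options)]
```

With two activities, each offering a fast-expensive and a slow-cheap option, the dominated combinations drop out and only the genuine trade-offs remain.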
Lee, Bo An; Kim, Bong Seok; Ko, Min Seok; Kim, Kyung Young; Kim, Sin
2014-01-01
An electrical resistance tomography (ERT) technique combining the particle swarm optimization (PSO) algorithm with the Gauss-Newton method is applied to the visualization of two-phase flows. In the ERT, the electrical conductivity distribution, namely the conductivity values of pixels (numerical meshes) comprising the domain in the context of a numerical image reconstruction algorithm, is estimated with the known injected currents through the electrodes attached on the domain boundary and the measured potentials on those electrodes. In spite of many favorable characteristics of ERT such as no radiation, low cost, and high temporal resolution compared to other tomography techniques, one of the major drawbacks of ERT is low spatial resolution due to the inherent ill-posedness of conventional image reconstruction algorithms. In fact, the number of known data is much less than that of the unknowns (meshes). Recalling that binary mixtures like two-phase flows consist of only two substances with distinct electrical conductivities, this work adopts the PSO algorithm for mesh grouping to reduce the number of unknowns. In order to verify the enhanced performance of the proposed method, several numerical tests are performed. The comparison between the proposed algorithm and conventional Gauss-Newton method shows significant improvements in the quality of reconstructed images
Anzy Lee
2016-05-01
Full Text Available In this study, an artificial neural network (ANN) model is developed to predict the stability number of breakwater armor stones based on the experimental data reported by Van der Meer in 1988. The harmony search (HS) algorithm is used to determine the near-global optimal initial weights in the training of the model. Stratified sampling is used to sample the training data. A total of 25 HS-ANN hybrid models are tested with different combinations of HS algorithm parameters. The HS-ANN models are compared with the conventional ANN model, which uses a Monte Carlo simulation to determine the initial weights. Each model is run 50 times and statistical analyses are conducted for the model results. The present models using stratified sampling are shown to be more accurate than those of previous studies. The statistical analyses show that the HS-ANN model with proper values of the HS algorithm parameters can give much better and more stable predictions than the conventional ANN model.
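The harmony search step used above for picking initial weights can be sketched as a generic continuous minimizer. The parameter values (hmcr, par, bw) are illustrative defaults, not those of the paper, and the objective here is a toy function rather than an ANN training loss.

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Minimal harmony search minimizing f on [lo, hi]^dim.

    hmcr: probability of drawing a component from harmony memory;
    par: probability of pitch-adjusting a drawn component by up to +/- bw;
    otherwise the component is sampled uniformly at random.
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = memory[rng.randrange(hms)][d]   # memory consideration
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)       # pitch adjustment
            else:
                v = rng.uniform(lo, hi)             # random selection
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```

For the HS-ANN use case, `f` would evaluate the network's training error for a given weight vector, and the returned harmony would serve as the initial weights.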
Ayse T. Daloglu
2018-01-01
Full Text Available Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from the literature is investigated for four different bracing types, with and without soil-structure interaction. X, V, Z, and eccentric V-shaped bracing types are considered in the study. Optimum solutions of the examples are carried out by a computer program coded in MATLAB interacting with SAP2000-OAPI for two-way data exchange. The stress constraints according to AISC-ASD (American Institute of Steel Construction-Allowable Stress Design), maximum lateral displacement constraints, interstorey drift constraints, and beam-to-column connection constraints are taken into consideration in the optimum design process. The parameters of the foundation model are calculated depending on soil surface displacements by using an iterative approach. The results obtained in the study show that bracing types and soil-structure interaction play very important roles in the optimum design of steel space frames. Finally, the techniques used in the optimum design seem to be quite suitable for practical applications.
Guidance and search help resource listing examples of common queries that can be used in the Google Search Appliance search request, including examples of special characters, or query term separators, that the Google Search Appliance recognizes.
Sims, David W.
2015-09-01
The seminal papers by Viswanathan and colleagues in the late 1990s [1,2] proposed not only that scale-free, superdiffusive Lévy walks can describe the free-ranging movement patterns observed in animals such as the albatross [1], but that the Lévy walk was optimal for searching for sparsely and randomly distributed resource targets [2]. This distinct advantage, now shown to be present over a much broader set of conditions than originally theorised [3], implied that the Lévy walk is a search strategy that should be found very widely in organisms [4]. In the years since there have been several influential empirical studies showing that Lévy walks can indeed be detected in the movement patterns of a very broad range of taxa, from jellyfish, insects, fish, reptiles, seabirds, humans [5-10], and even in the fossilised trails of extinct invertebrates [11]. The broad optimality and apparent deep evolutionary origin of movement (search) patterns that are well approximated by Lévy walks led to the development of the Lévy flight foraging (LFF) hypothesis [12], which states that "since Lévy flights and walks can optimize search efficiencies, therefore natural selection should have led to adaptations for Lévy flight foraging".
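Lévy walk step lengths follow a power-law tail p(l) ~ l^(-mu) with mu near 2 at the theoretical optimum for sparse random targets. A minimal way to draw such steps is inverse-transform sampling; the bounds and sample sizes below are illustrative.

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Draw n step lengths from a power-law tail p(l) ~ l^(-mu), l >= l_min.

    Inverse-transform sampling: for u uniform on [0, 1),
    l = l_min * (1 - u) ** (-1 / (mu - 1)), which gives the heavy tail
    P(L > x) = (x / l_min) ** (-(mu - 1)).
    """
    rng = random.Random(seed)
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            for _ in range(n)]
```

Pairing each step with a uniformly random heading yields the superdiffusive trajectories against which empirical movement data are compared.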
Qingyang Zhang
2015-02-01
Full Text Available Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-learning-based optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolution Programming (FEP), Differential Evolution (DE) and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
Mastering Search Analytics Measuring SEO, SEM and Site Search
Chaters, Brent
2011-01-01
Many companies still approach Search Engine Optimization (SEO) and paid search as separate initiatives. This in-depth guide shows you how to use these programs as part of a comprehensive strategy: not just to improve your site's search rankings, but to attract the right people and increase your conversion rate. Learn how to measure, test, analyze, and interpret all of your search data with a wide array of analytic tools. Gain the knowledge you need to determine the strategy's return on investment. Ideal for search specialists, webmasters, and search marketing managers, Mastering Search Analyt
Huffman, Jeff C; Boehm, Julia K; Beach, Scott R; Beale, Eleanor E; DuBois, Christina M; Healy, Brian C
2016-06-01
Optimism has been associated with reduced suicidal ideation, but there have been few studies in patients at high suicide risk. We analyzed data from three study populations (total N = 319) with elevated risk of suicide: (1) patients with a recent acute cardiovascular event, (2) patients hospitalized for heart disease who had depression or an anxiety disorder, and (3) patients psychiatrically hospitalized for suicidal ideation or following a suicide attempt. For each study we analyzed the association between optimism (measured by the Life-Orientation Test-Revised) and suicidal ideation, and then completed an exploratory random effects meta-analysis of the findings to synthesize this data. The meta-analysis of the three studies showed that higher levels of self-reported optimism were associated with a lower likelihood of suicidal ideation (odds ratio [OR] = .89, 95% confidence interval [CI] = .85-.95, z = 3.94, p optimism (OR = .84, 95% CI = .76-.92, z = 3.57, p optimism may be associated with a lower risk of suicidal ideation, above and beyond the effects of depressive symptoms, for a wide range of patients with clinical conditions that place them at elevated risk for suicide. Copyright © 2016 Elsevier Ltd. All rights reserved.
Flax, Valerie L; Ibrahim, Alawiyatu Usman; Negerie, Mekebeb; Yakubu, Danjuma; Leatherman, Sheila; Bentley, Margaret E
2017-01-01
As part of a breastfeeding promotion intervention trial in Nigeria, we provided one cell phone per group of 5-7 microcredit clients and instructed the group's cell phone recipient to share weekly breastfeeding voice and text messages with group members. We measured the feasibility and acceptability of using group cell phones by conducting semi-structured exit interviews with 195 microcredit clients whose babies were born during the intervention (target group), in-depth interviews with eight phone recipients and nine non-phone recipients, and 16 focus group discussions with other microcredit clients. Women in the target group said the group phone worked well or very well (64%). They were motivated to try the recommended practices because they trusted the information (58%) and had support from others (35%). Approximately 44% of target women reported that their groups met and shared messages at least once a week. Women in groups that met at least weekly had higher odds of exclusive breastfeeding up to 6 months (OR 5.6, 95% CI 1.6, 19.7) than women in groups that never met. In-depth interviews and focus group discussions indicated that non-phone recipients had positive feelings towards phone recipients, the group phone met participants' needs, and messages were often shared outside the group. In conclusion, group cell phone messaging to promote breastfeeding among microcredit clients is feasible and acceptable and can be part of an effective behaviour change package. © 2016 John Wiley & Sons Ltd.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
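The fixed parallel-group design used as the comparison baseline has a closed-form sample size under the usual normal approximation; a sketch for a two-sided test on a standardized mean difference follows (group-sequential and adaptive-selection designs require specialized boundary calculations not shown here):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Per-group sample size for a two-arm parallel design comparing
    means (normal approximation, two-sided test, standardized effect):
    n = 2 * ((z_{1-alpha/2} + z_{power}) / delta)^2, rounded up."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) / effect_size) ** 2)
```

For a medium standardized effect of 0.5 at 80% power this gives 63 patients per arm; the two-stage designs compared in the paper trade some of this fixed commitment for interim looks at the data.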
O’Connor, D; Nguyen, D; Voronenko, Y; Yin, W; Sheng, K
2016-01-01
Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA
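As a rough sketch of the approach described above (not the authors' implementation: the fluence nonnegativity constraint and clinical dose objectives are omitted), FISTA applied to a least-squares objective with a group-lasso penalty looks like this; the block soft-thresholding step is what drives whole candidate beams to zero:

```python
import math
import numpy as np

def fista_group_lasso(A, b, groups, lam, iters=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2,
    where `groups` partitions the indices of x into beams. The
    group-sparsity term deactivates whole groups at once."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                   # gradient step
        x_new = z.copy()
        for idx in groups:                 # prox: block soft-thresholding
            norm = np.linalg.norm(z[idx])
            x_new[idx] = 0.0 if norm <= lam / L else (1 - lam / (L * norm)) * z[idx]
        t_new = (1 + math.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With lam large enough all groups shut off; sweeping lam trades the number of active beams against fidelity, which is the mechanism behind treating beam angle selection as a single convex problem.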
Stochastic and global optimization
Dzemyda, Gintautas; Šaltenis, Vydūnas; Zhilinskas, A; Mockus, Jonas
2002-01-01
... and Effectiveness of Controlled Random Search E. M. T. Hendrix, P. M. Ortigosa and I. García 129 9. Discrete Backtracking Adaptive Search for Global Optimization B. P. Kristinsdottir, Z. B. Zabinsky and...
Sano, Tomonari; Matsutani, Hideyuki; Kondo, Takeshi; Fujimoto, Shinichiro; Sekine, Takako; Arai, Takehiro; Morita, Hitomi; Takase, Shinichi
2011-01-01
The purpose of this study is to elucidate the relationship among RR interval (RR), the optimal reconstruction phase, and adequate temporal resolution (TR) to obtain coronary CT angiography images of acceptable quality using 64-multi detector-row CT (MDCT) (Aquilion 64) of end-systolic reconstruction in 407 patients with high heart rates. Image quality was classified into 3 groups [rank A (excellent): 161, rank B (acceptable): 207, and rank C (unacceptable): 39 patients]. The optimal absolute phase (OAP) significantly correlated with RR [OAP (ms)=119-0.286 RR (ms), r=0.832, p<0.0001], and the optimal relative phase (ORP) also significantly correlated with RR [ORP (%)=62-0.023 RR (ms), r=0.656, p<0.0001], and the correlation coefficient of OAP was significantly (p<0.0001) higher than that of ORP. The OAP range (±2 standard deviation (SD)) in which it is highly possible to get a static image was from [119-0.286 RR (ms)-46] to [119-0.286 RR (ms)+46]. The TR was significantly different among ranks A (97±22 ms), B (111±31 ms) and C (135±34 ms). The TR significantly correlated with RR in ranks A (TR=-16+0.149 RR, r=0.767, p<0.0001), B (TR=-15+0.166 RR, r=0.646, p<0.0001), and C (TR=52+0.117 RR, r=0.425, p=0.0069). Rank C was distinguished from ranks A or B by linear discriminate analysis (TR=-46+0.21 RR), and the discriminate rate was 82.6%. In conclusion, both the OAP and adequate TR depend on RR, and the OAP range (±2 SD) can be calculated using the formula [119-0.286 RR (ms)-46] to [119-0.286 RR (ms) +46], and an adequate TR value would be less than (-46+0.21 RR). (author)
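The regression and discriminant formulas above can be packaged into a small calculator; the function names are illustrative, while the equations are exactly those reported in the abstract (RR and OAP in ms, following the authors' conventions):

```python
def oap_range(rr_ms):
    """Optimal absolute phase (ms) and its +/-2 SD window, from the
    reported regression OAP = 119 - 0.286*RR with a +/-46 ms band."""
    oap = 119 - 0.286 * rr_ms
    return oap - 46, oap, oap + 46

def adequate_tr(rr_ms):
    """Temporal-resolution threshold (ms) below which images fall on
    the acceptable side of the discriminant line TR = -46 + 0.21*RR."""
    return -46 + 0.21 * rr_ms
```

For example, at RR = 800 ms the window is the OAP of 119 - 0.286*800 = -109.8 ms plus or minus 46 ms, and the adequate TR would be below -46 + 0.21*800 = 122 ms.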