Evolutionary Schema of Modeling Based on Genetic Algorithms
Directory of Open Access Journals (Sweden)
Stacewicz Paweł
2015-03-01
In this paper, I propose a populational schema of modeling that consists of: (a) a linear AFSV schema (with four basic stages of abstraction, formalization, simplification, and verification), and (b) a higher-level schema employing the genetic algorithm (with partially random procedures of mutation, crossover, and selection). The basic ideas of the proposed solution are as follows: (1) whole populations of models are considered at subsequent stages of the modeling process, (2) successive populations are subjected to the activity of genetic operators and undergo selection procedures, (3) the basis for selection is the evaluation function of the genetic algorithm (this function corresponds to the model verification criterion and reflects the goal of the model). The schema can be applied to automate the modeling of the mind/brain by means of artificial neural networks: the structure of each network is modified by genetic operators, modified networks undergo a learning cycle, and successive populations of networks are verified during the selection procedure. The whole process can be automated only partially, because it is the researcher who defines the evaluation function of the genetic algorithm.
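The population-level loop described in this abstract (genetic operators plus selection driven by an evaluation function) can be sketched as follows. This is a minimal illustration, not the author's schema: the bitstring encoding and the use of the number of ones as a stand-in for the verification criterion are assumptions for demonstration.

```python
import random

def evolve(evaluate, n_bits=20, pop_size=30, generations=50, seed=0):
    """Minimal genetic-algorithm loop: crossover, mutation, selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament selection on the evaluation function.
            a, b = rng.sample(pop, 2)
            return a if evaluate(a) >= evaluate(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [bit ^ (rng.random() < 1.0 / n_bits) for bit in child]  # mutation
            children.append(child)
        # Elitist truncation: keep the best pop_size of parents + offspring.
        pop = sorted(pop + children, key=evaluate, reverse=True)[:pop_size]
    return max(pop, key=evaluate)

# The evaluation function plays the role of the verification criterion
# that the researcher must supply; here it is simply the number of ones.
best = evolve(sum)
```

Swapping in a different `evaluate` changes the goal of the modeling process, which mirrors the paper's point that this step cannot be automated away.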
Directory of Open Access Journals (Sweden)
Sergei L Kosakovsky Pond
2009-11-01
Genetically diverse pathogens (such as human immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is no universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains, as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparison with previously assigned subtypes revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance ...
Optimal Mixing Evolutionary Algorithms
D. Thierens (Dirk); P.A.N. Bosman (Peter); N. Krasnogor
2011-01-01
A key search mechanism in Evolutionary Algorithms is the mixing or juxtaposing of partial solutions present in the parent solutions. In this paper we look at the efficiency of mixing in genetic algorithms (GAs) and estimation-of-distribution algorithms (EDAs). We compute the mixing ...
Evolutionary pattern search algorithms
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
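The step-size self-adaptation that EPSAs build on can be illustrated with a (1+1)-style search using a 1/5-success-style rule. This is a generic sketch under assumed settings (Gaussian mutation, sphere objective), not the EPSA algorithm analyzed in the paper:

```python
import random

def one_fifth_es(f, x0, sigma=1.0, iters=200, seed=1):
    """(1+1) evolution strategy with a 1/5-style success rule:
    expand the mutation step after a success, contract it after a failure."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.5           # success: take larger steps
        else:
            sigma *= 1.5 ** -0.25  # failure: shrink (targets ~1/5 success rate)
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)  # simple convex test function
x_best, f_best = one_fifth_es(sphere, [5.0, -3.0])
```

The asymmetric expand/contract factors keep the empirical success rate near one fifth, which is the behaviour the paper's convergence theory formalizes for a broader class of adaptation rules.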
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
Industrial Applications of Evolutionary Algorithms
Sanchez, Ernesto; Tonda, Alberto
2012-01-01
This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers who are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the
Model based development of engine control algorithms
Dekker, H.J.; Sturm, W.L.
1996-01-01
Model-based development of engine control systems has several advantages. Development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both the engine and the control system. After optimizing the control algorithm, it can be executed
A Hybrid Chaotic Quantum Evolutionary Algorithm
DEFF Research Database (Denmark)
Cai, Y.; Zhang, M.; Cai, H.
2010-01-01
A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence, and suppress premature convergence in quantum evolutionary algorithms. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form...... and enhance the global search ability. A large number of tests show that the proposed algorithm has higher convergence speed and better optimizing ability than the quantum evolutionary algorithm, the real-coded quantum evolutionary algorithm and the hybrid quantum genetic algorithm. Tests also show that when chaos...... is introduced to the quantum evolutionary algorithm, the hybrid chaotic search strategy is superior to the carrier chaotic strategy, and has better comprehensive performance than the chaotic mutation strategy in most cases. In particular, the proposed algorithm is the only one that has a 100% convergence rate in all...
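Chaotic initialization of the kind mentioned above is often implemented with the logistic map; the sketch below is a generic illustration of that idea (the map parameters are assumptions, not taken from the paper):

```python
def logistic_map_population(pop_size, dim, x0=0.37, r=4.0):
    """Chaotic initialization: iterate the logistic map x <- r*x*(1-x)
    so that initial individuals spread over (0, 1) deterministically,
    without a pseudo-random number generator."""
    pop, x = [], x0
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = r * x * (1.0 - x)
            ind.append(x)
        pop.append(ind)
    return pop

pop = logistic_map_population(10, 5)
```

At r = 4 the map is fully chaotic, so successive values are ergodic over (0, 1); rescaling maps them onto any real-valued search interval.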
Hybridizing Evolutionary Algorithms with Opportunistic Local Search
DEFF Research Database (Denmark)
Gießen, Christian
2013-01-01
There is empirical evidence that memetic algorithms (MAs) can outperform plain evolutionary algorithms (EAs). Recently the first runtime analyses have been presented proving the aforementioned conjecture rigorously by investigating Variable-Depth Search, VDS for short (Sudholt, 2008). Sudholt...
Algorithmic Mechanism Design of Evolutionary Computation.
Pei, Yan
2015-01-01
We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly, so that the desired and preset objective(s) are achieved. As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its foundations from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.
Evolutionary algorithms for mobile ad hoc networks
Dorronsoro, Bernabé; Danoy, Grégoire; Pigné, Yoann; Bouvry, Pascal
2014-01-01
Describes how evolutionary algorithms (EAs) can be used to identify, model, and minimize day-to-day problems that arise for researchers in optimization and mobile networking. Mobile ad hoc networks (MANETs), vehicular networks (VANETs), sensor networks (SNs), and hybrid networks—each of these requires a designer's keen sense and knowledge of evolutionary algorithms in order to help with the common issues that plague professionals involved in optimization and mobile networking. This book introduces readers to both mobile ad hoc networks and evolutionary algorithms, presenting basic concepts as well as detailed descriptions of each. It demonstrates how metaheuristics and evolutionary algorithms (EAs) can be used to help provide low-cost operations in the optimization process—allowing designers to put some "intelligence" or sophistication into the design. It also offers efficient and accurate information on dissemination algorithms, topology management, and mobility models to address challenges in the ...
Automatic design of decision-tree algorithms with evolutionary algorithms.
Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A
2013-01-01
This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
Termination Criteria in Evolutionary Algorithms: A Survey
DEFF Research Database (Denmark)
Ghoreishi, Newsha; Clausen, Anders; Jørgensen, Bo Nørregaard
2017-01-01
Over the last decades, evolutionary algorithms have been extensively used to solve multi-objective optimization problems. However, the number of required function evaluations is not determined by the nature of these algorithms, which is often seen as a drawback. Therefore, a robust and reliable termin...
A Clustal Alignment Improver Using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Rene; Fogel, Gary B.; Krink, Thimo
2002-01-01
Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EAs) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed for the initial EA population. Our results clearly show...
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Infrastructure system restoration planning using evolutionary algorithms
Corns, Steven; Long, Suzanna K.; Shoberg, Thomas G.
2016-01-01
This paper presents an evolutionary algorithm to address restoration issues for supply chain interdependent critical infrastructure. Rapid restoration of infrastructure after a large-scale disaster is necessary to sustain a nation's economy and security, but such long-term restoration has not been investigated as thoroughly as initial rescue and recovery efforts. A model of the Greater Saint Louis, Missouri area was created and a disaster scenario simulated. An evolutionary algorithm is used to determine the order in which the bridges should be repaired based on indirect costs. Solutions were evaluated based on the reduction of indirect costs and the restoration of transportation capacity. When compared to a greedy algorithm, the evolutionary algorithm solution reduced indirect costs by approximately 12.4% by restoring automotive travel routes for workers and re-establishing the flow of commodities across the three rivers in the Saint Louis area.
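Repair-order problems like the bridge sequencing described above are commonly encoded as permutations, with crossover operators that preserve permutation validity. The order-crossover sketch below is a generic illustration of that encoding, not the operator used in the paper; the bridge names are placeholders:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a random slice from one parent, then fill
    the remaining positions in the order the missing genes appear in the
    other parent, so the child is always a valid permutation."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

rng = random.Random(42)
bridges = ["B1", "B2", "B3", "B4", "B5"]   # hypothetical repair tasks
child = order_crossover(bridges, ["B5", "B4", "B3", "B2", "B1"], rng)
```

A fitness function over such permutations would then score each repair order by the indirect costs it accrues, as in the paper's evaluation.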
Warehouse Optimization Model Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Guofeng Qin
2013-01-01
This paper takes the Bao Steel logistics automated warehouse system as an example. The premise is to keep the center of gravity of each shelf below half of the shelf height. As a result, the time needed to store or retrieve goods on the shelf is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and optimized using a genetic algorithm, yielding a locally optimal solution. Before optimization, the average time to store or retrieve goods is 4.52996 s, and the average distance between goods of the same kind is 2.35318 m. After optimization, the average time is 4.28859 s, and the average distance is 1.97366 m. From this analysis we conclude that the model can improve the efficiency of cargo storage.
Food processing optimization using evolutionary algorithms | Enitan ...
African Journals Online (AJOL)
Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the ...
Protein Structure Prediction with Evolutionary Algorithms
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.
1999-02-08
Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for GAs as well as other heuristic methods for solving the protein structure prediction (PSP) problem on the HP model.
Turbopump Performance Improved by Evolutionary Algorithms
Oyama, Akira; Liou, Meng-Sing
2002-01-01
The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.
Automated Antenna Design with Evolutionary Algorithms
Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.
2006-01-01
Current methods of designing and optimizing antennas by hand are time- and labor-intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to
Hybrid Microgrid Configuration Optimization with Evolutionary Algorithms
Lopez, Nicolas
This dissertation explores the Renewable Energy Integration Problem, and proposes a Genetic Algorithm embedded with a Monte Carlo simulation to solve large instances of the problem that are impractical to solve via full enumeration. The Renewable Energy Integration Problem is defined as finding the optimum set of components to supply the electric demand of a hybrid microgrid. The components considered are solar panels, wind turbines, diesel generators, electric batteries, connections to the power grid, and converters, which can be inverters and/or rectifiers. The methodology developed is explained, as well as the combinatorial formulation. In addition, two case studies of a single-objective optimization version of the problem are presented, minimizing cost and minimizing global warming potential (GWP), followed by a multi-objective implementation of the proposed methodology, utilizing a non-dominated sorting Genetic Algorithm embedded with a Monte Carlo simulation. The method is validated by solving a small instance of the problem with a known solution via a full enumeration algorithm developed by NREL in their software HOMER. The dissertation concludes that evolutionary algorithms embedded with Monte Carlo simulation, namely modified Genetic Algorithms, are an efficient way of solving the problem, finding approximate solutions in the case of single-objective optimization, and approximating the true Pareto front in the case of multi-objective optimization of the Renewable Energy Integration Problem.
Flexible Ligand Docking Using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Rene
2003-01-01
The docking of ligands to proteins can be formulated as a computational problem where the task is to find the most favorable energetic conformation among the large space of possible protein–ligand complexes. Stochastic search methods such as evolutionary algorithms (EAs) can be used to sample large...... search spaces effectively and is one of the commonly used methods for flexible ligand docking. During the last decade, several EAs using different variation operators have been introduced, such as the ones provided with the AutoDock program. In this paper we evaluate the performance of different EA...... settings such as choice of variation operators, population size, and usage of local search. The comparison is performed on a suite of six docking problems previously used to evaluate the performance of search algorithms provided with the AutoDock program package. The results from our investigation confirm...
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis algorithm (MA) and the Strong Selection Weak Mutation (SSWM) algorithm.
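For context, the multiplicative drift theorem the abstract refers to is commonly stated as follows (a standard formulation from runtime analysis, not the paper's generalized version). Let $(X_t)$ be a non-negative process with initial value $s_0$ and minimum positive value $s_{\min}$, and let $T$ be the first hitting time of state $0$:

```latex
\mathbb{E}\!\left[X_t - X_{t+1} \mid X_t = s\right] \;\ge\; \delta s
\quad (\delta > 0)
\;\Longrightarrow\;
\mathbb{E}[T] \;\le\; \frac{1 + \ln\!\left(s_0 / s_{\min}\right)}{\delta}.
```

The paper's contribution, per the abstract, is a more general multiplicative drift theorem of this form that also covers non-elitist EAs and admits both positive and negative results.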
Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.
Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J
2016-01-01
We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses the mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study, in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules.
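The core computation of such a Gaussian model-based algorithm reduces to evaluating a normal CDF at 0 HU. The sketch below illustrates that step only; the function name and example values are assumptions for demonstration, not the paper's validated decision criteria:

```python
import math

def negative_pixel_fraction(mean_hu, sd_hu):
    """Estimated fraction of pixels below 0 HU, assuming the nodule's
    attenuation values follow a Gaussian with the given mean and SD:
    P(X < 0) = Phi((0 - mean) / sd), with Phi the standard normal CDF."""
    z = (0.0 - mean_hu) / sd_hu
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical nodule: mean 10 HU, SD 20 HU -> roughly 31% negative pixels.
frac = negative_pixel_fraction(10.0, 20.0)
```

Because only the mean and standard deviation are needed, the estimate can be computed from routinely reported ROI statistics without pixel-level postprocessing, which is the workflow advantage the abstract highlights.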
A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis
Directory of Open Access Journals (Sweden)
Zhiming Song
2015-01-01
As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is, under some mild conditions, a piecewise continuous (m-1)-dimensional manifold in the decision space. How to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the promising area of the decision space is modelled by a probability distribution whose centroid is an (m-1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA are also identified and discussed in this paper.
Wind farm optimization using evolutionary algorithms
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting, and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output for a stated wind environment at the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for lack of system reliability. The viral system algorithm used in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, introducing the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a
A Double Evolutionary Pool Memetic Algorithm for Examination Timetabling Problems
Directory of Open Access Journals (Sweden)
Yu Lei
2014-01-01
A double evolutionary pool memetic algorithm is proposed to solve the examination timetabling problem. To improve the performance of the proposed algorithm, two evolutionary pools, that is, a main evolutionary pool and a secondary evolutionary pool, are employed. The genetic operators have been specially designed to fit the examination timetabling problem. A simplified version of the simulated annealing strategy is designed to speed up the convergence of the algorithm, and a clonal mechanism is introduced to preserve population diversity. Extensive experiments carried out on 12 benchmark examination timetabling instances show that the proposed algorithm is able to produce promising results for the uncapacitated examination timetabling problem.
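The simplified simulated annealing strategy mentioned above rests on the standard Metropolis acceptance rule, sketched here as a generic illustration (not the paper's exact schedule or cost model):

```python
import math
import random

def accept(delta, temperature, rng):
    """Metropolis acceptance rule used in simulated annealing:
    always take improving moves (delta <= 0), and take worsening moves
    with probability exp(-delta / T), which shrinks as T cools."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

rng = random.Random(0)
always = accept(-5.0, 1.0, rng)   # an improvement is always accepted
rarely = accept(50.0, 0.1, rng)   # a large worsening move at low T is rejected
```

Lowering the temperature over the run shifts the search from exploration toward exploitation, which is what speeds convergence in the memetic framework.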
Model-based Bayesian signal extraction algorithm for peripheral nerves
Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.
2017-10-01
Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, limiting their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to threefold, and increased the correlation coefficient between the original and recovered signals by 10–20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of
Evaluation of models generated via hybrid evolutionary algorithms ...
African Journals Online (AJOL)
2016-04-02
Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis concentrations ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of ... Principal component analysis (PCA) was carried out on the input dataset used for the model ...
DEFF Research Database (Denmark)
Li, Wuzhao; Wang, Lei; Cai, Xingjuan
2015-01-01
and affect each other in many ways. The relationships include competition, predation, parasitism, mutualism and pythogenesis. In this paper, we consider the five relationships between solutions to propose a co-evolutionary algorithm termed species co-evolutionary algorithm (SCEA). In SCEA, five operators...
A backtracking evolutionary algorithm for power systems
Directory of Open Access Journals (Sweden)
Chiou Ji-Pyng
2017-01-01
Full Text Available This paper presents a backtracking variable scaling hybrid differential evolution, called backtracking VSHDE, for solving optimal network reconfiguration problems for power loss reduction in distribution systems. The concepts of backtracking, a variable scaling factor, migrating and accelerated operations, and a boundary control mechanism are embedded in the original differential evolution (DE) to form the backtracking VSHDE. The backtracking and boundary control mechanisms increase the population diversity, and, according to the convergence property of the population, the scaling factor is adjusted based on the 1/5 success rule of evolution strategies (ESs). A larger population size must normally be used in evolutionary algorithms (EAs) to maintain population diversity; to overcome this drawback, two operations, an acceleration operation and a migrating operation, are embedded into the proposed method. The feeder reconfiguration of distribution systems is modelled as an optimization problem that aims at achieving minimum loss subject to voltage and current constraints, so finding the proper system topology that reduces power loss for a given load pattern is an important issue. Mathematically, the network reconfiguration problem is a nonlinear programming problem with integer variables. One three-feeder network reconfiguration system from the literature is studied using the proposed backtracking VSHDE method and simulated annealing (SA). Numerical results show that the proposed method outperforms SA.
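The 1/5 success rule referenced in the abstract above has a simple concrete form. The sketch below is a generic illustration of the rule from evolution strategies, not the authors' VSHDE code; the damping constant `c = 0.9` is an assumed typical value.

```python
def adapt_scale(scale, success_rate, c=0.9):
    """Rechenberg's 1/5 success rule: enlarge the step size when more
    than one fifth of trial solutions improve on their parents, shrink
    it otherwise, and leave it unchanged at exactly 1/5."""
    if success_rate > 0.2:
        return scale / c   # search is easy: take bigger steps
    if success_rate < 0.2:
        return scale * c   # search is hard: take smaller steps
    return scale
```

In a DE-style loop, `success_rate` would be the fraction of trial vectors accepted over a recent window of generations.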
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined, based on instantaneous vehicle speed, as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, and kinetic intensity, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time-consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent-variable values using experimental data, through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The roulette-wheel selection technique with an elitist strategy drives the optimization process, which consists of minimizing the errors with respect to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
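Roulette-wheel selection with an elitist strategy, as used in the methodology above, can be sketched generically. This is an illustrative toy, not the Integrated Bus Information System implementation; the function names are invented for the sketch, and it assumes strictly positive fitness values.

```python
import random

def roulette_select(population, fitness, rng=random):
    """Pick one individual with probability proportional to its fitness."""
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for ind, fit in zip(population, fitness):
        acc += fit
        if pick <= acc:
            return ind
    return population[-1]   # guard against floating-point round-off

def next_generation(population, fitness, n_elite=1, rng=random):
    """Elitist strategy: copy the best individuals unchanged, then fill
    the rest of the generation by roulette-wheel selection."""
    ranked = sorted(zip(fitness, range(len(population))), reverse=True)
    elites = [population[i] for _, i in ranked[:n_elite]]
    rest = [roulette_select(population, fitness, rng)
            for _ in range(len(population) - n_elite)]
    return elites + rest
```

In the paper's setting an individual would be a sequence of microtrip genes and fitness would decrease with the error to the target cycle parameters.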
Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm
Directory of Open Access Journals (Sweden)
Carolina Lagos
2014-01-01
Full Text Available Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single- and, to a lesser extent, multiobjective optimisation problems. To solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one circumstantial knowledge, and the third one normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric.
Comparing evolutionary strategies on a biobjective cultural algorithm.
Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Soto, Ricardo; Rubio, José-Miguel; Paredes, Fernando
2014-01-01
Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single- and, to a lesser extent, multiobjective optimisation problems. To solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies: the first one implements historical knowledge, the second one circumstantial knowledge, and the third one normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none of them has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric.
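The hypervolume S metric used for comparison has a simple closed form in the biobjective case. The sketch below is a generic illustration assuming both objectives are to be minimised (a coverage-maximisation objective would first be negated) and that every front point dominates the reference point.

```python
def hypervolume_2d(front, ref):
    """Hypervolume (S metric) of a biobjective minimisation front:
    the area dominated by the front and bounded by the reference point.
    Assumes each point p satisfies p <= ref in both objectives."""
    # Sort by the first objective and walk the resulting staircase.
    pts = sorted(front)
    volume = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:   # keep only non-dominated staircase steps
            volume += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return volume
```

A larger hypervolume means the front is closer to the true Pareto front and better spread, which is why it is a common yardstick for comparing algorithms such as BOCA, PAES, and NSGA-II.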
Comparing Evolutionary Algorithms on Binary Constraint Satisfaction Problems
Craenen, B.G.W.; Eiben, A.E.; van Hemert, J.I.
2003-01-01
Constraint handling is not straightforward in evolutionary algorithms (EAs) since the usual search operators, mutation and recombination, are 'blind' to constraints. Nevertheless, the issue is highly relevant, for many challenging problems involve constraints. Over the last decade, numerous EAs for
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with the OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, a comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and more evenly distributed Pareto-optimal fronts than the compared ones.
Analog Circuit Design Optimization Based on Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Mansour Barari
2014-01-01
Full Text Available This paper investigates an evolutionary design system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of a combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
NEW SYMMETRIC ENCRYPTION SYSTEM BASED ON EVOLUTIONARY ALGORITHM
A. Mouloudi
2015-01-01
In this article, we present a new symmetric encryption system which is a combination of our ciphering evolutionary system SEC [1] and a new ciphering method called "fragmentation". The latter alters the appearance frequencies of characters in a given text. Our system has at its disposal two keys: the first one is generated by the evolutionary algorithm, the second one is generated after the "fragmentation" step. Both of them are symmetric, session keys and strengt...
Evolutionary Algorithms For Neural Networks Binary And Real Data Classification
Directory of Open Access Journals (Sweden)
Dr. Hanan A.R. Akkar
2015-08-01
Full Text Available Artificial neural networks are complex networks emulating the way neurons in the human brain process data. They have been widely used in prediction, clustering, classification, and association. The training algorithms used to determine the network weights are among the most important factors that influence neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights in order to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights and to compare their performance with that of classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and of continuous real data sets such as Iris and Ecoli.
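Training a network's weights with an evolutionary algorithm, as on the XOR task above, can be illustrated with a minimal (1+1) evolution strategy on a 2-2-1 network. This sketch is a generic toy, not one of the algorithms compared in the paper; the architecture, step size, and generation count are assumptions.

```python
import math, random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))   # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """2-2-1 network, tanh hidden units, sigmoid output. w holds 9
    weights: 2x2 hidden weights + 2 hidden biases, then 2 output
    weights + 1 output bias."""
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return sigmoid(w[6] * h[0] + w[7] * h[1] + w[8])

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=1500, sigma=0.3, seed=1):
    """(1+1) evolution strategy: mutate every weight with Gaussian
    noise and keep the child only if it does not increase the loss."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(9)]
    best = loss(w)
    for _ in range(generations):
        child = [wi + rng.gauss(0, sigma) for wi in w]
        cl = loss(child)
        if cl <= best:
            w, best = child, cl
    return w
```

No gradients are needed: the loss drops monotonically because a child is only accepted when it matches or improves the parent.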
Evolutionary algorithms for the Vehicle Routing Problem with Time Windows
Bräysy, Olli; Dullaert, Wout; Gendreau, Michel
2004-01-01
This paper surveys the research on evolutionary algorithms for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW can be described as the problem of designing least cost routes from a single depot to a set of geographically scattered points. The routes must be designed in such a way
Efficient design of hybrid renewable energy systems using evolutionary algorithms
Energy Technology Data Exchange (ETDEWEB)
Bernal-Agustin, Jose L.; Dufo-Lopez, Rodolfo [Department of Electrical Engineering, University of Zaragoza, Calle Maria de Luna, 3. 50018 Zaragoza (Spain)
2009-03-15
This paper shows an exhaustive study that has obtained the best values for the control parameters of an evolutionary algorithm developed by the authors, which permits the efficient design and control of hybrid electrical energy generation systems, obtaining good solutions with low computational effort. In particular, for this study, a complex photovoltaic (PV)-wind-diesel-batteries-hydrogen system has been considered. In order to appropriately evaluate the behaviour of the evolutionary algorithm, the global optimal solution (the one with the lowest total net present cost) has been obtained by an enumerative method. Next, a large number of designs were created using the evolutionary algorithm while modifying the values of the parameters that control its functioning. Finally, from the obtained results, it has been possible to determine the population size, the number of generations, the crossover and mutation rates, and the type of mutation most suitable to assure a probability near 100% of obtaining the global optimal design using the evolutionary algorithm. (author)
Parameter Control in Evolutionary Algorithms: Trends and Challenges
Karafotias, Giorgos; Hoogendoorn, Mark; Eiben, A.E.
2015-01-01
More than a decade after the first extensive overview on parameter control, we revisit the field and present a survey of the state-of-the-art. We briefly summarize the development of the field and discuss existing work related to each major parameter or component of an evolutionary algorithm. Based
Model-based remote sensing algorithms for particulate organic ...
Indian Academy of Sciences (India)
Model-based remote sensing algorithms for particulate organic carbon (POC) in the Northeastern Gulf of Mexico. Young Baek Son, Wilford D Gardner, Alexey V Mishonov, Mary Jo Richardson. Volume 118 Issue 1 February 2009 pp 1-10 ...
Logit Model based Performance Analysis of an Optimization Algorithm
Hernández, J. A.; Ospina, J. D.; Villada, D.
2011-09-01
In this paper, the performance of the Multi Dynamics Algorithm for Global Optimization (MAGO) is studied through simulation using five standard test functions. To assess whether the algorithm converges to a global optimum, a set of experiments searching for the best combination of the only two MAGO parameters, the number of iterations and the number of potential solutions, is considered. These parameters are sequentially varied while increasing the dimension of several test functions, and performance curves are obtained. The MAGO was originally designed to perform well with small populations; therefore, the self-adaptation task with small populations is more challenging when the problem dimension is higher. The results show that the convergence probability to an optimal solution increases as the number of iterations and the number of potential solutions grow. However, success rates decline as the dimension of the problem escalates. A logit model is used to determine the mutual effects between the parameters of the algorithm.
Variants of Evolutionary Algorithms for Real-World Applications
Weise, Thomas; Michalewicz, Zbigniew
2012-01-01
Evolutionary Algorithms (EAs) are population-based, stochastic search algorithms that mimic natural evolution. Due to their ability to find excellent solutions for conventionally hard and dynamic problems within acceptable time, EAs have attracted interest from many researchers and practitioners in recent years. This book “Variants of Evolutionary Algorithms for Real-World Applications” aims to promote the practitioner’s view on EAs by providing a comprehensive discussion of how EAs can be adapted to the requirements of various applications in the real-world domains. It comprises 14 chapters, including an introductory chapter re-visiting the fundamental question of what an EA is and other chapters addressing a range of real-world problems such as production process planning, inventory system and supply chain network optimisation, task-based jobs assignment, planning for CNC-based work piece construction, mechanical/ship design tasks that involve runtime-intense simulations, data mining for the predictio...
Improved Quantum-Inspired Evolutionary Algorithm for Engineering Design Optimization
Directory of Open Access Journals (Sweden)
Jinn-Tsong Tsai
2012-01-01
Full Text Available An improved quantum-inspired evolutionary algorithm is proposed for solving mixed discrete-continuous nonlinear problems in engineering design. The proposed Latin square quantum-inspired evolutionary algorithm (LSQEA combines Latin squares and quantum-inspired genetic algorithm (QGA. The novel contribution of the proposed LSQEA is the use of a QGA to explore the optimal feasible region in macrospace and the use of a systematic reasoning mechanism of the Latin square to exploit the better solution in microspace. By combining the advantages of exploration and exploitation, the LSQEA provides higher computational efficiency and robustness compared to QGA and real-coded GA when solving global numerical optimization problems with continuous variables. Additionally, the proposed LSQEA approach effectively solves mixed discrete-continuous nonlinear design optimization problems in which the design variables are integers, discrete values, and continuous values. The computational experiments show that the proposed LSQEA approach obtains better results compared to existing methods reported in the literature.
A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics
Directory of Open Access Journals (Sweden)
Shan Li
2014-01-01
Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, especially those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics: feature selection, parameter estimation, and reconstruction of biological networks.
A New Algorithm for Bi-Directional Evolutionary Structural Optimization
Huang, Xiaodong; Xie, Yi Min; Burry, Mark Cameron
In this paper, a new algorithm for bi-directional evolutionary structural optimization (BESO) is proposed. In the new BESO method, the adding and removing of material is controlled by a single parameter, i.e. the removal ratio of volume (or weight). The convergence of the iteration is determined by a performance index of the structure. It is found that the new BESO algorithm has many advantages over existing ESO and BESO methods in terms of efficiency and robustness. Several 2D and 3D examples of stiffness optimization problems are presented and discussed.
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators are first analyzed, along with local search methods for TSPs, in terms of their solution qualities and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the numbers of cities range from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly with a fixed small population and a limited family competition length in reasonable time, when used to solve large TSPs.
Efficient discrete cosine transform model-based algorithm for photoacoustic image reconstruction
Zhang, Yan; Wang, Yuanyuan; Zhang, Chen
2013-06-01
The model-based algorithm is an effective reconstruction method for photoacoustic imaging (PAI). Compared with analytical reconstruction algorithms, the model-based algorithm is able to provide a more accurate, higher-resolution reconstructed image. However, its relatively heavy computational complexity and huge memory storage requirement often restrict its applications. We incorporate the discrete cosine transform (DCT) into PAI reconstruction and establish a new photoacoustic model. With this new model, an efficient algorithm is proposed for PAI reconstruction. Only the relatively significant DCT coefficients of the measured signals are used to reconstruct the image, so computation is saved. The theoretical computational complexity of the proposed algorithm is worked out, and it is proved that the proposed method is computationally efficient. The proposed algorithm is also verified through numerical simulations and in vitro experiments. Compared with previously developed model-based methods, the proposed algorithm provides an equivalent reconstruction in much less time. From the theoretical analysis and the experimental results, it can be concluded that model-based PAI reconstruction can be accelerated by the proposed algorithm, so that the practical applicability of PAI may be enhanced.
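The core idea above, reconstructing from only the significant DCT coefficients, can be shown on a 1-D signal. The sketch below is a generic illustration of DCT-domain truncation, not the paper's photoacoustic model; it writes out a plain orthonormal DCT-II pair explicitly to stay self-contained.

```python
import math

def dct(signal):
    """Orthonormal DCT-II of a real signal."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse of the orthonormal DCT-II (i.e., DCT-III)."""
    n = len(coeffs)
    out = []
    for i in range(n):
        s = coeffs[0] / math.sqrt(n)
        s += sum(math.sqrt(2.0 / n) * c *
                 math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                 for k, c in enumerate(coeffs) if k > 0)
        out.append(s)
    return out

def compress(signal, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients and
    reconstruct -- the same truncation idea used to save computation."""
    c = dct(signal)
    threshold = sorted(map(abs, c), reverse=True)[keep - 1]
    kept = [ci if abs(ci) >= threshold else 0.0 for ci in c]
    return idct(kept)
```

Because the DCT concentrates the energy of smooth signals in a few coefficients, discarding the rest trades a small reconstruction error for a large reduction in work.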
Wirelength Minimization in Partitioning and Floorplanning Using Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
I. Hameem Shanavas
2011-01-01
Full Text Available Minimizing wirelength plays an important role in physical design automation of very large-scale integration (VLSI) chips. The objective of wirelength minimization can be achieved by finding an optimal solution for VLSI physical design components like partitioning and floorplanning. In VLSI circuit partitioning, the problem of obtaining a minimum delay has prime importance. In VLSI circuit floorplanning, the problem of minimizing silicon area is also a hot issue. Reducing delay in partitioning and area in floorplanning helps to minimize wirelength. The enhancements in partitioning and floorplanning also influence other criteria like power, cost, and clock speed. A Memetic Algorithm (MA) is an evolutionary algorithm that includes one or more local search phases within its evolutionary cycle; here it is used to obtain minimum wirelength by reducing delay in partitioning and area in floorplanning. The algorithm combines a hierarchical design technique (a genetic algorithm) with a constructive technique (simulated annealing) as the local search to solve the VLSI partitioning and floorplanning problems. The MA can quickly produce optimal solutions for popular benchmarks.
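The simulated-annealing local search inside a memetic algorithm like the one above hinges on the Metropolis acceptance criterion. The sketch below shows that criterion generically; it is not the authors' VLSI-specific code.

```python
import math, random

def sa_accept(delta, temperature, rng=random):
    """Metropolis criterion used by simulated annealing: always accept
    an improving move (delta <= 0), accept a worsening move with
    probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

In the floorplanning setting, `delta` would be the change in area (or delay) caused by a candidate move, and the temperature would be lowered over the course of the local search so that worsening moves become increasingly rare.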
Optimal classification of standoff bioaerosol measurements using evolutionary algorithms
Nyhavn, Ragnhild; Moen, Hans J. F.; Farsund, Øystein; Rustad, Gunnar
2011-05-01
Early warning systems based on standoff detection of biological aerosols require real-time signal processing of a large quantity of high-dimensional data, challenging the system's efficiency in terms of both computational complexity and classification accuracy. Hence, optimal feature selection is essential in forming a stable and efficient classification system. This involves finding optimal signal processing parameters, characteristic spectral frequencies, and other data transformations in a large-magnitude variable space, underscoring the need for an efficient and smart search algorithm. Evolutionary algorithms are population-based optimization methods inspired by Darwinian evolutionary theory. These methods apply selection, mutation, and recombination to a population of competing solutions and optimize this set by evolving the population from generation to generation. We have employed genetic algorithms in the search for optimal feature selection and signal processing parameters for the classification of biological agents. The experimental data were acquired with a spectrally resolved lidar based on ultraviolet laser-induced fluorescence, and included several releases of five common simulants. The genetic algorithm outperforms benchmark methods involving analytic, sequential, and random approaches such as support vector machines, Fisher's linear discriminant, and principal component analysis, with significantly improved classification accuracy compared to the best classical method.
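A genetic algorithm over feature-subset bitmasks, the core of the approach described above, can be sketched generically. The code below is an illustrative GA with tournament selection, uniform crossover, bit-flip mutation, and elitism; the operator choices, parameter values, and the pluggable `fitness` callback are assumptions for the sketch, not details from the paper.

```python
import random

def ga_feature_select(n_features, fitness, pop_size=24, generations=150,
                      p_mut=None, seed=7):
    """Minimal GA over 0/1 feature masks. `fitness` maps a mask to a
    score to maximise (e.g. cross-validated classifier accuracy)."""
    rng = random.Random(seed)
    p_mut = p_mut or 1.0 / n_features
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                               # elitism
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)     # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b       # uniform xover
                     for a, b in zip(p1, p2)]
            child = [1 - g if rng.random() < p_mut else g  # bit flips
                     for g in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best
```

In the lidar application, the fitness of a mask would be the classification accuracy achieved using only the selected spectral features.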
An Evolutionary Algorithm to Mine High-Utility Itemsets
Directory of Open Access Journals (Sweden)
Jerry Chun-Wei Lin
2015-01-01
Full Text Available High-utility itemset mining (HUIM) has been a critical issue in recent years, since it can reveal profitable products by considering both quantity and profit factors, unlike the frequent itemset mining (FIM) of association rules (ARs). In this paper, an evolutionary algorithm is presented to efficiently mine high-utility itemsets (HUIs) based on binary particle swarm optimization. A maximal pattern (MP)-tree structure is further designed to solve the combinational problem in the evolution process. Substantial experiments on real-life datasets show that the proposed binary PSO-based algorithm obtains better results than the state-of-the-art GA-based algorithm.
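The binary PSO at the heart of such a miner updates each bit through a sigmoid of its velocity. The sketch below shows one generic binary-PSO step; the inertia and acceleration coefficients are assumed typical values, and the itemset-utility fitness and MP-tree from the paper are omitted.

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def bpso_step(positions, velocities, pbest, gbest,
              w=0.7, c1=1.5, c2=1.5, rng=random):
    """One binary-PSO update: velocities are pulled toward each
    particle's personal best and the global best, then each bit is
    resampled with probability sigmoid(velocity)."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] = 1 if rng.random() < sigmoid(v[d]) else 0
    return positions, velocities
```

In the HUIM setting, each bit would mark whether an item belongs to the candidate itemset, and pbest/gbest would be the highest-utility itemsets found so far.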
Economic Modeling Using Evolutionary Algorithms: The Effect of a Binary Encoding of Strategies
L. Waltman (Ludo); N.J.P. van Eck (Nees Jan); R. Dekker (Rommert); U. Kaymak (Uzay)
2009-01-01
textabstractWe are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based
Evolutionary Algorithms for Boolean Functions in Diverse Domains of Cryptography.
Picek, Stjepan; Carlet, Claude; Guilley, Sylvain; Miller, Julian F; Jakobovic, Domagoj
2016-01-01
The role of Boolean functions is prominent in several areas including cryptography, sequences, and coding theory. Therefore, various methods for the construction of Boolean functions with desired properties are of direct interest. New motivations on the role of Boolean functions in cryptography with attendant new properties have emerged over the years. There are still many combinations of design criteria left unexplored and in this matter evolutionary computation can play a distinct role. This article concentrates on two scenarios for the use of Boolean functions in cryptography. The first uses Boolean functions as the source of the nonlinearity in filter and combiner generators. Although relatively well explored using evolutionary algorithms, it still presents an interesting goal in terms of the practical sizes of Boolean functions. The second scenario appeared rather recently where the objective is to find Boolean functions that have various orders of the correlation immunity and minimal Hamming weight. In both these scenarios we see that evolutionary algorithms are able to find high-quality solutions where genetic programming performs the best.
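Properties such as nonlinearity, which the evolutionary searches above optimise, are computed from the Walsh-Hadamard spectrum of a Boolean function. The sketch below is a standard textbook computation, independent of the paper's algorithms.

```python
def walsh_transform(truth_table):
    """Walsh-Hadamard spectrum of a Boolean function given as a truth
    table of length 2**n with values 0/1."""
    # Signed representation: f(x) -> (-1)^f(x), then fast in-place WHT.
    spec = [1 - 2 * b for b in truth_table]
    h = 1
    while h < len(spec):
        for i in range(0, len(spec), 2 * h):
            for j in range(i, i + h):
                a, b = spec[j], spec[j + h]
                spec[j], spec[j + h] = a + b, a - b
        h *= 2
    return spec

def nonlinearity(truth_table):
    """Hamming distance to the nearest affine function:
    2**(n-1) - max|W_f| / 2."""
    spec = walsh_transform(truth_table)
    return len(truth_table) // 2 - max(abs(w) for w in spec) // 2
```

A fitness function for an evolutionary search would typically combine this nonlinearity with other spectral criteria, such as the correlation immunity order and Hamming weight mentioned in the abstract.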
I-Ching Divination Evolutionary Algorithm and its Convergence Analysis.
Chen, C L Philip; Zhang, Tong; Chen, Long; Tam, Sik Chung
2017-01-01
An innovative simulated evolutionary algorithm (EA), called the I-Ching divination EA (IDEA), and its convergence analysis are proposed and investigated in this paper. Inherited from ancient Chinese culture, I-Ching divination has long been used as a divination system in traditional and modern China. Three operators evolved from I-Ching transformations are used in this new optimization algorithm: the intrication operator, the turnover operator, and the mutual operator. These new operators are very flexible in the evolution procedure. Additionally, two new spaces are defined in this paper, denoted the hexagram space and the state space. In order to analyze the convergence property of the I-Ching divination algorithm, a Markov model was adopted to analyze the characteristics of the operators. The proposed algorithm is proved to be a homogeneous Markov chain with a positive transition matrix. After giving the necessary theorems and the definitions of admissible functions and the I-Ching map, a precise proof that the states converge to the global optimum is presented. Compared with the genetic algorithm, particle swarm optimization, and the differential evolution algorithm, the proposed IDEA is much faster in reaching the global optimum.
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has generated a large amount of data for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability, and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified, and a platform for the development of appropriate model formalisms is established.
An evolutionary algorithm for a real vehicle routing problem
Directory of Open Access Journals (Sweden)
Adamidis, P.
2012-01-01
Full Text Available The NP-hard Vehicle Routing Problem (VRP) is central to the optimisation of distribution networks. Its main objective is to determine a set of vehicle trips of minimum total cost. The ideal schedule will efficiently exploit the company's resources, service all customers and satisfy the given (mainly daily) constraints. There have been many attempts to solve this problem with conventional techniques, but only for small-scale simplified problems, owing to the complexity of the problem and the large volume of data to be processed. Evolutionary Algorithms are search and optimization techniques capable of confronting such problems and reaching a good feasible solution in a reasonable period of time. In this paper we develop an Evolutionary Algorithm to solve the VRP of a specific transportation company in Volos, Greece, with different vehicle capacities. The algorithm has been tested with different configurations and constraints, and proved effective in reaching a solution that satisfies the company's needs.
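The evolutionary approach to the VRP can be sketched with a permutation encoding: a candidate is an ordering of customers, decoded into capacity-feasible trips. The sketch below uses made-up coordinates, demands, and capacity, and a simple swap mutation; the paper's actual operators and constraints are richer.

```python
import random

# Minimal EA sketch for a capacitated routing problem. A candidate is a
# permutation of customers, decoded greedily into trips that respect the
# vehicle capacity. All data below are illustrative, not from the paper.

random.seed(1)
DEPOT = (0.0, 0.0)
CUSTOMERS = [(2, 1), (1, 5), (6, 2), (5, 5), (8, 1), (3, 7)]
DEMAND = [3, 2, 4, 1, 2, 3]
CAPACITY = 6

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def decode(perm):
    """Split a customer permutation into capacity-feasible trips."""
    trips, trip, load = [], [], 0
    for c in perm:
        if load + DEMAND[c] > CAPACITY:
            trips.append(trip); trip, load = [], 0
        trip.append(c); load += DEMAND[c]
    trips.append(trip)
    return trips

def cost(perm):
    total = 0.0
    for trip in decode(perm):
        pos = DEPOT
        for c in trip:
            total += dist(pos, CUSTOMERS[c]); pos = CUSTOMERS[c]
        total += dist(pos, DEPOT)          # return to depot
    return total

def mutate(perm):
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]                # swap two customers
    return p

pop = [random.sample(range(len(CUSTOMERS)), len(CUSTOMERS)) for _ in range(20)]
for _ in range(200):
    pop.sort(key=cost)                     # truncation selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = min(pop, key=cost)
print(decode(best), round(cost(best), 2))
```

Real instances would add crossover, time windows, and heterogeneous fleets, but the encode/decode/select loop is the core of the method.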
Directory of Open Access Journals (Sweden)
Haihua Zhu
2016-01-01
Full Text Available To achieve an optimal design of the multilink transmission mechanism used in mechanical presses, intelligent optimization techniques are explored in this paper. A preference polyhedron model and a new domination-relationship evaluation methodology are proposed to balance the kinematic, dynamic, and other performances of the multilink transmission mechanism during the conceptual design phase. Building on the traditional single-target evaluation index of multicriteria design optimization, the robustness metrics of the mechanism system and the preference metrics of the decision-maker are incorporated into the preference polyhedron model and reflected in its geometrical characteristics. Finally, two optimized multilink transmission mechanisms are designed based on the proposed preference polyhedron model with different evolutionary algorithms, and the results verify the validity of the proposed optimization method.
Swarm, genetic and evolutionary programming algorithms applied to multiuser detection
Directory of Open Access Journals (Sweden)
Paul Jean Etienne Jeszensky
2005-02-01
Full Text Available In this paper, the recently published particle swarm optimization technique, applied to Direct Sequence/Code Division Multiple Access (DS/CDMA) systems with multiuser detection (MuD), is analyzed, evaluated and compared. The efficiency of the Swarm algorithm applied to DS/CDMA multiuser detection (Swarm-MuD) is compared through the trade-off between performance and computational complexity, with complexity expressed as the number of operations necessary to reach the performance of the optimum, or Maximum Likelihood (ML), detector. The comparison is carried out among the genetic algorithm, evolutionary programming with cloning, and the Swarm algorithm under the same simulation basis. Additionally, a heuristic MuD complexity analysis based on the number of computational operations is proposed. Finally, the input parameters of the Swarm algorithm are analyzed in an attempt to find the optimum (or near-optimum) parameters for the algorithm applied to the MuD problem.
Françoise Benz
2004-01-01
ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 1, 2, 3 and 4 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Evolutionary Heuristic Optimization: Genetic Algorithms and Estimation of Distribution Algorithms V. Robles Forcada and M. Perez Hernandez / Univ. de Madrid, Spain In the real world, there exist a huge number of problems that require getting an optimum or near-to-optimum solution. Optimization can be used to solve a lot of different problems such as network design, sets and partitions, storage and retrieval or scheduling. On the other hand, in nature, there exist many processes that seek a stable state. These processes can be seen as natural optimization processes. Over the last 30 years several attempts have been made to develop optimization algorithms, which simulate these natural optimization processes. These attempts have resulted in methods such as Simulated Annealing, based on natural annealing processes or Evolutionary Computation, based on biological evolution processes. Geneti...
Physical Mapping Using Simulated Annealing and Evolutionary Algorithms
DEFF Research Database (Denmark)
Vesterstrøm, Jacob Svaneborg
2003-01-01
Physical mapping (PM) is a method of bioinformatics that assists in DNA sequencing. The goal is to determine the order of a collection of fragments taken from a DNA strand, given knowledge of certain unique DNA markers contained in the fragments. Simulated annealing (SA) is the most widely used optimization method when searching for an ordering of the fragments in PM. In this paper, we applied an evolutionary algorithm to the problem, and compared its performance to that of SA and local search on simulated PM data, in order to determine the important factors in finding a good ordering of the segments. The analysis highlights the importance of a good PM model, a well-correlated fitness function, and high quality hybridization data. We suggest that future work in PM should focus on design of more reliable fitness functions and on developing error-screening algorithms.
Evolving the Topology of Hidden Markov Models using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Réne
2002-01-01
Hidden Markov models (HMMs) are widely used for speech recognition and have recently gained a lot of attention in the bioinformatics community, because of their ability to capture the information buried in biological sequences. Usually, heuristic algorithms such as Baum-Welch are used to estimate the model parameters. However, Baum-Welch has a tendency to stagnate on local optima. Furthermore, designing an optimal HMM topology usually requires a priori knowledge from a field expert and is usually found by trial-and-error. In this study, we present an evolutionary algorithm capable of evolving both the topology and the model parameters of HMMs. The applicability of the method is exemplified on a secondary structure prediction problem.
Using traveling salesman problem algorithms for evolutionary tree construction.
Korostensky, C; Gonnet, G H
2000-07-01
The construction of evolutionary trees is one of the major problems in computational biology, mainly due to its complexity. We present a new tree construction method that constructs a tree with minimum score for a given set of sequences, where the score is the amount of evolution measured in PAM distances. To do this, the problem of tree construction is reduced to the Traveling Salesman Problem (TSP). The input for the TSP algorithm is the set of pairwise distances of the sequences, and the output is a circular tour through the optimal, unknown tree plus the minimum score of the tree. The circular order and the score can be used to construct the topology of the optimal tree. Our method can be used for any scoring function that correlates to the amount of changes along the branches of an evolutionary tree; for instance, it could also be used for parsimony scores, but it cannot be used for a least squares fit of distances. A TSP solution reduces the space of all possible trees to 2n. Using this order, we can guarantee that we reconstruct a correct evolutionary tree if the absolute value of the error of each distance measurement is smaller than a threshold determined by the length of the shortest edge in the tree. For data sets with large errors, a dynamic programming approach is used to reconstruct the tree. Finally, simulations and experiments with real data are shown.
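The reduction takes the pairwise distances between sequences and asks for a circular tour through them. As a stand-in for the TSP step, the sketch below uses a nearest-neighbour heuristic on a made-up distance matrix; the paper uses a proper TSP algorithm, so this is illustrative only.

```python
# Illustrative TSP step of the tree-construction reduction: build a
# circular order of taxa from their pairwise distances. The 4x4 matrix
# and the nearest-neighbour heuristic are our own simplifications.

D = [
    [0, 3, 7, 8],
    [3, 0, 6, 9],
    [7, 6, 0, 2],
    [8, 9, 2, 0],
]

def tour(D, start=0):
    n = len(D)
    unvisited = set(range(n)) - {start}
    order = [start]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: D[last][j])  # closest unvisited taxon
        order.append(nxt)
        unvisited.remove(nxt)
    return order

order = tour(D)
# Tour length: sum of distances around the circular order.
score = sum(D[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))
print(order, score)
```

In the paper, the resulting circular order and score then constrain the topology of the optimal tree.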
Bidirectional Dynamic Diversity Evolutionary Algorithm for Constrained Optimization
Directory of Open Access Journals (Sweden)
Weishang Gao
2013-01-01
Full Text Available Evolutionary algorithms (EAs) have been shown to be effective for complex constrained optimization problems. However, inflexible exploration-exploitation and an improper penalty in penalty-function EAs can lead to losing a global optimum that lies near or on the constraint boundary, and determining an appropriate penalty coefficient is also difficult in most studies. In this paper, we propose a bidirectional dynamic diversity evolutionary algorithm (Bi-DDEA) with multiple agents that guide exploration-exploitation through local extrema to the global optimum in suitable steps. In Bi-DDEA, potential advantage is detected by three kinds of agents, and the scale and density of the agents change dynamically as potentially optimal areas emerge, which enables flexible exploration-exploitation. Meanwhile, a novel double optimum estimation strategy with objective fitness and penalty fitness is suggested to compute, respectively, the dominance trend of agents in the feasible region and in the forbidden region. This bidirectional evolution with multiple agents not only avoids the problem of determining a penalty coefficient but also quickly converges to a global optimum near or on the constraint boundary. By examining the speed and accuracy of Bi-DDEA across benchmark functions, the proposed method is shown to be effective.
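The idea of ranking by objective fitness inside the feasible region and by constraint violation ("penalty fitness") outside it, with no hand-tuned penalty coefficient, can be sketched on a toy problem whose optimum sits exactly on the constraint boundary. The problem (minimise x² subject to x ≥ 1) and the (1+1)-style search loop are our own illustration, not Bi-DDEA itself.

```python
import random

# Sketch of coefficient-free constrained selection in the spirit of the
# double optimum estimation strategy: feasible candidates compete on the
# objective, infeasible ones on violation, and feasibility always wins.
# Toy problem: minimise x^2 subject to x >= 1 (optimum on the boundary).

def objective(x): return x * x
def violation(x): return max(0.0, 1.0 - x)   # > 0 in the forbidden region

def better(a, b):
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:
        return objective(a) < objective(b)   # both feasible: compare objective
    if va == 0: return True                  # feasible beats infeasible
    if vb == 0: return False
    return va < vb                           # both infeasible: less violation wins

random.seed(0)
x = random.uniform(-5, 5)
for _ in range(2000):
    y = x + random.gauss(0, 0.1)             # small random perturbation
    if better(y, x):
        x = y
print(round(x, 3))                           # converges near the boundary x = 1
```

Because no penalty coefficient is ever multiplied into the objective, there is nothing to mistune, and the search settles on the boundary optimum from the feasible side.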
A multilevel evolutionary algorithm for optimizing numerical functions
Directory of Open Access Journals (Sweden)
Reza Akbari
2011-04-01
Full Text Available This is a study of the effects of multilevel selection (MLS) theory in optimizing numerical functions. Based on this theory, a Multilevel Evolutionary Optimization algorithm (MLEO) is presented. In MLEO, a species is subdivided into cooperative populations, each population is subdivided into groups, and evolution occurs at two levels, the individual and the group level. Fast population dynamics occur at the individual level: selection takes place among individuals of the same group, and the familiar genetic operators, mutation and crossover, are applied within groups. Slow population dynamics occur at the group level: selection takes place among the groups of a population, and group-level operators such as regrouping, migration, and extinction-colonization are applied among groups. In the regrouping process, all the groups are mixed together and new groups are formed. The migration process encourages an individual to leave its own group and move to one of its neighbouring groups. In the extinction-colonization process, a group is selected as extinct and replaced by the offspring of a colonist group. To evaluate MLEO, the proposed algorithm was used to optimize a set of well-known numerical functions. The preliminary results indicate that MLEO has a positive effect on the evolutionary process and provides an efficient way of performing numerical optimization.
Model-based fault diagnosis techniques design schemes, algorithms, and tools
Ding, Steven
2008-01-01
The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.
Improved multilayer OLED architecture using evolutionary genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Quirino, W.G. [LADOR - Laboratorio de Dispositivos Organicos, Dimat - Inmetro, Duque de Caxias, RJ (Brazil); Teixeira, K.C. [LADOR - Laboratorio de Dispositivos Organicos, Dimat - Inmetro, Duque de Caxias, RJ (Brazil); LOEM - Laboratorio de Optoeletronica Molecular, Physics Department, Pontifical Catholic University of Rio de Janeiro, 22453-900, Rio de Janeiro, RJ (Brazil); Legnani, C. [LADOR - Laboratorio de Dispositivos Organicos, Dimat - Inmetro, Duque de Caxias, RJ (Brazil); Calil, V.L. [LADOR - Laboratorio de Dispositivos Organicos, Dimat - Inmetro, Duque de Caxias, RJ (Brazil); LOEM - Laboratorio de Optoeletronica Molecular, Physics Department, Pontifical Catholic University of Rio de Janeiro, 22453-900, Rio de Janeiro, RJ (Brazil); Messer, B.; Neto, O.P. Vilela; Pacheco, M.A.C. [ICA - Laboratorio de Inteligencia Computacional Aplicada, Electrical Engineering Department, Pontifical Catholic University of Rio de Janeiro, 22451-900, Rio de Janeiro, RJ (Brazil); Cremona, M., E-mail: cremona@fis.puc-rio.b [LADOR - Laboratorio de Dispositivos Organicos, Dimat - Inmetro, Duque de Caxias, RJ (Brazil); LOEM - Laboratorio de Optoeletronica Molecular, Physics Department, Pontifical Catholic University of Rio de Janeiro, 22453-900, Rio de Janeiro, RJ (Brazil)
2009-12-31
Organic light-emitting diodes (OLEDs) constitute a new class of emissive devices, which present high efficiency and low voltage operation, among other advantages over current technology. Multilayer architecture (M-OLED) is generally used to optimize these devices, especially to overcome the suppression of light emission due to exciton recombination near the metal layers. However, improvements in recombination, transport and charge injection can also be achieved by blending electron- and hole-transporting layers into a single one. Graded emissive region devices can provide promising results regarding quantum and power efficiency, as well as brightness. The massive number of possible configurations, however, suggests that a search algorithm is better suited to this problem. In this work, multilayer OLEDs were simulated and fabricated using Genetic Algorithms (GAs) as an evolutionary strategy to improve their efficiency. Genetic Algorithms are stochastic algorithms based on genetic inheritance and the Darwinian struggle for survival. In our simulations, a 50 nm wide graded region, divided into five equally sized layers, was assumed. The relative concentrations of the materials within each layer were optimized to obtain the lowest V/J^0.5 ratio, where V is the applied voltage and J the current density. The best M-OLED architecture obtained by the genetic algorithm presented a V/J^0.5 ratio nearly 7% lower than the value reported in the literature. To check the experimental validity of the improved simulation results, two M-OLEDs with different architectures were fabricated by thermal deposition in a high-vacuum environment. The results of the comparison between simulation and experiment are presented and discussed.
Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.
Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S
2017-02-13
Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
Application of evolutionary algorithm for cast iron latent heat identification
Directory of Open Access Journals (Sweden)
J. Mendakiewicz
2008-12-01
Full Text Available In this paper the cast iron latent heat is assumed in the form of two components corresponding to the solidification of the austenite and eutectic phases. The aim of the investigation is to estimate the values of the austenite and eutectic latent heats on the basis of the cooling curve at the central point of the casting domain. This cooling curve has been obtained both from the direct problem solution and from experiment. To solve such an inverse problem, the evolutionary algorithm (EA) has been applied. The numerical computations have been done using the finite element method by means of the commercial software MSC MARC/MENTAT. In the final part of the paper examples of the identification are shown.
Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm
Oyama, Akira; Liou, Meng-Sing
2001-01-01
A design optimization method for turbopumps of cryogenic rocket engines has been developed. Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimizations. Performances of design candidates are evaluated by using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, a single stage centrifugal pump design and multistage pump design optimizations are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.
Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm
Directory of Open Access Journals (Sweden)
O. G. Monahov
2014-01-01
Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks for parallel supercomputer systems and as a structural basis of the small-world model in optical and neural networks. The paper presents a new class of parametrically described regular networks: hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller node degree: a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller node degree, its generating set is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by the evolutionary algorithm, which minimizes the diameter (average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable costs in the number of nodes and the number of connections, is demonstrated. Notable is the advantage of hypercirculant networks of dimension three over four- and higher-dimensional tori: optimizing hypercirculant networks of dimension three is more efficient than introducing an additional dimension in the corresponding toroidal structures. The paper also notes the better structural parameters of hypercirculant networks in comparison with iBT-networks previously
A New DG Multiobjective Optimization Method Based on an Improved Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Wanxing Sheng
2013-01-01
Full Text Available A distributed generation (DG) multiobjective optimization method based on an improved Pareto evolutionary algorithm is investigated in this paper. The improved Pareto evolutionary algorithm introduces a penalty factor into the objective function constraints, uses adaptive crossover and mutation operators in the evolutionary process, and incorporates a simulated annealing iterative process. The proposed algorithm is used to optimize DG injection models so as to maximize DG utilization while minimizing system loss and environmental pollution. A revised IEEE 33-bus system with multiple DG units was used to test the multiobjective optimization algorithm in a distribution power system. The proposed algorithm was implemented and compared with the strength Pareto evolutionary algorithm 2 (SPEA2), a particle swarm optimization (PSO) algorithm, and the nondominated sorting genetic algorithm II (NSGA-II). The comparison of the results demonstrates the validity and practicality of utilizing DG units in terms of economic dispatch and optimal operation in a distribution power system.
A hierarchical evolutionary algorithm for multiobjective optimization in IMRT.
Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark H
2010-09-01
The current inverse planning methods for intensity modulated radiation therapy (IMRT) are limited because they are not designed to explore the trade-offs between the competing objectives of tumor and normal tissues. The goal was to develop an efficient multiobjective optimization algorithm that was flexible enough to handle any form of objective function and that resulted in a set of Pareto optimal plans. A hierarchical evolutionary multiobjective algorithm designed to quickly generate a small diverse Pareto optimal set of IMRT plans that meet all clinical constraints and reflect the optimal trade-offs in any radiation therapy plan was developed. The top level of the hierarchical algorithm is a multiobjective evolutionary algorithm (MOEA). The genes of the individuals generated in the MOEA are the parameters that define the penalty function minimized during an accelerated deterministic IMRT optimization that represents the bottom level of the hierarchy. The MOEA incorporates clinical criteria to restrict the search space through protocol objectives and then uses Pareto optimality among the fitness objectives to select individuals. The population size is not fixed, but a specialized niche effect, domination advantage, is used to control the population and plan diversity. The number of fitness objectives is kept to a minimum for greater selective pressure, but the number of genes is expanded for flexibility that allows a better approximation of the Pareto front. The MOEA improvements were evaluated for two example prostate cases with one target and two organs at risk (OARs). The population of plans generated by the modified MOEA was closer to the Pareto front than populations of plans generated using a standard genetic algorithm package. Statistical significance of the method was established by compiling the results of 25 multiobjective optimizations using each method. From these sets of 12-15 plans, any random plan selected from a MOEA population had a 11.3% +/- 0
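The top-level MOEA selects individuals by Pareto optimality among the fitness objectives. A minimal helper for that selection step (all plan values below are illustrative; lower is assumed better for every objective):

```python
# Sketch of the Pareto-optimality selection used at the top level of a
# hierarchical MOEA: keep only plans not dominated by any other plan.
# The (tumor objective, OAR objective) pairs are made-up illustration data.

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

plans = [(3, 9), (5, 4), (7, 2), (6, 6), (4, 8)]
print(pareto_front(plans))   # (6, 6) is dominated by (5, 4) and drops out
```

In the actual algorithm each "plan" is produced by the deterministic IMRT optimization at the bottom of the hierarchy, and niching (domination advantage) additionally controls diversity.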
Taxon ordering in phylogenetic trees by means of evolutionary algorithms
Directory of Open Access Journals (Sweden)
Cerutti Francesco
2011-07-01
Full Text Available Abstract Background In a typical "left-to-right" phylogenetic tree, the vertical order of taxa is meaningless, as only the branch path between them reflects their degree of similarity. To make unresolved trees more informative, here we propose an innovative Evolutionary Algorithm (EA) method to search for the best graphical representation of unresolved trees, in order to give a biological meaning to the vertical order of taxa. Methods Starting from a West Nile virus phylogenetic tree, we evolved it in a (1+1)-EA by randomly rotating the internal nodes and selecting the tree with better fitness every generation. The fitness is a sum of genetic distances between the considered taxon and the r (radius) next taxa. After having set the radius to the best performance, we evolved the trees with (λ+μ)-EAs to study the influence of the population on the algorithm. Results The (1+1)-EA consistently outperformed a random search, and better results were obtained setting the radius to 8. The (λ+μ)-EAs performed as well as the (1+1)-EA, except for the largest population (1000+1000). Conclusions The trees after the evolution showed an improvement both in fitness (based on a genetic distance matrix, so that close taxa are actually genetically close) and in biological interpretation. Samples collected in the same state or year moved closer to each other, making the tree easier to interpret. Biological relationships between samples are also easier to observe.
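The (1+1)-EA idea can be sketched on a tiny example: evolve a vertical ordering of taxa so that genetically close taxa end up near each other, accepting a mutant whenever its fitness is no worse. Here the node-rotation move is replaced by a simple swap and the 4x4 distance matrix is made up, so this is only an illustration of the selection scheme, not of the tree representation itself.

```python
import random

# (1+1)-EA sketch: minimise, over orderings of taxa, the sum of genetic
# distances from each taxon to the next R taxa in the vertical order.
# Distance matrix and the swap move are our own simplifications.

D = [
    [0, 1, 4, 5],
    [1, 0, 3, 6],
    [4, 3, 0, 2],
    [5, 6, 2, 0],
]
R = 1   # radius: how many following taxa count toward the fitness

def fitness(order):
    n = len(order)
    return sum(D[order[i]][order[i + k]]
               for i in range(n) for k in range(1, R + 1) if i + k < n)

random.seed(3)
parent = [2, 0, 3, 1]
for _ in range(300):
    child = parent[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]    # mutate: swap two taxa
    if fitness(child) <= fitness(parent):      # (1+1) selection
        parent = child
print(parent, fitness(parent))
```

With a population, the same loop becomes a (λ+μ)-EA: generate λ mutants, keep the best μ of parents plus offspring.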
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multiobjective optimal design for turbomachinery using evolutionary algorithms. The work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multiobjective optimization problem and performed multidisciplinary and multiobjective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
Game Theory-Inspired Evolutionary Algorithm for Global Optimization
Directory of Open Access Journals (Sweden)
Guanci Yang
2017-09-01
Full Text Available Many approaches that model specific intelligent behaviors perform excellently in solving complex optimization problems, and game theory is widely recognized as an important tool in many fields. This paper introduces a game theory-inspired evolutionary algorithm for global optimization (GameEA). A formulation to estimate payoff expectations is provided, which is the mechanism that makes a player a rational decision-maker. GameEA has a single population (i.e., a set of players) and generates new offspring only through an imitation operator and a belief-learning operator. The imitation operator adopts learning strategies and actions from other players to improve competitiveness and applies these strategies to future games: one player updates its chromosome by strategically copying segments of gene sequences from a competitor. Belief learning refers to models in which a player adjusts its strategies, behavior or chromosomes by analyzing the current history information to improve solution quality. Experimental results on various classes of problems show that GameEA outperforms the four comparison algorithms on stability, robustness, and accuracy.
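The imitation operator as described, copying a segment of gene sequences from a competitor into the imitating player, can be sketched directly. Chromosome contents, lengths, and the uniform choice of segment boundaries below are our own illustration; the paper's operator additionally conditions the copy on estimated payoffs.

```python
import random

# Sketch of GameEA's imitation operator: a player updates its chromosome
# by copying a contiguous segment of genes from a competitor. The binary
# chromosomes and random segment choice are illustrative assumptions.

def imitate(player, competitor, rng):
    """Copy one random gene segment [i, j) from the competitor."""
    n = len(player)
    i, j = sorted(rng.sample(range(n + 1), 2))   # segment boundaries, i < j
    return player[:i] + competitor[i:j] + player[j:]

rng = random.Random(7)
player     = [0, 0, 0, 0, 0, 0, 0, 0]
competitor = [1, 1, 1, 1, 1, 1, 1, 1]
child = imitate(player, competitor, rng)
print(child)   # player's genes with one competitor segment spliced in
```

In the full algorithm the competitor would be a player with a higher estimated payoff, so imitation biases the population toward successful strategies.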
Aguirre, Juan; Giannoula, Alexia; Minagawa, Taisuke; Funk, Lutz; Turon, Pau; Durduran, Turgut
2013-01-01
A model based reconstruction algorithm that exploits translational symmetries for photoacoustic microscopy to drastically reduce the memory cost is presented. The memory size needed to store the model matrix is independent of the number of acquisitions at different positions. This helps us to overcome one of the main limitations of previous algorithms. Furthermore, using the algebraic reconstruction technique and building the model matrix "on the fly", we have obtained fast reconstructions of simulated and experimental data on both two- and three-dimensional grids using a traditional dark field photoacoustic microscope and a standard personal computer.
Evaluation of a commercial Model Based Iterative reconstruction algorithm in computed tomography.
Paruccini, Nicoletta; Villa, Raffaele; Pasquali, Claudia; Spadavecchia, Chiara; Baglivi, Antonia; Crespi, Andrea
2017-09-01
Iterative reconstruction algorithms have been introduced into clinical practice to obtain dose reduction without compromising diagnostic performance. The aim was to investigate the commercial Model Based IMR algorithm in terms of patient dose and image quality, using both standard Fourier and alternative metrics. A Catphan phantom, a commercial density phantom and a cylindrical water-filled phantom were scanned, varying both CTDIvol and reconstruction thickness. Images were then reconstructed with Filtered Back Projection (FBP) and with both statistical (iDose) and Model Based (IMR) iterative reconstruction algorithms. Spatial resolution was evaluated with the Modulation Transfer Function and the Target Transfer Function. Noise reduction was investigated with the Standard Deviation, and its behaviour was analysed with the 3D and 2D Noise Power Spectrum. Blur and Low Contrast Detectability were investigated, and patient dose indexes were collected and analysed. All image-quality results have been compared to standard FBP reconstructions. Model Based IMR significantly improves the Modulation Transfer Function, with an increase between 12% and 64%. Target Transfer Function curves confirm this trend for high-density objects, while Blur shows a sharpness reduction for low-density details. Model Based IMR shows a noise reduction between 44% and 66% and a change in noise power spectrum behaviour. Low Contrast Detectability curves show an average improvement of 35-45%; these results are compatible with an achievable reduction of 50% of CTDIvol. A dose reduction between 25% and 35% is confirmed by median values of CTDIvol. IMR produces an improvement in image quality and a dose reduction. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Lala Febriana
2001-01-01
Full Text Available This research offers an alternative way to build a production schedule using an evolutionary algorithm. The objective function is to minimize the production makespan. The Shortest Processing Time (SPT) and Longest Processing Time (LPT) methods are used as initial solutions. The algorithm was implemented at a houseware factory, and the results show that the final solution has a makespan 26.74% smaller than the initial solution. Keywords: evolutionary algorithm, scheduling.
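The seeding idea above — build an initial schedule with an SPT/LPT priority rule, then let evolutionary search reduce the makespan — can be sketched as follows. This is a minimal illustration for identical parallel machines, not the paper's factory model; all function names and parameters are ours:

```python
import random

def makespan(assign, times, m):
    """Makespan = load of the most heavily loaded machine."""
    loads = [0.0] * m
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def list_schedule(times, m, order):
    """Greedy list scheduling: place each job on the least-loaded machine."""
    loads = [0.0] * m
    assign = [0] * len(times)
    for job in order:
        mach = loads.index(min(loads))
        assign[job] = mach
        loads[mach] += times[job]
    return assign

def evolve(times, m, generations=500, seed=0):
    """(1+1)-style evolutionary search seeded with an LPT schedule."""
    rng = random.Random(seed)
    order = sorted(range(len(times)), key=lambda j: -times[j])  # LPT order
    best = list_schedule(times, m, order)
    best_cost = makespan(best, times, m)
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(len(child))] = rng.randrange(m)  # move one job
        cost = makespan(child, times, m)
        if cost <= best_cost:  # accept if no worse
            best, best_cost = child, cost
    return best, best_cost
```

On processing times [5, 3, 8, 2, 7, 4] with two machines, the search reaches the optimal makespan of 15 (total work 29 cannot be split more evenly).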
Optimum Design of Thin Wideband Multilayer Electromagnetic Shield Using Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
K. S. Kola
2017-05-01
Full Text Available This paper describes a method for the optimum design of a multilayer perforated electromagnetic shield using evolutionary algorithms, namely the Particle Swarm Optimization algorithm (PSO) and the Genetic Algorithm (GA). Different parameters, inherently conflicting in nature, corresponding to the multilayer structure of the electromagnetic shield have been considered. The goal is to minimize the overall mass of the shield with respect to its shielding effectiveness and cost. Three different models are considered and synthesized using evolutionary algorithms. Numerical optimal results for each model using the different algorithms are presented and compared with each other to establish the effectiveness of the proposed design method.
DEFF Research Database (Denmark)
Neumann, Frank; Witt, Carsten
2015-01-01
Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance.
Marwati, Rini; Yulianti, Kartika; Pangestu, Herny Wulandari
2016-02-01
A fuzzy evolutionary algorithm is an integration of an evolutionary algorithm and a fuzzy system. In this paper, we present an application of a genetic algorithm within a fuzzy evolutionary algorithm to detect and resolve chromosome conflicts. A chromosome conflict is identified by the existence of any two genes in one chromosome that have the same values as two genes in another chromosome. Based on this approach, we construct an algorithm to solve a lecture scheduling problem. Time codes, lecture codes, lecturer codes, and room codes are defined as genes, which are collected to form chromosomes; a conflicting schedule thus becomes a chromosome conflict. Implemented as a Delphi program, the results show that the conflicted lecture scheduling problem is solvable by this algorithm.
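The conflict test described — two genes in one chromosome matching two genes in another (say, the same time slot and the same room) — can be sketched like this. The gene encoding and helper names below are our assumptions for illustration, not the paper's code:

```python
def genes_in_common(c1, c2):
    """Count gene positions where two chromosomes hold identical codes."""
    return sum(1 for g1, g2 in zip(c1, c2) if g1 == g2)

def in_conflict(c1, c2):
    # Conflict as described in the abstract: at least two genes in one
    # chromosome share values with two genes in another chromosome
    # (e.g., same time slot AND same room => a double booking).
    return genes_in_common(c1, c2) >= 2

def find_conflicts(population):
    """Return index pairs of all conflicting chromosomes."""
    return [(i, j) for i in range(len(population))
            for j in range(i + 1, len(population))
            if in_conflict(population[i], population[j])]
```

Here a chromosome is a list of codes such as [time, room, lecturer, lecture]; the genetic algorithm would then mutate one of the clashing genes until find_conflicts returns an empty list.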
Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools
Ding, Steven X
2013-01-01
Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: · new material on fault isolation and identification, and fault detection in feedback control loops; · extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; · addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and · enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...
Strength Pareto Evolutionary Algorithm using Self-Organizing Data Analysis Techniques
Directory of Open Access Journals (Sweden)
Ionut Balan
2015-03-01
Full Text Available Multiobjective optimization is widely used to solve problems from a variety of areas. A set of algorithms, most of them based on evolutionary techniques, has been developed to solve such problems. One algorithm in this class which gives quite good results is SPEA2, the method on which the algorithm proposed in this paper is based. The results in this paper are obtained by running the two algorithms on a flow-shop problem.
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
In recent years, optimization algorithms have received increasing attention from the research community as well as industry. In the area of evolutionary computation (EC), inspiration for optimization algorithms originates in Darwin’s ideas of evolution and survival of the fittest. Such algorithms simulate an evolutionary process where the goal is to evolve solutions by means of crossover, mutation, and selection based on their quality (fitness) with respect to the optimization problem at hand. Evolutionary algorithms (EAs) are highly relevant for industrial applications, because they are capable … optimization. In addition to general investigations in these areas, I introduce a number of algorithms and demonstrate their potential on real-world problems in system identification and control. Furthermore, I investigate dynamic optimization problems in the context of the three fundamental areas as well…
A hidden Markov model-based algorithm for identifying tumour subtype using array CGH data
Directory of Open Access Journals (Sweden)
Zhang Ke
2011-12-01
Full Text Available Abstract Background The recent advancement in array CGH (aCGH research has significantly improved tumor identification using DNA copy number data. A number of unsupervised learning methods have been proposed for clustering aCGH samples. Two of the major challenges for developing aCGH sample clustering are the high spatial correlation between aCGH markers and the low computing efficiency. A mixture hidden Markov model based algorithm was developed to address these two challenges. Results The hidden Markov model (HMM was used to model the spatial correlation between aCGH markers. A fast clustering algorithm was implemented and real data analysis on glioma aCGH data has shown that it converges to the optimal cluster rapidly and the computation time is proportional to the sample size. Simulation results showed that this HMM based clustering (HMMC method has a substantially lower error rate than NMF clustering. The HMMC results for glioma data were significantly associated with clinical outcomes. Conclusions We have developed a fast clustering algorithm to identify tumor subtypes based on DNA copy number aberrations. The performance of the proposed HMMC method has been evaluated using both simulated and real aCGH data. The software for HMMC in both R and C++ is available in ND INBRE website http://ndinbre.org/programs/bioinformatics.php.
Can't See the Forest: Using an Evolutionary Algorithm to Produce an Animated Artwork
Trist, Karen; Ciesielski, Vic; Barile, Perry
We describe an artist's journey of working with an evolutionary algorithm to create an artwork suitable for exhibition in a gallery. Software based on the evolutionary algorithm produces animations which engage the viewer with a target image slowly emerging from a random collection of greyscale lines. The artwork consists of a grid of movies of eucalyptus tree targets. Each movie resolves with different aesthetic qualities, tempo and energy. The artist exercises creative control by choice of target and values for evolutionary and drawing parameters.
Tydrichova, Magdalena
2017-01-01
In this project, various available multi-objective optimization evolutionary algorithms were compared with respect to their performance and the distribution of their solutions. The main goal was to select the most suitable algorithms for applications in cancer hadron therapy planning. For our purposes, a complex testing and analysis software suite was developed. Many conclusions and hypotheses have also been formulated for further research.
Maier, H.R.; Kapelan, Z.; Kasprzyk, J.; Kollat, J.; Matott, L.S.; Cunha, M.C.; Dandy, G.C.; Gibbs, M.S.; Keedwell, E.; Marchi, A.; Ostfeld, A.; Savic, D.; Solomatine, D.P.; Vrugt, J.A.; Zecchin, A.C.; Minsker, B.S.; Barbour, E.J.; Kuczera, G.; Pasha, F.; Castelletti, A.; Giuliani, M.; Reed, P.M.
2014-01-01
The development and application of evolutionary algorithms (EAs) and other metaheuristics for the optimisation of water resources systems has been an active research field for over two decades. Research to date has emphasized algorithmic improvements and individual applications in specific areas
Expanding from discrete Cartesian to permutation Gene-pool Optimal Mixing Evolutionary Algorithms
P.A.N. Bosman (Peter); N.H. Luong (Ngoc Hoang); D. Thierens (Dirk)
2016-01-01
textabstractThe recently introduced Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) family, which includes the Linkage Tree Genetic Algorithm (LTGA), has been shown to scale excellently on a variety of discrete, Cartesian-space, optimization problems. This paper shows that GOMEA can quite
DEFF Research Database (Denmark)
Vesterstrøm, Jacob Svaneborg; Thomsen, Rene
2004-01-01
Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades, offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance… outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA.
Performance comparison of some evolutionary algorithms on job shop scheduling problems
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job shop scheduling is a state space search problem belonging to the NP-hard category due to its complexity and combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms, and the capabilities of each algorithm in solving job shop scheduling problems are outlined.
Acoustic Environments: Applying Evolutionary Algorithms for Sound based Morphogenesis
DEFF Research Database (Denmark)
Foged, Isak Worre; Pasold, Anke; Jensen, Mads Brath
2012-01-01
The research investigates the application of evolutionary computation in relation to sound-based morphogenesis. It does so by using the Sabine equation as a performance benchmark in the development of the spatial volume and reflectors, effectively creating the architectural expression as a whole…
New advances in spatial network modelling: towards evolutionary algorithms
Reggiani, A; Nijkamp, P.; Sabella, E.
2001-01-01
This paper discusses analytical advances in evolutionary methods with a view towards their possible applications in the space-economy. For this purpose, we present a brief overview and illustration of models actually available in the spatial sciences which attempt to map the complex patterns of
Solution of optimal power flow using evolutionary-based algorithms
African Journals Online (AJOL)
Due to the drawbacks in classical methods, the artificial intelligence (AI) techniques have been introduced to solve the OPF problem. The AI-based optimization has become an important approach for determining the global optimal solution. One of the most important intelligent search techniques is called evolutionary ...
Interactive evolutionary algorithms and data mining for drug design
Lameijer, Eric Marcel Wubbo
2010-01-01
One of the main problems of drug design is that it is quite hard to discover compounds that have all the required properties to become a drug (efficacy against the disease, good biological availability, low toxicity). This thesis describes the use of data mining and interactive evolutionary
Comparison of evolutionary computation algorithms for solving bi ...
Indian Academy of Sciences (India)
are well-suited for multiobjective task scheduling in a heterogeneous environment. The two Multi-Objective Evolutionary … A task without any parent is called an entry task and a task without any child is called an exit task. In the Directed Acyclic … The Computer Journal 48(3): 300–314. Dongarra J J, Jeannot E, Saule E, Shi …
A hybrid multi-objective evolutionary algorithm approach for ...
Indian Academy of Sciences (India)
V K MANUPATI
1 Department of Manufacturing, School of Mechanical Engineering, VIT University, Vellore, India. 2 Department of Industrial and Systems Engineering, The ... algorithm has been compared to that of multi-objective particle swarm optimization (MOPSO) and conventional non-dominated sorting genetic algorithm (CNSGA-II), ...
A Parameterised Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms.
Corus, Dogan; Lehre, Per Kristian; Neumann, Frank; Pourhassan, Mojgan
2016-01-01
Bi-level optimisation problems have gained increasing interest in the field of combinatorial optimisation in recent years. In this paper, we analyse the runtime of some evolutionary algorithms for bi-level optimisation problems. We examine two NP-hard problems, the generalised minimum spanning tree problem and the generalised travelling salesperson problem, in the context of parameterised complexity. For the generalised minimum spanning tree problem, we analyse, with respect to the number of clusters, the two approaches presented by Hu and Raidl (2012), which are distinguished from each other by their chosen representations of possible solutions. Our results show that a (1+1) evolutionary algorithm working with the spanning nodes representation is not a fixed-parameter evolutionary algorithm for the problem, whereas the problem can be solved in fixed-parameter time with the global structure representation. We present hard instances for each approach and show that the two approaches are highly complementary by proving that they solve each other's hard instances very efficiently. For the generalised travelling salesperson problem, we analyse the problem with respect to the number of clusters in the problem instance. Our results show that a (1+1) evolutionary algorithm working with the global structure representation is a fixed-parameter evolutionary algorithm for the problem.
Probing the Structure of Kepler ZZ Ceti Stars with Full Evolutionary Models-based Asteroseismology
Romero, Alejandra D.; Córsico, A. H.; Castanheira, B. G.; De Gerónimo, F. C.; Kepler, S. O.; Koester, D.; Kawka, A.; Althaus, L. G.; Hermes, J. J.; Bonato, C.; Gianninas, A.
2017-12-01
We present an asteroseismological analysis of four ZZ Ceti stars observed with the Kepler spacecraft: GD 1212, SDSS J113655.17+040952.6, KIC 11911480, and KIC 4552982, based on a grid of full evolutionary models of DA white dwarf (WD) stars. We employ a grid of carbon–oxygen core models, characterized by a detailed and consistent chemical inner profile for the core and the envelope. In addition to the observed periods, we take into account other information from the observational data, such as amplitudes, rotational splittings, and period spacing, as well as photometry and spectroscopy. For each star, we present an asteroseismological model that closely reproduces their observed properties. The asteroseismological stellar masses and effective temperatures of the target stars are (0.632 ± 0.027 M⊙, 10,737 ± 73 K) for GD 1212, (0.745 ± 0.007 M⊙, 11,110 ± 69 K) for KIC 4552982, (0.5480 ± 0.01 M⊙, 12,721 ± 228 K) for KIC 11911480, and (0.570 ± 0.01 M⊙, 12,060 ± 300 K) for SDSS J113655.17+040952.6. In general, the asteroseismological values are in good agreement with the spectroscopy. For KIC 11911480 and SDSS J113655.17+040952.6 we derive a similar seismological mass, but the hydrogen envelope is an order of magnitude thinner for SDSS J113655.17+040952.6, which is part of a binary system and went through a common envelope phase.
Optimization of aeroelastic composite structures using evolutionary algorithms
Manan, A.; Vio, G. A.; Harmin, M. Y.; Cooper, J. E.
2010-02-01
The flutter/divergence speed of a simple rectangular composite wing is maximized through the use of different ply orientations. Four different biologically inspired optimization algorithms (binary genetic algorithm, continuous genetic algorithm, particle swarm optimization, and ant colony optimization) and a simple meta-modeling approach are employed statistically on the same problem set. In terms of the best flutter speed, it was found that similar results were obtained using all of the methods, although the continuous methods gave better answers than the discrete methods. When the results were considered in terms of the statistical variation between different solutions, ant colony optimization gave estimates with much less scatter.
An Evolutionary Algorithm to Generate Ellipsoid Detectors for Negative Selection
2005-03-21
it is usually destroyed through apoptosis. This process is easily mapped to an AIS. Each immature (newly generated) T-cell is compared to the self-MHC… poor-affinity B-cells die through apoptosis (cell death) in the germinal centers (the places where the B-cells are undergoing affinity maturation). A… Security and Privacy, 133–145. 1999. 95. Whitley, D., K. Mathias, S. Rana, and J. Dzubera. “Evaluating Evolutionary Algorithms”. Artificial Intelligence
Sutton, Andrew M; Neumann, Frank; Nallaperuma, Samadhi
2014-01-01
Parameterized runtime analysis seeks to understand the influence of problem structure on algorithmic runtime. In this paper, we contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We investigate the structural properties of TSP instances that influence the optimization process of evolutionary algorithms and use this information to bound their runtime. We analyze the runtime as a function of the number of inner points k. In the first part of the paper, we study a [Formula: see text] EA in a strictly black box setting and show that it can solve the Euclidean TSP in expected time [Formula: see text] where A is a function of the minimum angle [Formula: see text] between any three points. Based on insights provided by the analysis, we improve this upper bound by introducing a mixed mutation strategy that incorporates both 2-opt moves and permutation jumps. This strategy improves the upper bound to [Formula: see text]. In the second part of the paper, we use the information gained in the analysis to incorporate domain knowledge to design two fixed-parameter tractable (FPT) evolutionary algorithms for the planar Euclidean TSP. We first develop a [Formula: see text] EA based on an analysis by M. Theile, 2009, "Exact solutions to the traveling salesperson problem by a population-based evolutionary algorithm," Lecture notes in computer science, Vol. 5482 (pp. 145-155), that solves the TSP with k inner points in [Formula: see text] generations with probability [Formula: see text]. We then design a [Formula: see text] EA that incorporates a dynamic programming step into the fitness evaluation. We prove that a variant of this evolutionary algorithm using 2-opt mutation solves the problem after [Formula: see text] steps in expectation with a cost of [Formula: see text] for each fitness evaluation.
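The mixed mutation strategy combining 2-opt moves and permutation jumps can be sketched as follows. The paper analyzes these operators theoretically rather than prescribing an implementation, so the naming and the 50/50 operator choice below are our own illustration:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[k]], pts[tour[(k + 1) % len(tour)]])
               for k in range(len(tour)))

def two_opt(tour, i, j):
    """Reverse the segment tour[i:j] (a classical 2-opt move)."""
    return tour[:i] + tour[i:j][::-1] + tour[j:]

def jump(tour, i, j):
    """Remove the city at position i and reinsert it at position j."""
    t = tour[:i] + tour[i + 1:]
    return t[:j] + [tour[i]] + t[j:]

def mixed_mutation(tour, rng):
    """Apply either a 2-opt move or a permutation jump, chosen at random."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    return two_opt(tour, i, j + 1) if rng.random() < 0.5 else jump(tour, i, j)
```

For instance, a 2-opt move that uncrosses the tour 0-1-2-3 over the unit square corners shortens it from 2 + 2√2 to 4.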
EvoOligo: oligonucleotide probe design with multiobjective evolutionary algorithms.
Shin, Soo-Yong; Lee, In-Hee; Cho, Young-Min; Yang, Kyung-Ae; Zhang, Byoung-Tak
2009-12-01
Probe design is one of the most important tasks in successful deoxyribonucleic acid microarray experiments. We propose a multiobjective evolutionary optimization method for oligonucleotide probe design based on the multiobjective nature of the probe design problem. The proposed multiobjective evolutionary approach has several distinguished features, compared with previous methods. First, the evolutionary approach can find better probe sets than existing simple filtering methods with fixed threshold values. Second, the multiobjective approach can easily incorporate the user's custom criteria or change the existing criteria. Third, our approach tries to optimize the combination of probes for the given set of genes, in contrast to other tools that independently search each gene for qualifying probes. Lastly, the multiobjective optimization method provides various sets of probe combinations, among which the user can choose, depending on the target application. The proposed method is implemented as a platform called EvoOligo and is available for service on the web. We test the performance of EvoOligo by designing probe sets for 19 types of Human Papillomavirus and 52 genes in the Arabidopsis Calmodulin multigene family. The design results from EvoOligo are proven to be superior to those from well-known existing probe design tools, such as OligoArray and OligoWiz.
Topic Evolutionary Tweet Stream Clustering Algorithm and TCV Rank Summarization
National Research Council Canada - National Science Library
K.Selvaraj; S.Balaji
2015-01-01
... and more. Our proposed work consists of three components: first, tweet stream clustering to cluster tweets using the k-means clustering algorithm; and second, a tweet cluster vector technique to generate rank summarization using...
Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm
2009-03-10
… Algorithm (MOHO) with Automatic Switching; Two-Objective Hybrid Optimization with a Response Surface; Response Surfaces using Wavelet-Based Neural… optimization. Results presented in this report confirm that MOHO is one such optimization concept that works. Multi-dimensional response surfaces… (MOHO) With Automatic Switching Among Individual Search Algorithms: The MOHO software [1,2,3] that was developed as a part of this effort is a high…
A multiobjective evolutionary algorithm to find community structures based on affinity propagation
Shang, Ronghua; Luo, Shuang; Zhang, Weitong; Stolkin, Rustam; Jiao, Licheng
2016-07-01
Community detection plays an important role in reflecting and understanding the topological structure of complex networks, and can be used to help mine the potential information in networks. This paper presents a Multiobjective Evolutionary Algorithm based on Affinity Propagation (APMOEA) which improves the accuracy of community detection. Firstly, APMOEA takes the method of affinity propagation (AP) to initially divide the network. To accelerate its convergence, the multiobjective evolutionary algorithm selects nondominated solutions from the preliminary partitioning results as its initial population. Secondly, the multiobjective evolutionary algorithm finds solutions approximating the true Pareto optimal front by constantly selecting nondominated solutions from the population after crossover and mutation in iterations, which overcomes the tendency of data clustering methods to fall into local optima. Finally, APMOEA uses an elitist strategy, called "external archive", to prevent degeneration during the process of searching using the multiobjective evolutionary algorithm. According to this strategy, the preliminary partitioning results obtained by AP are archived and participate in the final selection of Pareto-optimal solutions. Experiments on benchmark test data, including both computer-generated networks and eight real-world networks, show that the proposed algorithm achieves more accurate results and has faster convergence speed compared with seven other state-of-the-art algorithms.
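The seeding step above — keeping only nondominated solutions from the AP partitioning as the evolutionary algorithm's initial population — rests on a Pareto-dominance filter. For a two-objective minimization problem it can be sketched as follows (illustrative Python, not the authors' code):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Keep only solutions not dominated by any other solution."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Solutions with identical objective vectors do not dominate each other, so duplicates survive the filter; an archive implementation would typically also deduplicate.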
DEFF Research Database (Denmark)
Wang, Yong; Cai, Zixing; Zhou, Yuren
2009-01-01
A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on the current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive…
Motion Planning for Humanoid Robot Based on Hybrid Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Zhong Qiu-Bo
2010-09-01
Full Text Available In this paper, an online gait control system is designed for the walking-up-stairs movement according to the features of a humanoid robot. A hybrid evolutionary approach, based on a neural network optimized by particle swarm optimization, is employed for offline training of the movement process, and a gait of optimal stability is generated. Additionally, through embedded monocular vision, on-site environmental information is collected as neural network input, and the necessary joint trajectories are output for the movement. Simulations and experiments verify the efficiency of the method.
Dash, Subhransu; Panigrahi, Bijaya
2015-01-01
The book is a collection of high-quality peer-reviewed research papers presented in the Proceedings of the International Conference on Artificial Intelligence and Evolutionary Algorithms in Engineering Systems (ICAEES 2014), held at Noorul Islam Centre for Higher Education, Kumaracoil, India. These research papers provide the latest developments in the broad area of artificial intelligence and evolutionary algorithms in engineering systems. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques, and presents invited papers from the inventors/originators of new applications and advanced technologies.
Bioinspired evolutionary algorithm based for improving network coverage in wireless sensor networks.
Abbasi, Mohammadjavad; Bin Abd Latiff, Muhammad Shafie; Chizari, Hassan
2014-01-01
Wireless sensor networks (WSNs) consist of sensor nodes, each able to monitor a physical area and send collected information to the base station for further analysis. A key issue in WSNs is detection and coverage of the target area, which is typically provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes several scenarios for applying sensor node movement to improve network coverage based on a bioinspired evolutionary algorithm, and explains the concerns and objectives of controlling sensor node coverage. We discuss the area coverage and target detection model produced by the evolutionary algorithm.
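The idea of moving sensor nodes to improve coverage can be sketched with a toy (1+1)-style hill climber over sensor positions. This is a hedged illustration with our own fitness function and names; the paper's bioinspired algorithm is more elaborate:

```python
import math
import random

def coverage(sensors, targets, r):
    """Fraction of target points within sensing radius r of some sensor."""
    return sum(any(math.dist(s, t) <= r for s in sensors)
               for t in targets) / len(targets)

def improve(sensors, targets, r, iters=300, step=0.5, seed=1):
    """Hill climber: nudge one sensor at a time, keep strict improvements."""
    rng = random.Random(seed)
    best = [list(s) for s in sensors]
    best_fit = coverage(best, targets, r)
    for _ in range(iters):
        cand = [s[:] for s in best]
        s = rng.choice(cand)                # pick one sensor to move
        s[0] += rng.uniform(-step, step)
        s[1] += rng.uniform(-step, step)
        fit = coverage(cand, targets, r)
        if fit > best_fit:                  # accept only strict improvement
            best, best_fit = cand, fit
    return best, best_fit
```

For four targets on the unit-square corners and one sensor of radius 1 starting at a corner (covering 3 of 4 targets), the climber moves the sensor toward the centre, where all four targets fall within range.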
ITO-based evolutionary algorithm to solve traveling salesman problem
Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei
2014-03-01
In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have so far been successfully applied to the TSP; however, ITO, as a member of this family, still needs to be demonstrated on it. Starting from the design of the key operators, which include the move operator, the wave operator, etc., the ITO-based method for the TSP is presented. Moreover, the algorithm's performance under different parameter sets and the maintenance of population diversity information are also studied.
Towards Automatic Controller Design using Multi-Objective Evolutionary Algorithms
DEFF Research Database (Denmark)
Pedersen, Gerulf
…, as the foundation for achieving the desired goal. While working with the algorithm, some issues arose which limited the use of the algorithm for unknown problems. These issues included the relative scale of the used fitness functions and the distribution of solutions on the optimal Pareto front. Some work has previously been done in this area using methods based on relative angles, utility functions, and projections, and that work is what is extended in this thesis in order to cover a wider range of problems. This allows the NSGA-II to be transformed into a "black-box" optimization tool, which can be used…
Energy Technology Data Exchange (ETDEWEB)
Almansouri, Hani [Purdue University; Johnson, Christi R [ORNL; Clayton, Dwight A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL
2017-01-01
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
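The MBIR recipe described above — a forward model plus a prior combined into a cost function that is then optimized — can be illustrated generically. The sketch below uses a linear Gaussian forward model and a Tikhonov (quadratic) prior as stand-ins for the ultrasonic forward model and concrete prior of the actual system; all names and parameters are ours:

```python
import numpy as np

def mbir_reconstruct(A, y, lam=0.1, iters=500, step=0.1):
    """Minimize the MBIR cost  c(x) = ||y - A x||^2 + lam * ||x||^2
    by plain gradient descent. The step size must be small enough
    (below 1/L, with L the Lipschitz constant of the gradient)
    for the iteration to converge."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient = forward-model (data-fit) term + prior term
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x
        x = x - step * grad
    return x
```

For A = I the iteration converges to y / (1 + lam), the closed-form ridge-regression solution, which makes the sketch easy to sanity-check.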
Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2017-02-01
Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Westin, Robin; Alm Carlsson, Gudrun
2014-03-01
Better knowledge of the elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model-based iterative image reconstruction algorithm, DIRA, has been developed by the authors to automatically decompose patient tissues into two or three base components via dual-energy computed tomography. The performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with an accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of the linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate the surrounding soft tissues.
Evolutionary algorithm for automatic detection of blood vessel shapes
Kutics, Andrea
1996-04-01
Automatic detection of blood vessel shapes located in the skin has great diagnostic importance. In this work, an evolutionary approach operating on morphological operator and operation structures is proposed for determining the shape and network of blood vessels located in the upper skin layers. A population of individuals comprising morphological structures is generated. A two-dimensional, queue-like data representation of individuals is applied in order to represent appropriately the connectivity constraints originating in the two-dimensional nature of the structuring elements. Two-dimensional crossover and mutation-type manipulation operations are carried out on selected elements of the population. Unlike the usual techniques, our approach imposes no constraints on background and smoothness, as no matched filter or linear operator is applied. No a priori knowledge of the vessel shape is necessary either, owing to the evolutionary method. Also unlike the usual imaging techniques, which mainly use angiograms as input, in this work infrared-filtered images taken by a CCD camera are used to investigate the blood vessels of broad skin areas. The method is implemented in parallel on a lattice network of transputers, resulting in significantly decreased processing time compared to the usual techniques.
Optimal Scheduling for Retrieval Jobs in Double-Deep AS/RS by Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Kuo-Yang Wu
2013-01-01
Full Text Available We investigate the optimal scheduling of retrieval jobs for double-deep-type Automated Storage and Retrieval Systems (AS/RS) in the Flexible Manufacturing System (FMS) used in modern industrial production. Three types of evolutionary algorithms, the Genetic Algorithm (GA), the Immune Genetic Algorithm (IGA), and the Particle Swarm Optimization (PSO) algorithm, are implemented to obtain the optimal assignments. The objective is to minimize the working distance, that is, the shortest retrieval time travelled by the Storage and Retrieval (S/R) machine. Simulation results and comparisons show the advantages and feasibility of the proposed methods.
Xie, Huayang; Zhang, Mengjie
Artificial Intelligence, volume 170, number 11, pages 953-983, 2006, published a paper titled "Backward-chaining evolutionary algorithm". It introduced two fitness-evaluation-saving algorithms built on top of standard tournament selection. One algorithm is named the Efficient Macro-selection Evolutionary Algorithm (EMS-EA) and the other Backward-chaining EA (BC-EA). Both algorithms were claimed to provide considerable fitness evaluation savings; BC-EA in particular was claimed to be highly efficient for hard and complex problems which require very large populations. This paper provides an evaluation and analysis of the two algorithms in terms of their feasibility and their capability to reduce the fitness evaluation cost. The results show that BC-EA provides computational savings only in unusual situations, where the given problem can be solved by an evolutionary algorithm using a very small tournament size, or a large tournament size but a very large population and a very small number of generations. Otherwise, the saving capability of BC-EA is the same as that of EMS-EA. Furthermore, the feasibility of BC-EA is limited because the two important assumptions that make it work rarely hold.
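The evaluation-saving idea both algorithms build on can be illustrated independently of the paper's specifics: under tournament selection, only individuals actually drawn into some tournament ever need a fitness evaluation, so evaluations of never-sampled individuals can be skipped. The toy fitness and all parameters below are assumptions for illustration, not the EMS-EA or BC-EA implementations.

```python
import random

# Lazy fitness evaluation under tournament selection: draw all tournament
# memberships first, then evaluate only the sampled individuals.
random.seed(1)

pop_size, tournaments, tour_size = 100, 100, 2
population = [random.getrandbits(16) for _ in range(pop_size)]

def fitness(bits):                       # toy fitness: population count of 1s
    return bin(bits).count("1")

draws = [[random.randrange(pop_size) for _ in range(tour_size)]
         for _ in range(tournaments)]
sampled = {i for t in draws for i in t}  # individuals that actually compete
cache = {i: fitness(population[i]) for i in sampled}   # evaluations paid
winners = [max(t, key=lambda i: cache[i]) for t in draws]

saved = pop_size - len(sampled)          # evaluations avoided this generation
```

With small tournament sizes a noticeable fraction of the population is never sampled, which is exactly the regime where such schemes pay off.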
A Comparative Study between Migration and Pair-Swap on Quantum-Inspired Evolutionary Algorithm
Imabeppu, Takahiro; Ono, Satoshi; Morishige, Ryota; Kurose, Motoyoshi; Nakayama, Shigeru
The Quantum-inspired Evolutionary Algorithm (QEA) has been proposed as a stochastic evolutionary computation algorithm, not a quantum algorithm. The authors have proposed the Quantum-inspired Evolutionary Algorithm based on Pair Swap (QEAPS), which uses a pair-swap operator and does not group individuals, in order to simplify QEA and reduce its parameters. QEA and QEAPS imitate quantum computation, using quantum bits as genes and superposition states. QEAPS has shown better search performance than QEA on the knapsack problem, while eliminating the parameters for immigration intervals and number of groups. However, QEAPS still has one parameter in common with QEA, the rotation angle unit, which is uncommon among other evolutionary computation algorithms. The rotation angle unit deeply affects the control of exploitation and exploration in QEA, but it has been unclear how this parameter influences the behavior of QEAPS. This paper aims to show that QEAPS involves few parameters and that even those parameters can be adjusted easily. Experimental results on the knapsack problem and the number partitioning problem, which have different characteristics, show that QEAPS is competitive with other metaheuristics in search performance, and that QEAPS is robust against the parameter configuration and problem characteristics.
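A minimal sketch of the Q-bit machinery the two algorithms share may help; this is a simplified real-angle formulation with assumed parameters, not the authors' code. Each gene is an angle theta, bit 1 is observed with probability sin(theta)^2, and the rotation angle unit delta pulls theta toward the best solution found so far.

```python
import math, random

# Q-bit genes as angles; repeated rotation-gate updates toward a target.
random.seed(4)

n, delta = 20, 0.05 * math.pi
thetas = [math.pi / 4] * n            # uniform superposition: P(1) = 0.5
best = [1] * n                        # pretend the best-so-far is all ones

def observe(thetas):                  # collapse each Q-bit to a classical bit
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

for _ in range(30):                   # rotate each angle toward `best`
    thetas = [min(t + delta, math.pi / 2) if b == 1 else max(t - delta, 0.0)
              for t, b in zip(thetas, best)]

p_one = math.sin(thetas[0]) ** 2      # probability of observing 1 afterwards
```

The size of delta directly controls how fast the observation probabilities drift toward the incumbent, which is why this single parameter governs the exploitation/exploration balance.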
A hybrid multi-objective evolutionary algorithm approach for ...
Indian Academy of Sciences (India)
The performance of the proposed multi-objective AI-NSGA-II algorithm has been compared to that of multi-objective particle swarm optimization (MOPSO) and ... Department of Manufacturing, School of Mechanical Engineering, VIT University, Vellore, India; Department of Industrial and Systems Engineering, The Hong Kong ...
Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency
Directory of Open Access Journals (Sweden)
M. K. Sakharov
2017-01-01
Full Text Available In practically significant global optimization problems, the objective function is often high-dimensional, computationally expensive, and has a nontrivial landscape. Studies show that a single optimization method is often not enough to solve such problems efficiently; hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field is memetic algorithms (MA), which can be viewed as a synergistic combination of population-based search for a global optimum with procedures for local refinement of solutions (memes). Since there are relatively few theoretical studies concerning which MA configuration is advisable for black-box optimization problems, many researchers turn to adaptive algorithms that select the most efficient local optimization methods for particular domains of the search space. This article proposes a multi-memetic modification of the simple SMEC algorithm using random hyper-heuristics. It presents the software implementation of the algorithm and the memes used (the Nelder-Mead method, random search on a hyper-sphere surface, and the Hooke-Jeeves method), and reports a comparative study of the efficiency of the proposed algorithm depending on the set and number of memes. The study was carried out using the Rastrigin, Rosenbrock, and Zakharov multidimensional test functions; computational experiments were carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted by the multi-start method, meme combinations comprising the Hooke-Jeeves method were the most successful. These results reflect the rapid convergence of that method to a local optimum in comparison with the other memes, since all memes perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal
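The meme-selection idea can be sketched with a random hyper-heuristic over two toy memes (a random-perturbation descent and a Hooke-Jeeves-like coordinate probe) on the 2-D Rastrigin function. The SMEC population dynamics are replaced here by a plain truncation-plus-restart loop, so this is an assumption-laden illustration rather than the paper's algorithm.

```python
import math, random

# Multi-memetic loop: each refinement step picks a meme at random.
random.seed(2)

def f(x):                   # Rastrigin test function (2-D), to be minimized
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def meme_random_step(x):    # meme 1: accept a random perturbation if better
    cand = [xi + random.uniform(-0.1, 0.1) for xi in x]
    return cand if f(cand) < f(x) else x

def meme_coordinate(x):     # meme 2: Hooke-Jeeves-like coordinate probe
    best = list(x)
    for i in range(len(x)):
        for step in (0.1, -0.1):
            cand = list(best)
            cand[i] += step
            if f(cand) < f(best):
                best = cand
    return best

memes = [meme_random_step, meme_coordinate]
pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(10)]
for _ in range(50):
    pop = [random.choice(memes)(x) for x in pop]   # random hyper-heuristic
    pop.sort(key=f)
    pop = pop[:5] + [[random.uniform(-2, 2), random.uniform(-2, 2)]
                     for _ in range(5)]            # keep elites, restart rest

best = min(pop, key=f)
```

Because both memes only ever accept improvements, survivors descend monotonically; the random meme choice is what lets different basins be refined by whichever local method happens to suit them.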
Multidistribution Center Location Based on Real-Parameter Quantum Evolutionary Clustering Algorithm
Directory of Open Access Journals (Sweden)
Huaixiao Wang
2014-01-01
Full Text Available To determine multidistribution center locations and the distribution scope of each center with high efficiency, the real-parameter quantum-inspired evolutionary clustering algorithm (RQECA) is proposed. RQECA is applied to choose multidistribution center locations on the basis of the conventional fuzzy C-means clustering algorithm (FCM). The combination of the real-parameter quantum-inspired evolutionary algorithm (RQIEA) and FCM can overcome the local search defect of FCM and make the optimization result independent of the choice of initial values. A comparison of FCM, clustering based on a simulated annealing genetic algorithm (CSAGA), and RQECA indicates that RQECA converges as well as CSAGA, but with better search efficiency. Therefore, RQECA is more efficient at solving the multidistribution center location problem.
Evolutionary Algorithms Performance Comparison For Optimizing Unimodal And Multimodal Test Functions
Directory of Open Access Journals (Sweden)
Dr. Hanan A.R. Akkar
2015-08-01
Full Text Available Many evolutionary algorithms have been presented in the last few decades. Some of these have been thoroughly tested and widely used in research, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Differential Evolution Algorithm (DEA). Other recently proposed algorithms are less known and rarely used, such as Stochastic Fractal Search (SFS), Symbiotic Organisms Search (SOS), and the Grey Wolf Optimizer (GWO). This paper attempts a fair, comprehensive comparison of the performance of the well-known algorithms and of the less prevalent, recently proposed ones, using a variety of famous test functions with different characteristics. Two experiments are carried out for each algorithm and test function: the first uses the standard search-space limits of the test functions, while the second multiplies the maximum and minimum limits of the search space by ten. For both experiments, with ten epochs of 100 iterations for each algorithm, the recorded measures are the Average Mean Absolute Error (AMAE), Overall Algorithm Efficiency (OAE), Algorithm Stability (AS), Overall Algorithm Stability (OAS), the Average Processing Time (APT) required by each algorithm, and the Overall successful optimized test function Processing Time (OPT).
Evolutionary Image Enhancement Using Multi-Objective Genetic Algorithm
Dhirendra Pal Singh; Ashish Khare
2013-01-01
Image processing is the art of examining, identifying and judging the significance of images. Image enhancement refers to the attenuation or sharpening of image features, such as edgels, boundaries, or contrast, to make the processed image more useful for analysis. Image enhancement procedures use computers to provide good, improved images for study by human interpreters. In this paper we propose a novel method that uses a Genetic Algorithm with multi-objective criteria to ...
Pourbahman, Zahra; Hamzeh, Ali
2014-01-01
The large number of exact fitness function evaluations makes evolutionary algorithms computationally costly. In some real-world problems, reducing the number of these evaluations is valuable even at the price of greater computational complexity and more time. To this end, we introduce a more effective factor, in place of the factor applied in Adaptive Fuzzy Fitness Granulation with Non-dominated Sorting Genetic Algorithm-II, to filter out worthless individuals more precisely. Our ...
2014-01-01
Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and speed up the computation, cloud computing is a promising solution: most popular is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can successfully infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method and the parallel
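A serial toy of the hybrid GA-PSO step may clarify the combination (the paper distributes the work with Hadoop MapReduce; the update rules, constants, and sphere objective below are generic assumptions, not the paper's parameter-inference setup): half the population is advanced by GA-style crossover and mutation, the other half by a standard PSO velocity update.

```python
import random

# Hybrid GA-PSO sketch on a toy minimization objective.
random.seed(3)

def f(x):
    return sum(xi * xi for xi in x)

dim, pop_size = 4, 20
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
vel = [[0.0] * dim for _ in range(pop_size)]
pbest = [list(x) for x in pop]

for _ in range(100):
    gbest = min(pop, key=f)
    half = pop_size // 2
    # GA half: tournament parents + uniform crossover + Gaussian mutation,
    # with elitist per-slot replacement.
    for i in range(half):
        p1 = min(random.sample(pop, 2), key=f)
        p2 = min(random.sample(pop, 2), key=f)
        child = [random.choice(pair) + random.gauss(0, 0.05)
                 for pair in zip(p1, p2)]
        if f(child) < f(pop[i]):
            pop[i] = child
    # PSO half: standard velocity/position update toward pbest and gbest.
    for i in range(half, pop_size):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.4 * random.random() * (pbest[i][d] - pop[i][d])
                         + 1.4 * random.random() * (gbest[d] - pop[i][d]))
            pop[i][d] += vel[i][d]
        if f(pop[i]) < f(pbest[i]):
            pbest[i] = list(pop[i])

best = min(min(pop, key=f), min(pbest, key=f), key=f)
```

In a MapReduce deployment, the per-individual updates become map tasks and the selection of gbest a reduce step; the logic of each step is unchanged.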
Energy-Efficient Scheduling Problem Using an Effective Hybrid Multi-Objective Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Lvjiang Yin
2016-12-01
Full Text Available Nowadays, manufacturing enterprises face the challenge of just-in-time (JIT) production and energy saving. Therefore, the study of JIT production and energy consumption is necessary and important in manufacturing sectors. Moreover, energy saving can be attained by operational methods and by turning idle machines off and on, which also increases the complexity of problem solving. Thus, most researchers still focus on small-scale problems with one objective in a single-machine environment. However, in real applications the scheduling problem is a multi-objective optimization problem. In this paper, a single-machine scheduling model with controllable processing and sequence-dependent setup times is developed for minimizing the total earliness/tardiness (E/T), cost, and energy consumption simultaneously. An effective multi-objective evolutionary algorithm called the local multi-objective evolutionary algorithm (LMOEA) is presented to tackle this multi-objective scheduling problem. To accommodate the characteristics of the problem, a new solution representation is proposed, which can convert discrete combinatorial problems into continuous problems. Additionally, a multiple local search strategy with a self-adaptive mechanism is introduced into the proposed algorithm to enhance the exploitation ability. The performance of the proposed algorithm is evaluated on instances in comparison with other multi-objective meta-heuristics such as the Nondominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), Multiobjective Particle Swarm Optimization (OMOPSO), and the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D). Experimental results demonstrate that the proposed LMOEA algorithm outperforms its counterparts for this kind of scheduling problem.
Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Richard Lamb
2015-09-01
Full Text Available Within the mind, there is a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a "virtual" student to solve a Piagetian task. Using the STAC-M, a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M with and without inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.
A kNN method that uses a non-natural evolutionary algorithm for ...
African Journals Online (AJOL)
This paper details an evolutionary algorithm that forms a new population by combining genes of three members of the current population. The first member is the best member of the population, the second one is the current member to be replaced and the third one is a member chosen randomly from the current population.
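The three-member recombination can be sketched as follows. The toy objective, elitist replacement rule, and all parameters are assumptions added for illustration; the abstract specifies only which three members contribute genes.

```python
import random

# Each new member combines, gene by gene, the population's best member,
# the member being replaced, and a randomly chosen member.
random.seed(5)

def fitness(x):                         # maximize: negative sphere
    return -sum(xi * xi for xi in x)

pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(12)]
for _ in range(200):
    best = max(pop, key=fitness)
    new_pop = []
    for current in pop:
        other = random.choice(pop)
        child = [random.choice(genes)   # gene from best / current / random
                 for genes in zip(best, current, other)]
        # elitist replacement (assumed): keep whichever is fitter
        new_pop.append(max([child, current], key=fitness))
    pop = new_pop

champion = max(pop, key=fitness)
```

Since genes are only recombined, never mutated, the scheme converges toward the best per-coordinate genes present in the initial population; pairing it with a mutation operator would restore exploration.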
Synthesizing multi-objective H2/H-infinity dynamic controller using evolutionary algorithms
DEFF Research Database (Denmark)
Pedersen, Gerulf; Langballe, A.S.; Wisniewski, Rafal
This paper covers the design of an Evolutionary Algorithm (EA), which should be able to synthesize a mixed H2/H-infinity controller. It will be shown how a system can be expressed as Matrix Inequalities (MI) and these will then be used in the design of the EA. The main objective is to examine whether a mixed...
Synthesizing mixed H2/H-infinity dynamic controller using evolutionary algorithms
DEFF Research Database (Denmark)
Pedersen, Gerulf; Langballe, A.S.; Wisniewski, Rafal
2001-01-01
This paper covers the design of an Evolutionary Algorithm (EA), which should be able to synthesize a mixed H2/H-infinity controller. It will be shown how a system can be expressed as Matrix Inequalities (MI) and these will then be used in the design of the EA. The main objective is to examine whether a mixed...
Evolutionary Algorithms for the Detection of Structural Breaks in Time Series
DEFF Research Database (Denmark)
Doerr, Benjamin; Fischer, Paul; Hilbert, Astrid
2013-01-01
series under consideration is available. Therefore, a black-box optimization approach is our method of choice for detecting structural breaks. We describe an evolutionary algorithm framework which easily adapts to a large number of statistical settings. The experiments on artificial and real-world time...
Franca, PM; Gupta, JND; Mendes, AS; Moscato, P; Veltink, KJ
This paper considers the problem of scheduling part families and jobs within each part family in a flowshop manufacturing cell with sequence-dependent family setup times, where it is desired to minimize the makespan while processing parts (jobs) in each family together. Two evolutionary algorithms-a
Directory of Open Access Journals (Sweden)
Bogna MRÓWCZYŃSKA
2011-01-01
Full Text Available This paper describes an application of an evolutionary algorithm and an artificial immune system to the problem of scheduling optimal daily routes for waste-collection garbage trucks. The optimisation problem is formulated and solved using both methods. The results are presented for an area in one of the Polish cities.
On the Impact of Mutation-Selection Balance on the Runtime of Evolutionary Algorithms
DEFF Research Database (Denmark)
Lehre, Per Kristian; Yao, Xin
2012-01-01
The interplay between mutation and selection plays a fundamental role in the behavior of evolutionary algorithms (EAs). However, this interplay is still not completely understood. This paper presents a rigorous runtime analysis of a non-elitist population-based EA that uses the linear ranking...
A Runtime Analysis of Parallel Evolutionary Algorithms in Dynamic Optimization
DEFF Research Database (Denmark)
Lissovoi, Andrei; Witt, Carsten
2017-01-01
A simple island model with (Formula presented.) islands and migration occurring after every (Formula presented.) iterations is studied on the dynamic fitness function Maze. This model is equivalent to a (Formula presented.) EA if (Formula presented.), i.e., migration occurs during every iteration. It is proved that even for an increased offspring population size up to (Formula presented.), the (Formula presented.) EA is still not able to track the optimum of Maze. If the migration interval is chosen carefully, the algorithm is able to track the optimum even for logarithmic (Formula presented...
On Polymorphic Circuits and Their Design Using Evolutionary Algorithms
Stoica, Adrian; Zebulum, Ricardo; Keymeulen, Didier; Lohn, Jason; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces the concept of polymorphic electronics (polytronics), referring to electronics with superimposed built-in functionality. A function change does not require switches/reconfiguration as in traditional approaches. Instead, the change comes from modifications in the characteristics of devices involved in the circuit, in response to controls such as temperature, power supply voltage (VDD), control signals, light, etc. The paper illustrates polytronic circuits in which the control is done by temperature, morphing signals, and VDD respectively. Polytronic circuits are obtained by evolutionary design/evolvable hardware techniques. These techniques are ideal for polytronics design, a new area that lacks design guidelines and know-how, yet whose requirements/objectives are easy to specify and test. The circuits are evolved/synthesized in two different modes. The first mode explores an unstructured space, in which transistors can be interconnected freely in any arrangement (in simulations only). The second mode uses a Field Programmable Transistor Array (FPTA) model, and the circuit topology is sought as a mapping onto a programmable architecture (these experiments are performed both in simulations and on FPTA chips). The experiments demonstrated the synthesis of polytronic circuits by evolution. The capacity of storing/hiding "extra" functions provides for watermark/invisible functionality; thus polytronics may find uses in intelligence/security applications.
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
Full Text Available A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To address this deficit, a new hybrid algorithm is introduced which uses a hill climber as an additional genetic operator, applied for several steps at each generation. A comparison is made between applying the hill-climbing operator a few times to all members of the population and applying it a larger number of times solely to the best individual; applying it to the whole population is found to yield the better results, substantially improved compared with those obtained using earlier methods.
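The two regimes compared above (a few hill-climbing steps for every member versus many more steps for the best member only) can be sketched on a OneMax bit-string problem, an assumed stand-in for the adaptive-filter fitness; population sizes, budgets, and operators are illustrative assumptions.

```python
import random

# Hybrid EA with a hill climber as an extra operator, under a fixed
# hill-climbing budget per generation spent either on all members or
# on the best member only.
random.seed(6)

n = 40
def fitness(bits):                      # OneMax: number of ones
    return sum(bits)

def hill_climb(bits, steps):
    bits = list(bits)
    for _ in range(steps):
        i = random.randrange(n)
        flipped = list(bits)
        flipped[i] ^= 1
        if fitness(flipped) >= fitness(bits):
            bits = flipped
    return bits

def run(whole_population, generations=15, budget=100):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(8)]
    for _ in range(generations):
        if whole_population:            # spread the budget over everyone
            pop = [hill_climb(p, budget // len(pop)) for p in pop]
        else:                           # spend it all on the current best
            pop.sort(key=fitness, reverse=True)
            pop[0] = hill_climb(pop[0], budget)
        pop.sort(key=fitness, reverse=True)
        parents = pop[:4]               # truncation selection
        children = [[b ^ (random.random() < 1.0 / n) for b in p]
                    for p in parents]   # bit-flip mutation refills the rest
        pop = parents + children
    return max(fitness(p) for p in pop)

score_all = run(True)
score_best = run(False)
```

On this toy both regimes solve the problem; the paper's finding is that on the harder filter-tracking task, spreading the climbing effort across the population is the more robust choice.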
Real-time inverse kinematics for the upper limb: a model-based algorithm using segment orientations.
Borbély, Bence J; Szolgay, Péter
2017-01-17
Model-based analysis of human upper limb movements has key importance in understanding the motor control processes of our nervous system. Various simulation software packages have been developed over the years to perform model-based analysis. These packages provide computationally intensive (and therefore offline) solutions to calculate the anatomical joint angles from motion-captured raw measurement data (also referred to as inverse kinematics). In addition, recent developments in inertial motion sensing technology show that it may replace large, immobile and expensive optical systems with small, mobile and cheaper solutions in cases when a laboratory-free measurement setup is needed. The objective of the presented work is to extend the workflow of measurement and analysis of human arm movements with an algorithm that allows accurate and real-time estimation of anatomical joint angles for a widely used OpenSim upper limb kinematic model when inertial sensors are used for movement recording. The internal structure of the selected upper limb model is analyzed and used as the underlying platform for the development of the proposed algorithm. Based on this structure, a prototype marker set is constructed that facilitates the reconstruction of model-based joint angles using orientation data directly available from inertial measurement systems. The mathematical formulation of the reconstruction algorithm is presented along with the validation of the algorithm on various platforms, including embedded environments. Execution performance tables of the proposed algorithm show significant improvement on all tested platforms. Compared to OpenSim's Inverse Kinematics tool, a 50-15,000x speedup is achieved while maintaining numerical accuracy. The proposed algorithm is capable of real-time reconstruction of standardized anatomical joint angles even in embedded environments, establishing a new way for complex applications to take advantage of accurate and fast model-based inverse
Congestion Relief of Contingent Power Network with Evolutionary Optimization Algorithm
Directory of Open Access Journals (Sweden)
Abhinandan De
2012-03-01
Full Text Available This paper presents a differential evolution optimization technique based methodology for congestion management cost optimization of contingent power networks. In deregulated systems, line congestion, apart from causing stability problems, can increase the cost of electricity. Restraining line flow to a particular level of congestion is therefore imperative from both the stability and the economy point of view. Employing the Congestion Sensitivity Index proposed in this paper, the algorithm can select the congested lines in a power network and then search for a congestion-constrained optimal generation schedule at a minimum congestion management charge, without any load curtailment or installation of FACTS devices. It is shown that the methodology, when applied, can provide better operating conditions in terms of improvement of the bus voltage and loss profile of the system. The efficiency of the proposed methodology has been tested on an IEEE 30-bus benchmark system and the results look promising.
SOLVING THE PROBLEM OF VEHICLE ROUTING BY EVOLUTIONARY ALGORITHM
Directory of Open Access Journals (Sweden)
Remigiusz Romuald Iwańkowicz
2016-03-01
Full Text Available In the presented work the vehicle routing problem is formulated, concerning the planning of waste collection by one garbage truck from a certain number of collection points. The garbage truck begins its route at the base point, collects the load at subsequent collection points, then drives the wastes to the disposal site (landfill or sorting plant) and returns to visit further collection points. Each time the garbage truck is filled, it goes to the disposal site; it returns to the base once wastes from all collection points have been delivered. The optimization model is based on a genetic algorithm in which an individual is the whole garbage collection plan; a permutation is proposed as the individual's code.
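The permutation encoding with capacity-triggered disposal trips can be sketched directly; the geometry, loads, capacity, and GA operators (order crossover, swap mutation) are assumptions for illustration, not the paper's instance data.

```python
import math, random

# Permutation-encoded GA for a single-truck waste-collection route.
random.seed(7)

base = (0.0, 0.0)
landfill = (5.0, 5.0)
points = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
loads = [random.randint(1, 3) for _ in points]
capacity = 6

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(perm):
    cost, pos, load = 0.0, base, 0
    for i in perm:
        if load + loads[i] > capacity:      # full: unload at the landfill
            cost += dist(pos, landfill)
            pos, load = landfill, 0
        cost += dist(pos, points[i])
        pos, load = points[i], load + loads[i]
    return cost + dist(pos, landfill) + dist(landfill, base)

def ox_crossover(p1, p2):                   # order crossover on permutations
    a, b = sorted(random.sample(range(len(p1)), 2))
    mid = p1[a:b]
    rest = [g for g in p2 if g not in set(mid)]
    return rest[:a] + mid + rest[a:]

pop = [random.sample(range(len(points)), len(points)) for _ in range(30)]
for _ in range(150):
    pop.sort(key=route_cost)
    elite = pop[:10]
    children = []
    while len(children) < 20:
        c = ox_crossover(*random.sample(elite, 2))
        if random.random() < 0.3:           # swap mutation
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children

best = min(pop, key=route_cost)
```

The disposal trips are not part of the genotype: they are inserted deterministically by the cost function whenever the capacity rule fires, which keeps the search space a plain permutation space.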
Directory of Open Access Journals (Sweden)
Yongyi Shou
2014-01-01
Full Text Available A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual level structure. In the upper level a set of agents make decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. The neighborhood competition operator and self-learning operator are designed to improve the agent’s energy, that is, the portfolio profit. In the lower level the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances were generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.
The (1+λ) evolutionary algorithm with self-adjusting mutation rate
DEFF Research Database (Denmark)
Doerr, Benjamin; Witt, Carsten; Gießen, Christian
2017-01-01
We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the (1+λ) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the (1+λ) EA finds the optimum in an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n). This time is asymptotically smaller than the optimization time of the classic (1+λ) EA. Previous work shows that this performance is best-possible among all λ-parallel mutation-based unbiased black-box algorithms. This result shows...
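The described scheme translates almost line for line into code; the problem size, the value of λ, and the usual clamping of the rate to [1/n, 1/4] are assumptions of this sketch.

```python
import random

# Self-adjusting (1+lambda) EA on OneMax: half the offspring mutate at
# rate 2r, half at r/2; r follows the rate of the best offspring.
random.seed(8)

n, lam = 50, 8
x = [random.randint(0, 1) for _ in range(n)]
r = 1.0 / n

def onemax(b):
    return sum(b)

evals = 0
while onemax(x) < n:
    offspring = []
    for i in range(lam):
        rate = 2 * r if i < lam // 2 else r / 2
        rate = min(max(rate, 1.0 / n), 0.25)     # assumed clamping
        y = [b ^ (random.random() < rate) for b in x]
        offspring.append((onemax(y), rate, y))
        evals += 1
    best_f, best_rate, best_y = max(offspring, key=lambda t: t[0])
    if best_f >= onemax(x):                      # elitist acceptance
        x = best_y
    r = min(max(best_rate, 1.0 / n), 0.25)       # adopt the winning rate
```

Because the rate always follows the subpopulation that produced the best offspring, it drifts toward larger values early (when big jumps help) and back down near the optimum, which is the mechanism behind the improved runtime bound.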
Sounds unheard of evolutionary algorithms as creative tools for the contemporary composer
DEFF Research Database (Denmark)
Dahlstedt, Palle
2004-01-01
Evolutionary algorithms are studied as tools for generating novel musical material in the form of musical scores and synthesized sounds. The choice of genetic representation defines a space of potential music. This space is explored using evolutionary algorithms, in search of useful musical material. In this way, the computer becomes a creative tool that is integrated into the artistic process. Several implementations of these ideas are presented, based on interactive evolution, genetic algorithms and artificial life models. A number of different representations of music and sound are discussed, with a focus on parameterized sound synthesis and the representation of scores as recursively described trees. The dynamic behavior of a class of sound synthesis engines, consisting of networks of modulating oscillators, is also investigated. Finally, a number of compositions that have been...
Directory of Open Access Journals (Sweden)
J. L. Guardado
2014-01-01
Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.
Directory of Open Access Journals (Sweden)
Nurmaulidar Nurmaulidar
2015-04-01
Full Text Available The Travelling Salesman Problem (TSP) is a complex optimization problem that is difficult to solve and requires quite a long time for a large number of cities. Evolutionary algorithms are heuristic methods well suited to solving such complex optimization problems. Like many other algorithms, evolutionary algorithms also experience a premature convergence phenomenon, whereby variation is eliminated from a population of fairly fit individuals before a complete solution is achieved. A method is therefore required to delay convergence. A specific method of fitness sharing called phenotype fitness sharing was used in this research. The aim of this research is to find out whether fitness sharing in an evolutionary algorithm is able to optimize the TSP. Two concepts of evolutionary algorithm were used in this research: the first used single elitism and the other used a federated solution. The two concepts were tested with the fitness-sharing method using thresholds of 0.25, 0.50 and 0.75, and the results were then compared to a method without fitness sharing. The results indicate that with the single-elitism concept, fitness sharing gives a more optimal result for data of 100-1000 cities, while with the federated-solution concept it yields a more optimal result for data above 1000 cities, as well as a better spread of solutions compared to the method without fitness sharing.
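Phenotype fitness sharing for the TSP can be sketched as follows. Here the phenotype distance between two tours is taken to be the fraction of unshared edges, and a triangular sharing function with threshold sigma mirrors the thresholds (0.25, 0.50, 0.75) used in the study; the distance definition and all names are illustrative assumptions, not necessarily the paper's.

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def edge_set(tour):
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def phenotype_distance(t1, t2):
    """Distance in [0, 1]: fraction of edges NOT shared by the two tours."""
    e1, e2 = edge_set(t1), edge_set(t2)
    return 1.0 - len(e1 & e2) / len(e1)

def shared_fitness(pop, dist, sigma=0.5):
    """Raw fitness (1/length) divided by the niche count sum_j sh(d_ij),
    with the triangular sharing function sh(d) = 1 - d/sigma for d < sigma."""
    raw = [1.0 / tour_length(t, dist) for t in pop]
    shared = []
    for i, ti in enumerate(pop):
        niche = sum(max(0.0, 1.0 - phenotype_distance(ti, tj) / sigma) for tj in pop)
        shared.append(raw[i] / niche)
    return shared

# Tiny 4-city example: two identical tours and one distinct tour.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
pop = [[0, 1, 2, 3], [0, 1, 2, 3], [0, 2, 1, 3]]
shared = shared_fitness(pop, dist, sigma=0.5)
```

Duplicated tours occupy one niche, so their shared fitness drops below their raw fitness, which is exactly the pressure that delays premature convergence.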
Evolutionary algorithms for the optimal laser control of molecular orientation
Energy Technology Data Exchange (ETDEWEB)
Atabek, Osman [Laboratoire de Photophysique Moléculaire du CNRS, Bâtiment 213, Campus d'Orsay, 91405 Orsay (France); Dion, Claude M [CERMICS, École Nationale des Ponts et Chaussées, 6 and 8, avenue Blaise Pascal, cité Descartes, Champs-sur-Marne, 77455 Marne-la-Vallée (France); Yedder, Adel Ben Haj [CERMICS, École Nationale des Ponts et Chaussées, 6 and 8, avenue Blaise Pascal, cité Descartes, Champs-sur-Marne, 77455 Marne-la-Vallée (France)
2003-12-14
In terms of optimal control, laser-induced molecular orientation is an optimization problem involving a global minimum search on a multi-dimensional surface function of the varying parameters characterizing the laser pulse (frequency, peak intensity, temporal shape). Genetic algorithms, aiming at the optimization of different possible targets, may temporarily be trapped in a local minimum before reaching the global one. A careful study of such local (robust) minima provides a key for the thorough interpretation of the orientation dynamics in terms of basic mechanisms. Two targets are retained: the first, simple, one searching for an angle between the molecular and laser polarization axes as close as possible to zero (orientation) at a given time; the second, hybrid, one combining the efficiency of orientation with its duration. Their respective roles are illustrated with reference to two molecular systems, HCN and LiF, treated at the rigid-rotor approximation level. A sudden and asymmetric laser pulse (provided by a frequency ω superposed on its second harmonic 2ω) leads to the kick mechanism. The result is a very fast (as compared to the rotational period) angular momentum transfer to the molecule, which turns out to be responsible for an efficient orientation after the laser pulse is turned off.
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Directory of Open Access Journals (Sweden)
Leilei Cao
2016-01-01
Full Text Available A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in the GEA is crossed with the current global best one instead of a randomly selected individual. The current best individual serves as a guide to attract offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that the GEA outperformed the other three typical global optimization algorithms with which it was compared.
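A minimal sketch of the guiding idea (crossover with the current global best plus a mutation probability that decays over the generations) is given below; the local-search step is omitted, and all parameter values, names, and the decay schedule are illustrative assumptions rather than the paper's settings.

```python
import random

def gea_minimize(f, dim, bounds, pop_size=30, gens=200, seed=1):
    """Guiding-EA sketch: every individual is crossed with the current
    global best (uniform crossover), then mutated with a probability that
    decays over the generations; greedy survivor selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    best = min(range(pop_size), key=lambda i: fit[i])
    for g in range(gens):
        p_mut = 0.3 * (1 - g / gens) + 0.05    # dynamic mutation probability
        for i in range(pop_size):
            # uniform crossover with the current global best individual
            child = [pop[best][d] if rng.random() < 0.5 else pop[i][d]
                     for d in range(dim)]
            for d in range(dim):
                if rng.random() < p_mut:
                    child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.1 * (hi - lo))))
            fc = f(child)
            if fc < fit[i]:                    # greedy: keep the child only if better
                pop[i], fit[i] = child, fc
                if fc < fit[best]:
                    best = i
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = gea_minimize(sphere, dim=5, bounds=(-5, 5))
```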
Comprehensive Weighted Clique Degree Ranking Algorithms and Evolutionary Model of Complex Network
Directory of Open Access Journals (Sweden)
Xu Jie
2016-01-01
Full Text Available This paper analyses the degree ranking (DR) algorithm and proposes a new comprehensive weighted clique degree ranking (CWCDR) algorithm for ranking the importance of nodes in complex networks. Simulation results show that the CWCDR algorithm not only overcomes the limitations of the degree ranking algorithm, but also finds important nodes in complex networks more precisely and effectively. To address the shortcomings of the small-world and BA models, this paper proposes an evolutionary model of complex networks based on the CWCDR algorithm, named the CWCDR model. Simulation results show that the CWCDR model accords with a power-law distribution. Compared with the BA model, it has better average shortest path length and clustering coefficient. Therefore, the CWCDR model is more consistent with real networks.
Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.
2017-07-01
Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.
National Aeronautics and Space Administration — This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman...
naiveBayesCall: an efficient model-based base-calling algorithm for high-throughput sequencing.
Kao, Wei-Chun; Song, Yun S
2011-03-01
Immense amounts of raw instrument data (i.e., images of fluorescence) are currently being generated using ultra high-throughput sequencing platforms. An important computational challenge associated with this rapid advancement is to develop efficient algorithms that can extract accurate sequence information from raw data. To address this challenge, we recently introduced a novel model-based base-calling algorithm that is fully parametric and has several advantages over previously proposed methods. Our original algorithm, called BayesCall, significantly reduced the error rate, particularly in the later cycles of a sequencing run, and also produced useful base-specific quality scores with a high discrimination ability. Unfortunately, however, BayesCall is too computationally expensive to be of broad practical use. In this article, we build on our previous model-based approach to devise an efficient base-calling algorithm that is orders of magnitude faster than BayesCall, while still maintaining a comparably high level of accuracy. Our new algorithm is called naiveBayesCall, and it utilizes approximation and optimization methods to achieve scalability. We describe the performance of naiveBayesCall and demonstrate how improved base-calling accuracy may facilitate de novo assembly and SNP detection when the sequence coverage depth is low to moderate.
Directory of Open Access Journals (Sweden)
K. Roshangar
2016-09-01
Full Text Available Introduction: Exact prediction of the sediment rate transported by rivers is of utmost importance in water resources projects. Erosion and sediment transport are among the most complex hydrodynamic processes. Although different studies have developed intelligent models based on neural networks, they are not widely used because of the lack of explicitness and the complexity involved in choosing and architecting a proper network. In this study, a genetic expression programming model (an important branch of evolutionary algorithms) for predicting sediment load is selected and investigated as an intelligent approach, along with other well-known classical and empirical methods such as Larsen's equation, the Engelund-Hansen equation and Bagnold's equation. Materials and Methods: In this study, in order to improve explicit prediction of the sediment load of Gotoorchay, located in the Aras catchment, northwestern Iran (latitude 38°24´33.3˝, longitude 44°46´13.2˝), genetic programming (GP) and the genetic algorithm (GA) were applied. Moreover, semi-empirical models for predicting total sediment load and the rating curve were used. Finally, all the methods were compared and the best ones were introduced. Two statistical measures were used to compare the performance of the different models, namely the root mean square error (RMSE) and the determination coefficient (DC), which indicate the discrepancy between the observed and computed values. Results and Discussion: The statistical results obtained from the analysis of the genetic programming method for both selected model groups indicated that model 4, which includes only the discharge of the river, had the highest DC and the lowest RMSE in the testing stage relative to the other studied models (DC = 0.907, RMSE = 0.067). Although several parameters were applied in the other models, these models were complicated and gave weak predictions. Our results showed that the model 9
An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms
Directory of Open Access Journals (Sweden)
Yushan Zhang
2015-01-01
Full Text Available Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results have been proposed for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate the runtime upper bound of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are impacted by the number of individuals, the problem dimension n, the searching range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions whereby the average runtime of the considered EP can be no more than a polynomial of n. The condition is that the Lebesgue measure of the optimal neighborhood is larger than a combinatorial calculation of an exponential and the given polynomial of n.
THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL
Energy Technology Data Exchange (ETDEWEB)
Werth, D.; O' Steen, L.
2008-02-11
We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, thus operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.
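The calibration loop the abstract describes (perturb 'child' parameters relative to their 'parents', score each candidate by the rms error against synthetic observations) can be sketched on a toy two-parameter model. The model, first guess, and step size below are purely illustrative stand-ins for the 23 RAMS parameters and the mesoscale simulation.

```python
import math
import random

def model(params, xs):
    """Toy stand-in for the forward model: amplitude and frequency."""
    a, b = params
    return [a * math.sin(b * x) for x in xs]

def rms(ys, zs):
    return math.sqrt(sum((y - z) ** 2 for y, z in zip(ys, zs)) / len(ys))

def calibrate(obs, xs, first_guess, sigma=0.3, children=20, gens=150, seed=3):
    """(1 + children) evolution strategy: each child is a Gaussian
    perturbation of the parent parameters; the cost is the rms misfit
    between the model output and the synthetic observations."""
    rng = random.Random(seed)
    parent = list(first_guess)
    cost = rms(model(parent, xs), obs)
    for _ in range(gens):
        for _ in range(children):
            child = [p + rng.gauss(0, sigma) for p in parent]
            c = rms(model(child, xs), obs)
            if c < cost:                      # greedy survivor selection
                parent, cost = child, c
    return parent, cost

xs = [i / 10 for i in range(50)]
obs = model([1.3, 0.7], xs)                   # synthetic observations ("truth")
est, final_cost = calibrate(obs, xs, first_guess=[1.0, 1.0])
```

As the abstract notes, how the children are perturbed (here, the fixed Gaussian sigma) strongly affects how efficiently such a loop converges.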
A Probability-based Evolutionary Algorithm with Mutations to Learn Bayesian Networks
Directory of Open Access Journals (Sweden)
Sho Fukuda
2014-12-01
Full Text Available Bayesian networks are regarded as one of the essential tools for analyzing causal relationships between events from data. Learning the structure of highly reliable Bayesian networks from data as quickly as possible is one of the important problems that several studies have tried to solve. In recent years, probability-based evolutionary algorithms have been proposed as a new efficient approach to learning Bayesian networks. In this paper, we focus on one of the probability-based evolutionary algorithms, called PBIL (Probability-Based Incremental Learning), and propose a new mutation operator. Through performance evaluation, we found that the proposed mutation operator performs well in learning Bayesian networks.
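PBIL's probability-vector update with an added mutation operator can be sketched on a toy bit-string problem. Here OneMax stands in for the Bayesian-network structure score, and the mutation operator shown (a random shift of single probabilities) is a generic illustration, not necessarily the operator proposed in the paper.

```python
import random

def pbil_onemax(n=40, pop=30, gens=120, lr=0.1, p_mut=0.02, shift=0.05, seed=7):
    """PBIL sketch: sample a population from the probability vector, move the
    vector toward the generation's best sample, then randomly mutate it."""
    rng = random.Random(seed)
    p = [0.5] * n                              # probability model over bit strings
    best, best_fit = None, -1
    for _ in range(gens):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop)]
        gen_best = max(samples, key=sum)
        if sum(gen_best) > best_fit:
            best, best_fit = gen_best, sum(gen_best)
        for i in range(n):
            p[i] = (1 - lr) * p[i] + lr * gen_best[i]   # learn from the best sample
            if rng.random() < p_mut:                    # mutate the model itself
                p[i] += rng.choice((-shift, shift))
            p[i] = min(0.98, max(0.02, p[i]))           # keep probabilities off 0/1
    return best, best_fit

best, fit = pbil_onemax()
```

Mutating the probability vector (rather than individual samples) keeps some diversity in the model and counteracts premature fixation of the probabilities.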
A New Multiobjective Evolutionary Algorithm for Community Detection in Dynamic Complex Networks
Directory of Open Access Journals (Sweden)
Guoqiang Chen
2013-01-01
Full Text Available Community detection in dynamic networks is an important research topic and has received an enormous amount of attention in recent years. Modularity was selected as a measure to quantify the quality of the community partition in previous detection methods, but modularity has been shown to suffer from resolution limits. In this paper, we propose a novel multiobjective evolutionary algorithm for community detection in dynamic networks based on the framework of the nondominated sorting genetic algorithm. Modularity density, which can address the limitations of the modularity function, is adopted to measure the snapshot cost, and normalized mutual information is selected to measure the temporal cost. Knowledge of the problem's characteristics is used in designing the genetic operators. Furthermore, a local search operator is designed, which improves the effectiveness and efficiency of community detection. Experimental studies based on synthetic datasets show that the proposed algorithm obtains better performance than the compared algorithms.
Evolutionary Algorithms Applied to Antennas and Propagation: A Review of State of the Art
Directory of Open Access Journals (Sweden)
Sotirios K. Goudos
2016-01-01
Full Text Available A review of evolutionary algorithms (EAs) with applications to antenna and propagation problems is presented. EAs have emerged as viable candidates for global optimization problems and have been attracting the attention of the research community interested in solving real-world engineering problems, as evidenced by the fact that a very large number of antenna design problems have been addressed in the literature in recent years by using EAs. In this paper, our primary focus is on Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), and Differential Evolution (DE), though we also briefly review other recently introduced nature-inspired algorithms. An overview of case examples optimized by each family of algorithms is included in the paper.
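Of the three families reviewed, Differential Evolution is the most compact to sketch. The classic DE/rand/1/bin variant below minimizes a stand-in objective; in antenna design the objective would instead be a figure of merit returned by an electromagnetic solver, and the parameter values here are conventional defaults, not recommendations from the review.

```python
import random

def de_rand_1_bin(f, dim, bounds, np_=25, F=0.6, CR=0.9, gens=150, seed=5):
    """Classic DE/rand/1/bin: mutant v = a + F*(b - c), binomial crossover
    with the target vector, greedy one-to-one survivor selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)            # ensure at least one mutant gene
            trial = [min(hi, max(lo, pop[a][d] + F * (pop[b][d] - pop[c][d])))
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                   # greedy selection
                pop[i], fit[i] = trial, ft
    i = min(range(np_), key=lambda j: fit[j])
    return pop[i], fit[i]

sphere = lambda x: sum(v * v for v in x)
x, fx = de_rand_1_bin(sphere, dim=6, bounds=(-10, 10))
```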
An adaptive evolutionary algorithm for traveling salesman problem with precedence constraints.
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approaches, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated through computational experiments.
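Feasibility-preserving operators for the precedence-constrained TSP can be sketched as follows: tours are built as randomized topological orders of the precedence DAG, and a swap mutation is accepted only if it keeps every precedence pair intact. This illustrates the feasibility guarantee in general, not the paper's specific operators; the constraint set and city count are made up for the example.

```python
import random

def is_feasible(tour, prec):
    """True if every (a, b) pair in prec has a visited before b."""
    pos = {c: i for i, c in enumerate(tour)}
    return all(pos[a] < pos[b] for a, b in prec)

def random_feasible_tour(cities, prec, rng):
    """Build a tour by repeatedly picking a random city whose predecessors
    are already placed (a randomized topological order; prec must be acyclic)."""
    preds = {c: {a for a, b in prec if b == c} for c in cities}
    placed, tour = set(), []
    while len(tour) < len(cities):
        ready = [c for c in cities if c not in placed and preds[c] <= placed]
        c = rng.choice(ready)
        tour.append(c)
        placed.add(c)
    return tour

def feasible_swap_mutation(tour, prec, rng, tries=20):
    """Swap two positions only when the result still satisfies every
    precedence pair, so feasibility is preserved by construction."""
    for _ in range(tries):
        i, j = rng.sample(range(len(tour)), 2)
        child = tour[:]
        child[i], child[j] = child[j], child[i]
        if is_feasible(child, prec):
            return child
    return tour[:]

rng = random.Random(11)
cities = list(range(8))
prec = [(0, 3), (1, 4), (2, 7)]       # city 0 before 3, 1 before 4, 2 before 7
t = random_feasible_tour(cities, prec, rng)
m = feasible_swap_mutation(t, prec, rng)
```

Because both operators only ever emit feasible tours, no repair step is needed between generations.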
Energy Technology Data Exchange (ETDEWEB)
Gomez-Iglesias, A.; Vega-Rodriguez, M. A.; Castejon Mangana, C.; Rubio del Solar, M.; Cardenas Montes, M.
2007-07-01
In this paper we present a proposal for enhancing the configuration of a stellarator device in order to improve the performance of these magnetic fusion devices. To achieve this goal, we propose the use of grid computing with genetic and evolutionary algorithms. Grid computing allows many experiments to be performed in parallel. Genetic algorithms avoid having to explore the whole solution space, because the number of parameters involved in the configuration of these devices and the number of combinations of their values make it impossible to explore all the possibilities. (Author)
Vahid Aryadoust
2015-01-01
This study applies evolutionary algorithm-based (EA-based) symbolic regression to assess the ability of metacognitive strategy use tested by the metacognitive awareness listening questionnaire (MALQ) and lexico-grammatical knowledge to predict listening comprehension proficiency among English learners. Initially, the psychometric validity of the MALQ subscales, the lexico-grammatical test, and the listening test was examined using the logistic Rasch model and the Rasch-Andrich rating scale mo...
Development of a Multi-Objective Evolutionary Algorithm for Strain-Enhanced Quantum Cascade Lasers
Directory of Open Access Journals (Sweden)
David Mueller
2016-07-01
Full Text Available An automated design approach using an evolutionary algorithm for the development of quantum cascade lasers (QCLs) is presented. Our algorithmic approach merges computational intelligence techniques with the physics of device structures, representing a design methodology that reduces experimental effort and costs. The algorithm was developed to produce QCLs with a three-well, diagonal-transition active region and a five-well injector region. Specifically, we applied this technique to AlxGa1−xAs/InyGa1−yAs strained active region designs. The algorithmic approach is a non-dominated sorting method using four aggregate objectives: target wavelength, population inversion via longitudinal-optical (LO) phonon extraction, injector level coupling, and an optical gain metric. Analysis indicates that the most plausible device candidates are a result of the optical gain metric and a total aggregate of all objectives. However, design limitations exist in many of the resulting candidates, indicating the need for additional objective criteria and parameter limits to improve the application of this and other evolutionary algorithm methods.
Directory of Open Access Journals (Sweden)
Qianwang Deng
2017-01-01
Full Text Available The flexible job-shop scheduling problem (FJSP) is an NP-hard puzzle which inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is first used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration times. Moreover, numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms.
Directory of Open Access Journals (Sweden)
Anesya Violita
2012-09-01
Full Text Available Optimizing the amount of generation to meet the load demand at the lowest possible cost is one of the classic problems in power system operation, better known as Economic Dispatch (ED). ED optimization has been carried out with a variety of Artificial Intelligence (AI) methods. In this final project, the AI method applied to ED optimization is the Differential Evolutionary (DE) Algorithm. The DE Algorithm is applied to the Java-Bali 500 kV power system, and the results are compared with two other methods, Lagrange and PSO. The DE Algorithm proved able to find an optimal solution to the ED problem, with cost savings of Rp. 104.76 million/hour (1.545%) compared with the PSO method and generation cost savings of Rp. 1,167.72 million/hour (14.892%) compared with the Lagrange method. As a reference, the DE Algorithm was also simulated on the IEEE 30-bus power system, with the results again compared against those obtained using the Lagrange and PSO methods. These results also prove that the DE Algorithm finds a more optimal solution, with cost savings of 0.06 $/hour (about 0.008%) compared with the PSO method and 23.47 $/hour (2.92%) compared with the Lagrange method.
Combining model based and data based techniques in a robust bridge health monitoring algorithm.
2014-09-01
Structural Health Monitoring (SHM) aims to analyze civil, mechanical and aerospace systems in order to assess incipient damage occurrence. In this project, we are concerned with the development of an algorithm within the SHM paradigm for applicat...
Directory of Open Access Journals (Sweden)
C. Fernandez-Lozano
2013-01-01
Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using the SVM as a fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
Dalla Vedova, Matteo Davide Lorenzo; Maggiore, Paolo
2016-01-01
In order to detect incipient failures due to progressive wear of a primary flight command electro-hydraulic actuator (EHA), prognostics could employ several approaches; the choice of the best one is driven by the efficacy shown in failure detection, since not all algorithms may be useful for the proposed purpose. In other words, some of them could be suitable only for certain applications while giving no useful results for others. Developing a fault detection algorithm able...
Directory of Open Access Journals (Sweden)
Y. Tang
2006-01-01
Full Text Available This study provides a comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small
Directory of Open Access Journals (Sweden)
Thatchai Thepphakorn
2015-01-01
Full Text Available This paper outlines the development of a new evolutionary algorithms based timetabling (EAT) tool for solving course scheduling problems that includes a genetic algorithm (GA) and a memetic algorithm (MA). Reproduction processes may generate infeasible solutions. Previous research has used repair processes that have been applied after a population of chromosomes has been generated. This research developed a new approach which (i) modified the genetic operators to prevent the creation of infeasible solutions before chromosomes were added to the population; and (ii) included the clonal selection algorithm (CSA) and the elitist strategy (ES) to improve the quality of the solutions produced. This approach was adopted by both the GA and MA within the EAT. The MA was further modified to include hill climbing local search. The EAT program was tested using 14 benchmark timetabling problems from the literature using a sequential experimental design, which included a fractional factorial screening experiment. Experiments were conducted to (i) test the performance of the proposed modified algorithms; (ii) identify which factors and interactions were statistically significant; (iii) identify appropriate parameters for the GA and MA; and (iv) compare the performance of the various hybrid algorithms. The genetic algorithm with modified genetic operators produced an average improvement of over 50%.
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Engineering Department, Shiraz University of Technology, Shiraz (Iran)
2009-08-15
This paper introduces a robust searching hybrid evolutionary algorithm to solve the multi-objective Distribution Feeder Reconfiguration (DFR) problem. The main objectives of the DFR are to minimize the real power loss, the deviation of the nodes' voltage, and the number of switching operations, and to balance the loads on the feeders. Because the objectives are different and non-commensurable, it is difficult to solve the problem by conventional approaches that optimize a single objective. This paper presents a new approach based on norm3 for the DFR problem. In the proposed method, the objective functions are considered as a vector, and the aim is to maximize the distance (norm2) between the objective function vector and the worst objective function vector while the constraints are met. Since the proposed DFR is a multi-objective and non-differentiable optimization problem, a new hybrid evolutionary algorithm (EA) based on the combination of Honey Bee Mating Optimization (HBMO) and Discrete Particle Swarm Optimization (DPSO), called DPSO-HBMO, is employed to solve it. The results of the proposed reconfiguration method are compared with the solutions obtained by other approaches, the original DPSO and HBMO, over different distribution test systems. (author)
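The scalarization idea, maximizing the Euclidean distance between the objective vector and the worst objective vector, can be sketched in a few lines. Normalizing each objective by its worst value is an assumption added here to make the objectives commensurable; the example numbers are invented for illustration.

```python
import math

def norm2_score(objectives, worst):
    """Score a minimization objective vector by its Euclidean distance from
    the worst-case vector (larger distance = better), with each objective
    normalized by its worst value so the terms are comparable."""
    return math.sqrt(sum(((w - o) / w) ** 2 for o, w in zip(objectives, worst)))

# Hypothetical worst-case values: loss (kW), voltage deviation, switchings.
worst = [120.0, 0.08, 30.0]
a = [80.0, 0.05, 12.0]     # a configuration that improves all three objectives
b = [110.0, 0.07, 28.0]    # a configuration close to the worst case
better = norm2_score(a, worst) > norm2_score(b, worst)
```

An EA can then maximize this single score while the network constraints are enforced separately.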
Wismans, Luc Johannes Josephus; van Berkum, Eric C.; Bliemer, Michiel; Allkim, T.P.; van Arem, Bart
2010-01-01
Multi-objective optimization of the externalities of traffic is performed by solving a network design problem in which Dynamic Traffic Management measures are used. The resulting Pareto-optimal set is determined by employing the SPEA2+ evolutionary algorithm.
National Research Council Canada - National Science Library
Bator, Marcin; Nieniewski, Mariusz
2012-01-01
… This optimization is performed by the evolutionary algorithm using an auxiliary mass classifier. Brightness along the radius of the circularly symmetric template is coded indirectly by its second derivative …
Directory of Open Access Journals (Sweden)
B. Y. Qu
2017-01-01
Full Text Available Portfolio optimization problems involve selecting different assets to invest in so as to maximize the overall return and simultaneously minimize the overall risk. The complexity of the optimal asset allocation problem increases with the number of assets available to select from, and the optimization becomes computationally challenging when there are more than a few hundred assets. To reduce the complexity of large-scale portfolio optimization, this paper proposes two asset preselection procedures that consider the return and risk of individual assets and pairwise correlations to remove assets that are unlikely to be selected into any portfolio. With these asset preselection methods, the number of assets considered for inclusion in a portfolio can be increased to thousands. To test the effectiveness of the proposed methods, a Normalized Multiobjective Evolutionary Algorithm based on Decomposition (NMOEA/D) and several other commonly used multiobjective evolutionary algorithms are applied and compared. Six experiments with different settings were carried out. The experimental results show that with the proposed methods the simulation time is reduced while return-risk trade-off performance is significantly improved. Moreover, the NMOEA/D outperforms the other compared algorithms on all experiments in the comparative analysis.
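A toy version of the preselection idea, combining return-risk dominance screening with a correlation-redundancy filter, might look as follows. The thresholds, data, and exact rules are illustrative assumptions, not the paper's actual procedures:

```python
import numpy as np

def preselect(returns, risks, corr, max_corr=0.95):
    """Return indices of assets that survive two screens: (a) not dominated
    in (return, risk) by another asset, (b) not nearly duplicated by a
    highly correlated asset with a better return."""
    n = len(returns)
    keep = set(range(n))
    for i in range(n):
        for j in range(n):
            if i == j or i not in keep:
                continue
            # j dominates i: at least as much return, no more risk, strict somewhere
            dominated = (returns[j] >= returns[i] and risks[j] <= risks[i]
                         and (returns[j] > returns[i] or risks[j] < risks[i]))
            redundant = corr[i, j] > max_corr and returns[j] > returns[i]
            if dominated or redundant:
                keep.discard(i)
                break
    return sorted(keep)

returns = np.array([0.10, 0.08, 0.12, 0.07])
risks   = np.array([0.20, 0.25, 0.18, 0.15])
corr    = np.eye(4)
corr[0, 1] = corr[1, 0] = 0.98  # assets 0 and 1 nearly duplicate each other
print(preselect(returns, risks, corr))  # → [2, 3]
```

Only the surviving assets are passed to the multiobjective EA, which is what lets the universe grow to thousands of assets without a matching growth in search cost.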
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is a perennial topic in financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better predictive ability. In this paper, an evolutionary framework is proposed that uses an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) to predict currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model's performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Analysis of the results suggests that the proposed model, built on the ISFL algorithm with the CEFLANN network, is a promising predictor of currency exchange rates compared to the other models in the study.
Cubic time algorithms of amalgamating gene trees and building evolutionary scenarios
2012-01-01
Background A long-recognized problem is the inference of a supertree S that amalgamates a given set {Gj} of trees Gj, with the leaves of each Gj assigned homologous elements. We build on an approach that finds the tree S by minimizing the total cost of the mappings αj of the individual gene trees Gj into S. Traditionally, this cost is defined essentially as the sum of duplications and gaps in each αj. The classical problem is to minimize the total cost, where S ranges over the set of all trees that contain an exhaustive non-redundant set of species from all input Gj. Results We suggest a reformulation of the classical NP-hard problem of building a supertree in terms of the global minimization of the same cost functional, but only over species trees S that consist of clades belonging to a fixed set P (e.g., an exhaustive set of clades in all Gj). We developed a deterministic solving algorithm with low-degree polynomial (typically cubic) time complexity with respect to the size of the input data. We define an extensive set of elementary evolutionary events and suggest an original definition of a mapping β of a tree G into a tree S. We introduce the cost functional c(G, S, f) and define the mapping β as the global minimum of this functional with respect to the variable f, in which sense it generalizes the classical mapping α. We reformulate the classical NP-hard mapping (reconciliation) problem by introducing time slices into the species tree S and present a cubic-time algorithm to compute the mapping β. We introduce two novel definitions of the evolutionary scenario, based on the mapping β or on a random process of gene evolution along a species tree. Conclusions The developed algorithms are mathematically proved, which justifies the following statements. The supertree-building algorithm finds exactly the global minimum of the total cost if only gene duplications and losses are allowed and the given set of gene trees satisfies a certain condition. The mapping
Directory of Open Access Journals (Sweden)
Dawei Chen
2015-01-01
Full Text Available This paper analyzes the impact factors and principles of siting urban refueling stations and proposes a three-stage siting method. The main objective of the method is to minimize refueling vehicles’ detour time. The first stage identifies the most frequently traveled road segments for siting refueling stations. The second stage adds refueling stations to serve vehicles whose demands are not directly satisfied by the stations identified in the first stage. The last stage further adjusts and optimizes the refueling station plan generated by the first two stages. A genetic simulated annealing algorithm is proposed to solve the optimization problem in the second stage, and its results are compared to those of a genetic algorithm. A case study demonstrates the effectiveness of the proposed method and algorithm. The results indicate that the proposed method can provide practical and effective solutions that help planners and government agencies make informed refueling station location decisions.
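The annealing half of the hybrid can be illustrated with a plain simulated-annealing loop on a toy one-dimensional "detour time" cost with a local and a global optimum. This is a generic Metropolis-acceptance sketch, not the paper's genetic simulated annealing algorithm; the cost surface and parameters are made up:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500, seed=1):
    """Plain simulated annealing: worsening moves are accepted with
    probability exp(-delta/T), so the early search can escape local optima."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

# toy cost: local optimum at x=4 (cost 1), global optimum at x=0 (cost 0)
cost = lambda x: min((x - 4) ** 2 + 1, x * x)
best = anneal(cost, lambda x, r: x + r.choice([-1, 1]), x0=6)
assert cost(best) <= 1  # at least the local optimum is always reached
```

In the hybrid of the paper, this acceptance rule is combined with genetic recombination so the population-level search and the single-solution refinement reinforce each other.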
Microcellular propagation prediction model based on an improved ray tracing algorithm.
Liu, Z-Y; Guo, L-X; Fan, T-Q
2013-11-01
Two-dimensional (2D) and two-and-one-half-dimensional (2.5D) ray tracing (RT) algorithms using the uniform theory of diffraction and geometrical optics are widely used for channel prediction in urban microcellular environments because of their high efficiency and reliable prediction accuracy. In this study, an improved RT algorithm based on the "orientation face set" concept and on an improved 2D polar sweep algorithm is proposed. The goal is to accelerate point-to-point prediction, thereby making RT prediction attractive and convenient. In addition, threshold control of each ray path and careful handling of the grid points visible to reflection and diffraction sources are adopted, improving the efficiency of coverage prediction over large areas. Measured results and computed predictions are also compared for urban scenarios. The results indicate that the proposed prediction model works well and is a useful tool for microcellular communication applications.
Directory of Open Access Journals (Sweden)
P. Kovacs
2010-04-01
Full Text Available The paper focuses on the automated design and optimization of electromagnetic band gap structures that suppress the propagation of surface waves. For the optimization, we use different global evolutionary algorithms: the genetic algorithm with single-point crossover (GAs) and with multi-point crossover (GAm), differential evolution (DE), and particle swarm optimization (PSO). The algorithms are mutually compared in terms of convergence velocity and accuracy. The developed technique is universal (applicable to any unit cell geometry). The method is based on dispersion diagram calculation in CST Microwave Studio (CST MWS) and optimization in Matlab. A design example of a mushroom structure with simultaneous electromagnetic band gap (EBG) and artificial magnetic conductor (AMC) properties in the required frequency band is presented.
Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems
2015-01-01
This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...
Design and Optimization of Low-thrust Orbit Transfers Using Q-law and Evolutionary Algorithms
Lee, Seungwon; vonAllmen, Paul; Fink, Wolfgang; Petropoulos, Anastassios; Terrile, Richard
2005-01-01
Future space missions will depend more on low-thrust propulsion (such as ion engines) thanks to its high specific impulse. Yet, the design of low-thrust trajectories is complex and challenging. Third-body perturbations often dominate the thrust, and a significant change to the orbit requires a long duration of thrust. In order to guide the early design phases, we have developed an efficient and efficacious method to obtain approximate propellant and flight-time requirements (i.e., the Pareto front) for orbit transfers. A search for the Pareto-optimal trajectories is done in two levels: optimal thrust angles and locations are determined by Q-law, while the Q-law is optimized with two evolutionary algorithms: a genetic algorithm and a simulated-annealing-related algorithm. The examples considered are several types of orbit transfers around the Earth and the asteroid Vesta.
Directory of Open Access Journals (Sweden)
Wei Yue
2015-01-01
Full Text Available The major issues for mean-variance-skewness models are the estimation errors that cause corner solutions and low diversity in the portfolio. In this paper, a multiobjective fuzzy portfolio selection model with transaction cost and liquidity is proposed to maintain the diversity of the portfolio. In addition, we design a multiobjective evolutionary algorithm based on decomposition of the objective space to maintain the diversity of the obtained solutions. The algorithm is used to obtain a set of Pareto-optimal portfolios with good diversity and convergence. To demonstrate the effectiveness of the proposed model and algorithm, the performance of the proposed algorithm is compared with the classic MOEA/D and NSGA-II through numerical examples based on data from the Shanghai Stock Exchange. Simulation results show that our algorithm obtains better diversity and a more evenly distributed Pareto front than the other two algorithms, and that the proposed model maintains the diversity of the portfolio quite well. The purpose of this paper is to deal with portfolio problems in the weighted possibilistic mean-variance-skewness (MVS) and possibilistic mean-variance-skewness-entropy (MVS-E) frameworks with transaction cost and liquidity, and to provide investors with a range of Pareto-optimal investment strategies that are as diversified as possible, rather than a single strategy at a time.
Directory of Open Access Journals (Sweden)
Steve O'Hagan
Full Text Available Comparatively few studies have directly addressed the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), or the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned) ones that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any 'prior knowledge' of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era, when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information).
Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining-useful-life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between the interpretations of the estimated remaining-useful-life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when using prognostics to make critical decisions.
Vertex shading of the three-dimensional model based on ray-tracing algorithm
Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan
2016-10-01
Ray tracing is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technology in many industries that work with three-dimensional (3D) structure, such as aerospace, games, and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented that colors and renders the vertices of the 3D model directly. Rendering quality depends on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency. Moreover, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is fine enough, effects equal to those of pixel shading will be obtained. In practice, the approach allows a compromise between efficiency and effectiveness.
Ahmed, Qasim Zeeshan
2015-02-01
In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol error rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, the SER performance of the proposed detectors is within 2 dB of the ML detector, and a significant SER improvement is observed over the MMSE detector. The computational complexity of the proposed detector is much lower than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with the number of relays.
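A minimal particle swarm optimizer of the kind used here can be sketched in a few lines; each particle is pulled toward its personal best and the swarm-wide best, which is what lets the search move off local minima of a non-linear surface. This is a generic PSO sketch demonstrated on a stand-in cost function, not the paper's SER detector; the coefficients are common textbook defaults:

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimizer for an arbitrary cost f."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# stand-in cost surface; a real SER surface would replace this objective
sphere = lambda x: sum(xi * xi for xi in x)
assert sphere(pso(sphere, dim=2)) < 1e-3
```

The linear-in-relays complexity claim follows from the structure of the cost evaluation, not from the swarm loop itself, which is fixed-size per iteration.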
Directory of Open Access Journals (Sweden)
Hui Lu
2014-01-01
Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high-quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then, ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases: the initial population and the crossover and mutation operators. Several experiments are performed to identify a good approach for hybridizing MOEA/D with chaos and to show the effectiveness of the improved IES. The Pareto fronts and the statistical results demonstrate that different chaotic maps in different phases have different effects on solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of a chaotic map and that of the problem is an essential factor in the application of chaotic maps. In addition, comparison experiments between CMOEA/D and variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
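The use of a chaotic map in the initialization phase can be sketched as follows, with the logistic map shown because it is the simplest of the ten (the paper found the circle and ICMIC maps most effective); the seed value x0 and the bounds here are arbitrary illustrations:

```python
def logistic_map_sequence(x0, n, r=4.0):
    """Chaotic logistic map x_{k+1} = r*x_k*(1-x_k); for r=4 the orbit
    densely fills (0, 1), giving a deterministic but irregular sequence."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_population(pop_size, dim, lo, hi, x0=0.31):
    """Initialize an EA population from one chaotic orbit instead of a PRNG."""
    seq = iter(logistic_map_sequence(x0, pop_size * dim))
    return [[lo + (hi - lo) * next(seq) for _ in range(dim)]
            for _ in range(pop_size)]

pop = chaotic_population(pop_size=10, dim=3, lo=-1.0, hi=1.0)
assert len(pop) == 10
assert all(-1.0 <= g <= 1.0 for ind in pop for g in ind)
```

Substituting the orbit of a different map (circle, ICMIC, tent, ...) only changes `logistic_map_sequence`, which is what makes comparing ten maps across three phases practical.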
Directory of Open Access Journals (Sweden)
Jiale Gao
2017-01-01
Full Text Available Adaptive operator selection (AOS) and adaptive parameter control are widely used to enhance the search power of many multiobjective evolutionary algorithms. This paper proposes a novel bandit-based adaptive selection strategy for the multiobjective evolutionary algorithm based on decomposition (MOEA/D), named latest stored information based adaptive selection (LSIAS). An improved upper confidence bound (UCB) method is adopted in the strategy, in which an operator usage rate and the abandonment of extreme fitness improvements are introduced to improve the performance of UCB. The strategy uses a sliding window to store recent valuable information about operators, such as factors, probabilities, and efficiency. Four commonly used DE operators are chosen for the AOS, and two kinds of auxiliary operator information are selected to improve the operators' search power. The operator information is updated with the help of LSIAS, and the resulting algorithmic combination is called MOEA/D-LSIAS. Compared to some well-known MOEA/D variants, LSIAS demonstrates superior robustness and fast convergence on various multiobjective optimization problems. The comparative experiments also demonstrate the improved search power of operators with different auxiliary information on different problems.
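The sliding-window bandit idea behind this kind of adaptive operator selection can be sketched as a UCB rule computed over only the most recent rewards, so stale evidence about an operator is forgotten. This is a simplified generic sketch, not the paper's LSIAS with its usage-rate and extreme-improvement refinements; the operator indices, rewards, and constants are hypothetical:

```python
import math
from collections import deque

class SlidingWindowUCB:
    """UCB over variation operators using only the last `window` rewards."""
    def __init__(self, n_ops, window=50, c=0.3):
        self.history = deque(maxlen=window)  # (operator, reward) pairs
        self.n_ops, self.c = n_ops, c

    def select(self):
        counts = [0] * self.n_ops
        sums = [0.0] * self.n_ops
        for op, r in self.history:
            counts[op] += 1
            sums[op] += r
        total = max(1, len(self.history))
        def ucb(op):
            if counts[op] == 0:
                return float("inf")  # force exploration of unseen operators
            return (sums[op] / counts[op]
                    + self.c * math.sqrt(math.log(total) / counts[op]))
        return max(range(self.n_ops), key=ucb)

    def update(self, op, reward):
        self.history.append((op, reward))

bandit = SlidingWindowUCB(n_ops=4)
for _ in range(100):
    op = bandit.select()
    bandit.update(op, 1.0 if op == 2 else 0.1)  # operator 2 gives best gains
assert bandit.select() == 2
```

In an MOEA/D loop, `reward` would be the fitness improvement a DE operator produced on a subproblem; the window keeps the selection responsive when the best operator changes during the run.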
[In Silico Drug Design Using an Evolutionary Algorithm and Compound Database].
Kawai, Kentaro; Takahashi, Yoshimasa
2016-01-01
Computational drug design plays an important role in the discovery of new drugs. Recently, we proposed an algorithm for designing new drug-like molecules utilizing the structure of a known active molecule. To design molecules, three types of fragments (ring, linker, and side-chain fragments) were defined as building blocks, and a fragment library was prepared from molecules listed in G protein-coupled receptor (GPCR)-SARfari database. An evolutionary algorithm which executes evolutionary operations, such as crossover, mutation, and selection, was implemented to evolve the molecules. As a case study, some GPCRs were selected for computational experiments in which we tried to design ligands from simple seed fragments using the Tanimoto coefficient as a fitness function. The results showed that the algorithm could be used successfully to design new molecules with structural similarity, scaffold variety, and chemical validity. In addition, a docking study revealed that these designed molecules also exhibited shape complementarity with the binding site of the target protein. Therefore, this is expected to become a powerful tool for designing new drug-like molecules in drug discovery projects.
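The fitness function named here, the Tanimoto coefficient over binary molecular fingerprints, is simple to state: the number of shared on-bits divided by the number of on-bits in either fingerprint. In this sketch fingerprints are represented as sets of on-bit indices, and the bit values are invented for illustration:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints given as sets
    of on-bit indices: |A ∩ B| / |A ∪ B|."""
    if not fp_a and not fp_b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(fp_a & fp_b) / len(fp_a | fp_b)

seed_fp  = {1, 4, 7, 9}   # hypothetical fingerprint of the known active
designed = {1, 4, 7, 12}  # fingerprint of a candidate designed molecule
assert abs(tanimoto(seed_fp, designed) - 3 / 5) < 1e-12  # 3 shared, 5 total
```

In the evolutionary loop, candidates assembled from ring, linker, and side-chain fragments are scored against the known active by this similarity, and the fittest survive selection.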
Directory of Open Access Journals (Sweden)
Min-Yin Liu
2017-05-01
Full Text Available Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study, for two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context, and we hypothesize that using the resultant Pareto fronts to derive evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms: three Fourier-based, one continuous wavelet transform (CWT) based, and two Hilbert-Huang transform (HHT) based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737.
A Short-Term Photovoltaic Power Prediction Model Based on an FOS-ELM Algorithm
Directory of Open Access Journals (Sweden)
Jidong Wang
2017-04-01
Full Text Available With the increasing proportion of photovoltaic (PV) power in power systems, the problem of its fluctuation and intermittency has become more prominent. To reduce the negative influence of PV power, we propose a short-term PV power prediction model based on the online sequential extreme learning machine with forgetting mechanism (FOS-ELM), which constantly replaces outdated data with new data. We use historical weather data and historical PV power data to predict the PV power in the next period of time. The simulation results show that this model has the advantages of a short training time and high accuracy. It can help the power dispatch department schedule generation plans and support spatial and temporal compensation and coordinated power control, which is important for the security, stability, and optimal operation of power systems.
A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm
Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing
2018-01-01
To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as research objects; the ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, completely in conformity with the theoretical ranking, which indicates that the EM-PCA method is reliable and effective. The method can guide unit state comparison among different units and support wind farm operational assessment.
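The Entropy-Method half of EM-PCA assigns each performance indicator a weight according to how much it actually varies across turbines: an indicator that is the same for every unit carries no discriminating information. A generic sketch follows, in which the data matrix is hypothetical and the PCA stage is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights for a (units x indicators) matrix of positive
    values: lower-entropy (more divergent) indicators get larger weights."""
    P = X / X.sum(axis=0)                 # column-normalize to proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -k * (P * logs).sum(axis=0)       # entropy of each indicator, in [0, 1]
    d = 1.0 - e                           # degree of divergence
    return d / d.sum()

# 4 turbines x 3 indicators (hypothetical positive performance values)
X = np.array([[0.9, 10.0, 5.0],
              [0.8, 10.0, 7.0],
              [0.7, 10.0, 6.0],
              [0.6, 10.0, 8.0]])
w = entropy_weights(X)
assert abs(w.sum() - 1.0) < 1e-9
assert w[1] < w[0] and w[1] < w[2]  # the constant indicator gets weight ~0
```

The resulting weights are then combined with the component loadings from PCA to score and rank the turbines.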
Pan, Xinyi; Li, Cheng; Ying, Kui; Weng, Dehe; Qin, Wen; Li, Kuncheng
2010-04-01
A model-based proton resonance frequency shift (PRFS) thermometry method was developed to significantly reduce the temperature quantification errors encountered in the conventional phase-mapping method and the spatiotemporal limitations of spectroscopic thermometry. Spectral data acquired using a multi-echo gradient echo (GRE) sequence are fit to a two-component signal model containing the temperature information, with fat used as the internal reference. The non-iterative extended Prony algorithm is used for signal fitting and frequency estimation. Monte Carlo simulations demonstrate the advantages of the method for optimal water-fat separation and temperature estimation accuracy. Phantom experiments demonstrate that the model-based method effectively reduces interscan motion effects and frequency disturbances due to main field drift. Thermometry results from an ex vivo goose liver experiment with high-intensity focused ultrasound (HIFU) heating are also presented, indicating the feasibility of the model-based method in real tissue.
SYNTHESIS OF DUAL RADIATION PATTERN OF RECTANGULAR PLANAR ARRAY ANTENNA USING EVOLUTIONARY ALGORITHM
Directory of Open Access Journals (Sweden)
Debasis Mandal
2015-09-01
Full Text Available A pattern synthesis method based on an evolutionary algorithm is presented to generate a dual radiation pattern from a planar array of isotropic antennas. The desired patterns are obtained by finding an optimal set of element excitations. The flat-top and pencil beams share a common optimal amplitude distribution among the array elements. The flat-top beam is generated by replacing the zero phases with the optimal phases among the elements. 4-bit discrete amplitudes and 5-bit discrete phases are used to simplify the design of the feed network. Results clearly show the effectiveness of the proposed method.
Design of wavelength selective concentrator for micro PV/TPV systems using evolutionary algorithm.
Yamada, Noboru; Ijiro, Toshikazu
2011-07-04
This paper describes the results of exploring photonic structures that behave as wavelength selective concentrators (WSCs) of solar/thermal radiation. An evolutionary algorithm was combined with the finite-difference time-domain method (EA-FDTD) to determine the optimum photonic structure that can concentrate a designated wavelength range of beam solar radiation and diffusive thermal radiation in such a manner that the range matches the photosensitivity of micro photovoltaic and thermophotovoltaic cells. Our EA-FDTD method successfully generated a photonic structure capable of performing wavelength selective concentration close to the theoretical limit. Our WSC design concept can be successfully extended to three-dimensional structures to further enhance efficiency.
NodIO, a JavaScript framework for volunteer-based evolutionary algorithms : first results
Merelo, Juan-J.; García-Valdez, Mario; Castillo, Pedro A.; García-Sánchez, Pablo; Cuevas, P. de las; Rico, Nuria
2016-01-01
JavaScript is an interpreted language mainly known for its inclusion in web browsers, which makes them a container for rich Internet-based applications. This has long inspired its use as a tool for evolutionary algorithms, mainly in browser-based volunteer computing environments. Several libraries have also been published and are in use. However, recent years have seen a resurgence of interest in the language, which has become one of the most popular and thus spawned the improvem...
Directory of Open Access Journals (Sweden)
Jingjing Ma
2014-01-01
Full Text Available Community structure is one of the most important properties of social networks. In dynamic networks, two conflicting criteria need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm that simultaneously optimizes these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of the multiobjective evolutionary algorithm based on decomposition to simultaneously optimize modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and the temporal cost, respectively. A local search strategy incorporating problem-specific knowledge is added to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm not only finds community structure and captures community evolution more accurately, but is also more stable than the two compared algorithms.
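The first of the two objectives, Newman modularity, can be computed directly from an edge list: the fraction of edges that fall inside communities minus the fraction expected under random rewiring. A minimal sketch (the NMI-based temporal cost is omitted, and the toy graph is illustrative):

```python
def modularity(edges, communities):
    """Newman modularity Q of an undirected graph given as an edge list
    and a partition given as a list of vertex sets."""
    m = len(edges)
    comm, deg = {}, {}
    for c, members in enumerate(communities):
        for v in members:
            comm[v] = c
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # observed fraction of intra-community edges
    q = sum(1.0 / m for u, v in edges if comm[u] == comm[v])
    # minus the expectation under the configuration (random-rewiring) model
    for members in communities:
        dc = sum(deg[v] for v in members)
        q -= (dc / (2.0 * m)) ** 2
    return q

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])
assert abs(q - (6 / 7 - 0.5)) < 1e-12  # 6 of 7 edges intra, minus 2*(0.5)^2
```

In the proposed algorithm, each subproblem of the decomposition scalarizes this snapshot-quality term together with the temporal-cost term for one weight vector.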
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics; for example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of the solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. Besides, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
Jafari, Mohieddin; Mirzaie, Mehdi; Sadeghi, Mehdi
2015-10-05
In the field of network science, exploring principal and crucial modules or communities is critical for deducing the relationships and organization of complex networks, and thus enables further study of biological functions in network biology. As the clustering algorithms currently employed to find modules have innate uncertainties, external and internal validations are necessary. Sequence and network structure alignment has been used to define the Interlog Protein Network (IPN), an evolutionarily conserved network with communal nodes and fewer false-positive links. In the current study, the IPN is employed as an evolution-based benchmark for validating module-finding methods. The clustering results of five algorithms, namely Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Cartographic Representation (CR), Laplacian Dynamics (LD) and a Genetic Algorithm for finding communities in Protein-Protein Interaction networks (GAPPI), are assessed with the IPN in four distinct Protein-Protein Interaction Networks (PPINs). MCL proves to be the more accurate algorithm under this evolutionary benchmarking approach. Also, the biological relevance of proteins in the IPN modules generated by MCL is compatible with standard biological databases such as Gene Ontology, KEGG and Reactome. In this study, the IPN shows its potential for validating clustering algorithms, owing to its biological logic and straightforward implementation.
Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.
2017-12-01
Supercritical helium loops at 4.2 K are the baseline cooling strategy for tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range along their entire length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy the temperature constraints in different ways (by adjusting the bath temperature and the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K requires at least 220 W of electrical power). To find the optimal operating conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be shown that changes in the magnet temperature operating point (e.g., after a change in the plasma configuration) entail large changes in the optimal operating point of the cryodistribution. Recommendations will be made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Program 2014-2018 under Grant 633053.
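The structure of such a constrained operating-point search can be illustrated with a deliberately simplified sketch. The thermal model, the pumping-cost term, and all numerical constants below are hypothetical stand-ins (the paper uses a Simcryogenics loop model); only the pattern is kept: minimize electrical power subject to a magnet-temperature constraint, here by brute-force grid search over bath temperature and mass flow.

```python
# Hypothetical toy model: grid search for the operating point that meets a
# magnet-temperature constraint at minimum electrical power.

def magnet_temp(t_bath, m_dot):
    # toy thermal model (illustrative only): the heat load warms the flow
    Q_LOAD = 10.0          # W deposited in the magnet (assumed)
    CP = 5200.0            # J/(kg K), helium specific heat (approximate)
    return t_bath + Q_LOAD / (CP * m_dot)

def electrical_power(t_bath, m_dot):
    # toy cost: ~220 W electrical per W at 4.2 K; pumping cost grows with flow
    Q_PUMP = 50.0 * m_dot ** 2   # hypothetical circulator dissipation, W
    return 220.0 * (10.0 + Q_PUMP) * (4.2 / t_bath)

def optimize(t_limit=4.6):
    best = None
    for i in range(20, 45):                 # bath temperature 2.0..4.4 K
        t_bath = i / 10.0
        for j in range(1, 200):             # mass flow 0.005..0.995 kg/s
            m_dot = j / 200.0
            if magnet_temp(t_bath, m_dot) <= t_limit:   # temperature constraint
                p = electrical_power(t_bath, m_dot)
                if best is None or p < best[0]:
                    best = (p, t_bath, m_dot)
    return best

p, t_bath, m_dot = optimize()
```

The sketch reproduces the paper's central observation in miniature: several (t_bath, m_dot) pairs satisfy the constraint, but their electrical costs differ widely, so picking the feasible point is not enough.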
DEFF Research Database (Denmark)
Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard
2015-01-01
Non-trivial real-world decision-making processes usually involve multiple parties having potentially conflicting interests over a set of issues. State-of-the-art multi-objective evolutionary algorithms (MOEA) are well known to solve this class of complex real-world problems. In this paper, we...... compare the performance of state-of-the-art multi-objective evolutionary algorithms to solve a non-linear multi-objective multi-issue optimisation problem found in Greenhouse climate control. The chosen algorithms in the study include NSGAII, eNSGAII, eMOEA, PAES, PESAII and SPEAII. The performance...
Synthesis of Steered Flat-top Beam Pattern Using Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
D. Mandal
2016-12-01
Full Text Available In this paper, a pattern synthesis method based on an evolutionary algorithm is presented. A flat-top beam pattern has been generated from a concentric ring array of isotropic elements by finding the optimum set of element amplitudes and phases using the Differential Evolution algorithm. The pattern is generated in three predefined azimuth planes instead of a single phi plane, and is also verified over a range of azimuth planes for the same optimum excitations. The main beam is steered to an elevation angle of 30 degrees with low peak SLL and ripple. The dynamic range ratio (DRR) is also improved by eliminating the weakly excited array elements, which simplifies the design of the feed networks.
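The Differential Evolution operator at the heart of such a synthesis (DE/rand/1 mutation with binomial crossover and greedy selection) can be sketched compactly. The objective below is a toy surrogate, driving eight element amplitudes toward a uniform taper, not an actual array-factor computation; all parameters (population size, F, CR) are typical textbook values, not the paper's.

```python
import random

def differential_evolution(cost, dim, bounds=(0.0, 1.0), np_=20, f=0.5, cr=0.9, gens=100, seed=3):
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([k for k in range(np_) if k != i], 3)
            j_rand = random.randrange(dim)        # guarantees one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < cr or j == j_rand:      # binomial crossover
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])  # DE/rand/1 mutation
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][j])
            tc = cost(trial)
            if tc <= costs[i]:                    # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = min(range(np_), key=lambda k: costs[k])
    return pop[best], costs[best]

# toy surrogate objective: drive 8 element amplitudes toward a flat (uniform) taper
target = [0.5] * 8
x, c = differential_evolution(lambda v: sum((vi - t) ** 2 for vi, t in zip(v, target)), dim=8)
```

In a real synthesis run the cost function would instead evaluate the array factor and penalize SLL, ripple, and DRR violations.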
Co-Evolutionary Algorithm for Motion Planning of Two Industrial Robots with Overlapping Workspaces
Directory of Open Access Journals (Sweden)
Petar Curkovic
2013-01-01
Full Text Available A high level of autonomy is a prerequisite for achieving robotic presence in a broad spectrum of work environments. If there is more than one robot in a given environment and the workspaces of the robots are shared, then the robots present dynamic obstacles to each other, which is a potentially dangerous situation. This paper deals with the problem of motion planning for two six-degrees-of-freedom (DOF) industrial robots whose workspaces overlap. The planning is based on a novel hall-of-fame, Pareto-based co-evolutionary algorithm. The modification of the algorithm is directed towards speeding up co-evolution, to achieve real-time implementation in an industrial robotic system composed of two FANUC LrMate 200iC robots. The results of the simulation and implementation show the great potential of the method in terms of convergence, robustness and time.
Efficient hybrid evolutionary algorithm for optimization of a strip coiling process
Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik
2015-04-01
This article proposes an efficient metaheuristic based on the hybridization of teaching-learning-based optimization and differential evolution to improve the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into teaching-learning-based optimization, with a Latin hypercube sampling technique for the generation of an initial population. The objective function was introduced to reduce the axial inhomogeneity of the stress distribution and the maximum compressive stress, calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during the strip coiling process. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. The comparative studies show that the proposed hybrid algorithm outperformed the other EAs in terms of convergence rate and consistency. It was found that the proposed hybrid approach was powerful for process optimization, especially for a large-scale design problem.
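The hybridization pattern can be sketched as a TLBO teacher phase followed by a DE/rand/1 move in place of the learner phase. This is one plausible way to combine the two operators, not necessarily the paper's exact scheme, and the sphere objective and parameter values are illustrative only.

```python
import random

def tlbo_de(cost, dim=5, pop_size=20, gens=60, f=0.5, seed=7):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        # teacher phase (TLBO): move toward the best, away from the mean
        best = pop[min(range(pop_size), key=lambda k: costs[k])]
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i in range(pop_size):
            tf = random.choice((1, 2))     # teaching factor
            cand = [pop[i][j] + random.random() * (best[j] - tf * mean[j]) for j in range(dim)]
            cc = cost(cand)
            if cc < costs[i]:
                pop[i], costs[i] = cand, cc
        # learner phase replaced by a DE/rand/1 move (the hybridization step)
        for i in range(pop_size):
            a, b, c = random.sample([k for k in range(pop_size) if k != i], 3)
            cand = [pop[a][j] + f * (pop[b][j] - pop[c][j]) for j in range(dim)]
            cc = cost(cand)
            if cc < costs[i]:
                pop[i], costs[i] = cand, cc
    k = min(range(pop_size), key=lambda k: costs[k])
    return pop[k], costs[k]

x, c = tlbo_de(lambda v: sum(t * t for t in v))   # toy sphere objective
```

The greedy accept-if-better rule in both phases gives the elitist behavior that underlies the reported convergence consistency.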
Kuang, Shang-qi; Gong, Xue-peng; Yang, Hai-gui
2017-11-01
In order to refine the layered structure of extreme ultraviolet multilayers, a multi-objective evolutionary algorithm which is post-hybridized with the standard Levenberg-Marquardt algorithm is applied to analyze the grazing incidence X-ray reflectivity (GIXR) and the normal incidence extreme ultraviolet reflectance (EUVR). In this procedure, the GIXR data and EUVR data are simultaneously fitted as two objectives, and the high sensitivities of these two sets of data to layer thicknesses and densities are combined. This set of mathematical procedures is conducive to obtain a more correct model of periodic multilayers which can simultaneously describe both GIXR and EUVR measurements. As a result, the layered structure of Mo/Si multilayers with a period of about 7.0 nm is obtained.
Wang, Chun; Ji, Zhicheng; Wang, Yan
2017-07-01
In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) was studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) was proposed to obtain a set of Pareto-optimal solutions. First, two novel crowding operators, defined in the decision space and the objective space, were proposed; they are used in mating selection and environmental selection, respectively. Then, two well-designed neighborhood structures were used in the local search, which take the problem characteristics into account and maintain fast convergence. Finally, an extensive comparison was carried out with state-of-the-art methods specifically proposed for solving MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than the other algorithms in solving MOFJSP.
Directory of Open Access Journals (Sweden)
M. Frutos
2013-01-01
Full Text Available Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions on the approximate Pareto frontier.
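The Hypervolume index used in such comparisons has a particularly simple form for two objectives: sort the front by the first objective and sum the rectangles dominated between consecutive points and the reference point. The sketch below assumes minimization of both objectives; the sample front is made up for illustration.

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area dominated) of a bi-objective minimisation front
    relative to a reference point; larger is better."""
    # keep only points that dominate the reference point, sorted by f1
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # → 11.0
```

Dominated points contribute nothing, so adding e.g. (3.0, 3.0) to this front leaves the value unchanged, which is why Hypervolume rewards both convergence and spread.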
Parameter identification of ZnO surge arrester models based on genetic algorithms
Energy Technology Data Exchange (ETDEWEB)
Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)
2008-07-15
The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed in their behaviour when subjected to fast-front impulse currents. The difficulty with these models resides essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible series of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with the experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)
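The general shape of GA-based parameter identification can be sketched as follows: encode candidate parameter vectors, score them by squared error against measured points, and evolve them with selection, crossover and mutation. The toy model v = a * i^b, the synthetic "measurements", and all GA settings below are illustrative stand-ins, not the arrester model or data from the paper.

```python
import random

def ga_fit(data, gens=80, pop_size=30, seed=5):
    """Fit parameters (a, b) of a toy model v = a * i**b to measured points
    by minimising squared error with a simple real-coded GA."""
    random.seed(seed)
    def err(p):
        a, b = p
        return sum((a * i ** b - v) ** 2 for i, v in data)
    pop = [(random.uniform(0, 5), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=err)
        elite = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            (a1, b1), (a2, b2) = random.sample(elite, 2)
            a = (a1 + a2) / 2 + random.gauss(0, 0.05)   # blend crossover + mutation
            b = (b1 + b2) / 2 + random.gauss(0, 0.01)
            children.append((a, b))
        pop = elite + children
    best = min(pop, key=err)
    return best, err(best)

# synthetic "measurements" generated from known parameters a=2.0, b=0.3
data = [(i / 10.0, 2.0 * (i / 10.0) ** 0.3) for i in range(1, 11)]
best, e = ga_fit(data)
```

Because the fitness is the residual against measured waveforms, the same loop applies unchanged when v = a * i^b is replaced by a full arrester model evaluated in a circuit simulator.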
Xu, Lin
2014-10-01
Sudden cardiac arrest is one of the critical clinical syndromes in emergency situations. Cardiopulmonary resuscitation (CPR) is a necessary treatment for patients with sudden cardiac arrest. In order to effectively simulate the hemodynamic effects on humans under AEI-CPR, which is active compression-decompression CPR coupled with enhanced external counter-pulsation and an inspiratory impedance threshold valve, and to study the physiological parameters of each part of the lower limbs in more detail, the CPR simulation model established by Babbs was refined. The lower-limb part was divided into iliac, thigh and calf segments, described by 15 physiological parameters. These 15 physiological parameters were then optimized using a genetic algorithm, and satisfactory simulation results were finally obtained.
Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France
2011-09-20
In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in MONOLIX. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except under high variability. For a rich design, the type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample sizes and highly variable drugs. Copyright © 2011 John Wiley & Sons, Ltd.
Self-organization of nodes in mobile ad hoc networks using evolutionary games and genetic algorithms
Directory of Open Access Journals (Sweden)
Janusz Kusyk
2011-07-01
Full Text Available In this paper, we present a distributed and scalable evolutionary game played by autonomous mobile ad hoc network (MANET) nodes to place themselves uniformly over a dynamically changing environment without a centralized controller. A node spreading evolutionary game, called NSEG, runs at each mobile node and autonomously makes movement decisions based on localized data, while the movement probabilities of possible next locations are assigned by a force-based genetic algorithm (FGA). Because the FGA takes into account only the current positions of the neighboring nodes, our NSEG, combining the FGA with game theory, can find better locations. In NSEG, autonomous node movement decisions are based on the outcome of the locally run FGA and the spatial game set up between a node and the nodes in its neighborhood. NSEG is a good candidate for the node spreading class of applications used in both military tasks and commercial applications. We present a formal analysis of our NSEG to prove that an evolutionarily stable state is its convergence point. Simulation experiments demonstrate that NSEG performs well with respect to network area coverage, uniform distribution of mobile nodes, and convergence speed.
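The force-based intuition behind node spreading can be illustrated with a toy update rule: each node is repelled by neighbors closer than a threshold, and repeated local updates spread an initially clustered deployment over the area. This is only a sketch of the force idea, not the FGA or the game-theoretic layer of NSEG, and every constant below is made up for illustration.

```python
import math
import random

def spread_step(nodes, area=10.0, k=1.5):
    """One force-based update: each node moves away from close neighbours,
    which over repeated steps spreads the nodes over the area (toy version
    of force-driven placement; parameters are illustrative)."""
    new = []
    for i, (x, y) in enumerate(nodes):
        fx = fy = 0.0
        for j, (ox, oy) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy) or 1e-9
            if d < k:                          # repel only close neighbours
                fx += (dx / d) * (k - d)
                fy += (dy / d) * (k - d)
        nx = min(area, max(0.0, x + 0.5 * fx)) # clamp to the deployment area
        ny = min(area, max(0.0, y + 0.5 * fy))
        new.append((nx, ny))
    return new

random.seed(2)
nodes = [(random.uniform(4, 6), random.uniform(4, 6)) for _ in range(16)]
for _ in range(50):
    nodes = spread_step(nodes)
```

In NSEG this purely reactive rule is replaced by probabilistic moves scored by the FGA and filtered through the local spatial game.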
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source Labview-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for Evolutionary Algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and 1W1B beamline of the Beijing Synchrotron Radiation Facility.
Rabotyagov, Sergey; Campbell, Todd; Valcu, Adriana; Gassman, Philip; Jha, Manoj; Schilling, Keith; Wolter, Calvin; Kling, Catherine
2012-12-09
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economic methods of finding the lowest-cost solution in the watershed context (e.g., (5,12,20)) assume that off-site impacts can be accurately described as a proportion of the on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation, but requires the development of a simulation-optimization framework where the model becomes an integral part of the optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods(3,4,9,10,13-15,17-19,22,23,25). In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model(7) with a
Directory of Open Access Journals (Sweden)
Devidas G. Jadhav
2014-01-01
Full Text Available The Swine Influenza Model Based Optimization (SIMBO) family is a newly introduced fast optimization technique with adaptive features in its mechanism. In this paper, the authors modify SIMBO to make the algorithm even quicker. Because the SIMBO family is fast, it is a good option for searching a basin, and it is therefore utilized for the local search in developing the proposed memetic algorithms (MAs). The MAs are faster than SIMBO while balancing exploration and exploitation, trading a small amount of convergence velocity for comprehensively optimizing a standard numerical benchmark test bed containing functions with different properties. The utilization of SIMBO in the local search inherently exploits the better characteristics of the algorithms employed for the hybridization. The developed MA is applied to eliminate power line interference (PLI) from the biomedical signal ECG using an adaptive filter whose weights are optimized by the MA. The reference signal required for the adaptive filter is obtained by selective reconstruction of the ECG from the intrinsic mode functions (IMFs) of empirical mode decomposition (EMD).
Directory of Open Access Journals (Sweden)
Yong Tian
2014-12-01
Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EV. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive slide mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
A standard deviation selection in evolutionary algorithm for grouper fish feed formulation
Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul
2016-10-01
Malaysia is one of the major fishery-producing countries due to its equatorial location. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand. Therefore, there is a need to farm grouper fish to cater to the market demand. In order to farm grouper fish, prior knowledge of the proper nutrients is needed because no exact data are available. Therefore, in this study, primary and secondary data are collected despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. This study would thus unlock frontiers for extensive research on grouper fish feed formulation. The results show that standard deviation selection in an evolutionary algorithm is applicable: feasible, low-fitness solutions can be obtained quickly, and these can be used further to minimize the cost of farming grouper fish.
Energy Technology Data Exchange (ETDEWEB)
David J. Muth Jr.
2006-09-01
This paper examines the use of graph-based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate of the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion of this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.
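The defining feature of a GBEA, mating restricted to graph neighbors, can be shown in a minimal sketch. Here the imposed topology is a simple ring, each mating event pairs adjacent vertices, and the child replaces the worse parent only if it improves on it, so good genes spread slowly around the ring and diversity persists. The sphere objective and all settings are illustrative, not the cookstove model.

```python
import random

def gbea(cost, dim=4, n=24, gens=500, seed=11):
    """Graph-based EA sketch: individuals sit on a ring graph and may only
    mate with adjacent vertices, which slows takeover and preserves diversity."""
    random.seed(seed)
    pop = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    costs = [cost(x) for x in pop]
    for _ in range(gens):
        i = random.randrange(n)
        j = random.choice(((i - 1) % n, (i + 1) % n))  # ring neighbourhood only
        child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(pop[i], pop[j])]
        cc = cost(child)
        # replace the worse endpoint if the child improves on it (local elitism)
        w = i if costs[i] > costs[j] else j
        if cc < costs[w]:
            pop[w], costs[w] = child, cc
    return pop, costs

pop, costs = gbea(lambda v: sum(t * t for t in v))
```

Swapping the ring for a denser graph (or a complete graph) recovers a standard panmictic EA; sparser graphs slow the takeover of the best solution, which is exactly the diversity lever the paper exploits.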
Energy efficient model based algorithm for control of building HVAC systems.
Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N
2015-11-01
Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real-time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With the objective costs being thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm), an explicit MPC design for this building model is executed based on a state-space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real-time data during closed-loop control. The control strategies are ported to a PIC32mx series microcontroller platform. The building model is implemented in MATLAB, and hardware-in-the-loop (HIL) testing of the strategies is executed over a USB port. Results indicate that, compared to traditional proportional-integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.
Program summary
Program title: XTALOPT
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely
DEFF Research Database (Denmark)
Tian, Yihui; Govindan, Kannan; Zhu, Qinghua
2014-01-01
In this study, a system dynamics (SD) model is developed to guide subsidy policies to promote the diffusion of green supply chain management (GSCM) in China. The relationships of stakeholders such as government, enterprises and consumers are analyzed through evolutionary game theory. Finally......, the GSCM diffusion process is simulated by the model with a case study of the Chinese automotive manufacturing industry. The results show that subsidies for manufacturers are better than those for consumers at promoting GSCM diffusion, and that environmental awareness is another key influential factor....
Finding an Optimal Location of Line Facility using Evolutionary Algorithm and Integer Program
Taji, Takenao; Tanigawa, Shin-Ichi; Kamiyama, Naoyuki; Katoh, Naoki; Takizawa, Atsushi
In this paper, we consider the problem of determining an optimal location of a line facility, such as a railway system, in a city. A line facility is modeled as a spanning tree embedded in the plane whose vertices represent stations and whose edges represent the rails connecting two stations, and people can travel not only on foot but also, more quickly, by using the line facility. Suppose there is a finite number of towns in the city, and people live only in these towns. Then, our problem is to find a location of the stations as well as a connection of the stations such that the sum of travel times between all pairs of towns is minimized. Tsukamoto, Katoh and Takizawa proposed a heuristic algorithm for the problem which consists of two phases, as follows. In the first phase, fixing the positions of the stations, it determines the topology of the line facility. The second phase optimizes the positions of the stations while the topology is fixed. The algorithm alternately executes these two phases until a converged solution is obtained. Tsukamoto et al. determined the topology of the line facility by computing a minimum spanning tree (MST) in the first phase. In this paper, we propose two methods for determining the topology of the line facility so that the sum of travel times is minimized, hoping to improve the previous algorithm. The first proposed method heuristically determines the topology using an evolutionary algorithm (EA). In the second method, we reduce our problem to the minimum communication spanning tree (MCST) problem by making a further assumption, and solve it by formulating the problem as an integer program. We show the effectiveness of our proposed algorithms through numerical experiments.
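The MST-based first phase of the earlier algorithm can be sketched with Prim's algorithm on the complete Euclidean graph over fixed station positions. The station coordinates below are a made-up example; the sketch covers only the topology step, not the EA or the MCST integer program proposed in the paper.

```python
import math

def prim_mst(points):
    """Prim's algorithm on the complete Euclidean graph over station
    positions; returns the MST as a list of edges (parent, child)."""
    n = len(points)
    in_tree = [False] * n
    in_tree[0] = True
    # best[k] = (cheapest known distance from the tree to k, its tree endpoint)
    best = [(math.hypot(points[0][0] - x, points[0][1] - y), 0) for x, y in points]
    edges = []
    for _ in range(n - 1):
        # pick the cheapest edge that grows the tree
        j = min((k for k in range(n) if not in_tree[k]), key=lambda k: best[k][0])
        edges.append((best[j][1], j))
        in_tree[j] = True
        for k in range(n):
            if not in_tree[k]:
                d = math.hypot(points[j][0] - points[k][0], points[j][1] - points[k][1])
                if d < best[k][0]:
                    best[k] = (d, j)
    return edges

stations = [(0, 0), (1, 0), (1, 1), (3, 0)]
print(prim_mst(stations))   # → [(0, 1), (1, 2), (1, 3)]
```

Note that the MST minimizes total rail length, not the sum of pairwise travel times, which is precisely the mismatch that motivates replacing this step with an EA or an MCST formulation.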
An evolutionary algorithm for the segmentation of muscles and bones of the lower limb.
López, Marco A.; Braidot, A.; Sattler, Aníbal; Schira, Claudia; Uriburu, E.
2016-04-01
In the field of medical image segmentation, muscle segmentation is a problem that has not yet been fully resolved. This is due to the fact that the basic assumption of image segmentation, which asserts that a visual distinction should exist between the different structures to be identified, is violated: as the tissue composition of two different muscles is the same, it becomes extremely difficult to distinguish one from another when they are close together. We have developed an evolutionary algorithm which selects the set and the sequence of morphological operators that best segment muscles and bones from an MRI image. The achieved results show that the developed algorithm presents average sensitivity values close to 75% in the segmentation of the different processed muscles and bones. It also presents average specificity values close to 93% for the same structures. Furthermore, the algorithm can identify muscles that are closely located along the path from their origin point to their insertions, with very low error values (below 7%).
Heimann, Tobias; Münzing, Sascha; Meinzer, Hans-Peter; Wolf, Ivo
2007-01-01
We present a novel method for the segmentation of volumetric images, which is especially suitable for highly variable soft tissue structures. The core of the algorithm is a statistical shape model (SSM) of the structure of interest. A global search with an evolutionary algorithm is employed to detect suitable initial parameters for the model, which are subsequently optimized by a local search similar to the Active Shape mechanism. After that, a deformable mesh with the same topology as the SSM is used for the final segmentation: while external forces strive to maximize the posterior probability of the mesh given the local appearance around the boundary, internal forces governed by tension and rigidity terms keep the shape similar to the underlying SSM. To prevent outliers and increase robustness, we determine the applied external forces by an algorithm for optimal surface detection with smoothness constraints. The approach is evaluated on 54 CT images of the liver and reaches an average surface distance of 1.6 +/- 0.5 mm in comparison to manual reference segmentations.
Directory of Open Access Journals (Sweden)
José-Fernando Camacho-Vallejo
2014-01-01
Full Text Available This research highlights the use of game theory to solve the classical problem of the uncapacitated facility location optimization model with customer order preferences through a bilevel approach. The bilevel model provided herein consists of the classical facility location problem and an optimization of the customer preferences, which are the upper and lower level problems, respectively. Also, two reformulations of the bilevel model are presented, reducing it into a mixed-integer single-level problem. An evolutionary algorithm based on the equilibrium in a Stackelberg’s game is proposed to solve the bilevel model. Numerical experimentation is performed in this study and the results are compared to benchmarks from the existing literature on the subject in order to emphasize the benefits of the proposed approach in terms of solution quality and estimation time.
Novel stable structure of Li3PS4 predicted by evolutionary algorithm under high-pressure
Directory of Open Access Journals (Sweden)
S. Iikubo
2018-01-01
Full Text Available By combining theoretical predictions and in-situ X-ray diffraction under high pressure, we found a novel stable crystal structure of Li3PS4 under high pressure. At ambient pressure, Li3PS4 shows successive structural transitions from γ-type to β-type and from β-type to α-type with increasing temperature, as is well established. In this study, an evolutionary algorithm successfully predicted the γ-type crystal structure at ambient pressure and further predicted a possible stable δ-type crystal structure under high pressure. The stability of the obtained structures is examined in terms of both static and dynamic stability by first-principles calculations. In-situ X-ray diffraction using synchrotron radiation revealed that the high-pressure phase is the predicted δ-Li3PS4 phase.
Evolutionary algorithm for analyzing higher degree research student recruitment and completion
Directory of Open Access Journals (Sweden)
Ruhul Sarker
2015-12-01
Full Text Available In this paper, we consider a decision problem arising from the higher degree research student recruitment process in a university environment. The problem is to recruit a number of research students by maximizing the sum of a performance index while satisfying a number of constraints, such as supervision capacity and resource limitations. The problem is dynamic in nature, as the number of eligible applicants, the supervision capacity, completion times, funding for scholarships, and other resources vary from period to period and are difficult to predict in advance. In this research, we have developed a mathematical model to represent this dynamic decision problem and adopted an evolutionary algorithm-based approach to solve it. We have demonstrated how the recruitment decision can be made with a defined objective and how the model can be used for long-run planning to improve higher degree research programs.
DEFF Research Database (Denmark)
Kulkarni, Nandkumar P.; Prasad, Neeli R.; Prasad, Ramjee
change dynamically. In this paper, the authors put forward an Evolutionary Mobility aware multi-objective hybrid Routing Protocol for heterogeneous wireless sensor networks (EMRP). EMRP uses two-level hierarchical clustering. EMRP selects the optimal path from source to sink using multiple metrics...... such as Average Energy consumption, Control Overhead, Reaction Time, LQI, and Hop Count. The authors study the influence of energy heterogeneity and mobility of sensor nodes on the performance of EMRP. The performance of EMRP is compared with the Simple Hybrid Routing Protocol (SHRP) and the Dynamic Multi-Objective Routing...... Algorithm (DyMORA) using metrics such as Average Residual Energy (ARE), Delay and Normalized Routing Load. EMRP improves AES by a factor of 4.93% as compared to SHRP and 5.15% as compared to DyMORA. EMRP has a 6% lower delay as compared with DyMORA....
An Extensible Component-Based Multi-Objective Evolutionary Algorithm Framework
DEFF Research Database (Denmark)
Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard
2017-01-01
The ability to easily modify the problem definition is currently missing in Multi-Objective Evolutionary Algorithms (MOEA). Existing MOEA frameworks do not support dynamic addition and extension of the problem formulation; they require a re-specification of the problem definition...... and recompilation of the source code implementing the problem specification. The presented Controleum framework is based on Dynamic Links and a component-based system to support dynamic reconfiguration of the problem formulation without any need for recompilation of source code. Four different experiments...... with different compositions of objectives from the horticulture domain are formulated based on a state-of-the-art micro-climate simulator, electricity prices and weather forecasts. The experimental results demonstrate that the Controleum framework supports dynamic reconfiguration of the problem formulation......
Energy Technology Data Exchange (ETDEWEB)
Fernandes, D.H.; Medeiros, A.R. [Subsea7, Niteroi, RJ (Brazil); Jacob, B.P.; Lima, B.S.L.P.; Albrecht, C.H. [Universidade Federal do Rio de Janeiro (COPPE/UFRJ), RJ (Brazil). Coordenacao de Programas de Pos-graduacao em Engenharia
2009-07-01
This work presents studies regarding the determination of optimal pipeline routes for offshore applications. The assembly of an objective function is presented; this function can later be coupled with an Evolutionary Algorithm to implement a computational tool for the automatic determination of the most advantageous pipeline route for a given scenario. This tool may reduce computational overheads, avoid mistakes in route interpretation, and minimize costs with respect to submarine pipeline design and installation. The following aspects can be considered in the assembly of the objective function: geophysical and geotechnical data obtained from bathymetry and sonography; the influence of the installation method; and the total pipeline length and number of free spans to be mitigated along the routes, as well as vessel time for both cases. Case studies are presented to illustrate the use of the proposed objective function, including a sensitivity analysis intended to identify the relative influence of selected parameters in the evaluation of different routes. (author)
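As a hedged illustration, the assembled objective might take the shape of a weighted sum over the aspects listed above (route length, free spans to mitigate, vessel time); the weights, units and attribute names below are assumptions for the sketch, not values from the paper:

```python
def route_cost(length_km, free_spans, vessel_days,
               w_length=1.0, w_span=50.0, w_vessel=20.0):
    """Weighted-sum objective an evolutionary algorithm would minimise over
    candidate pipeline routes. Each weight converts one route attribute
    (length, number of free spans, installation vessel time) into a common
    cost scale; a sensitivity analysis varies these weights."""
    return w_length * length_km + w_span * free_spans + w_vessel * vessel_days
```

A sensitivity analysis like the one described in the abstract would re-rank candidate routes under perturbed weights to see which attributes dominate the evaluation.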
WH-EA: An Evolutionary Algorithm for Wiener-Hammerstein System Identification
Directory of Open Access Journals (Sweden)
J. Zambrano
2018-01-01
Full Text Available Current methods to identify Wiener-Hammerstein systems using the Best Linear Approximation (BLA) involve at least two steps. First, the BLA is divided into the front and back linear dynamics of the Wiener-Hammerstein model. Second, a refitting procedure of all parameters is carried out to reduce modelling errors. In this paper, a novel approach to identify Wiener-Hammerstein systems in a single step is proposed. This approach is based on a customized evolutionary algorithm (WH-EA) able to look for the best BLA split while capturing the process static nonlinearity with high precision. Furthermore, to correct possible errors in the BLA estimation, the locations of poles and zeros are subtly modified within an adequate search space to allow fine-tuning of the model. The performance of the proposed approach is analysed using a demonstration example and a nonlinear system identification benchmark.
XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction
Avery, Patrick; Falls, Zackary; Zurek, Eva
2018-01-01
Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas previous versions of XTALOPT were published under the GNU Public License (GPL), the current version is made available under the 3-Clause BSD License, an open-source license recognized by the Open Source Initiative. Importantly, the new version can be executed via a command-line interface (i.e., it does not require the use of a graphical user interface). Moreover, the new version is written as a stand-alone program rather than as an extension to Avogadro.
Directory of Open Access Journals (Sweden)
Saraiva J. T.
2012-10-01
Full Text Available The basic objective of Transmission Expansion Planning (TEP) is to schedule a number of transmission projects along an extended planning horizon, minimizing the network construction and operational costs while satisfying the requirement of delivering power safely and reliably to load centres along the horizon. This principle is quite simple, but the complexity of the problem and its impact on society transform TEP into a challenging issue. This paper describes a new approach to solve the dynamic TEP problem, based on an improved discrete integer version of the Evolutionary Particle Swarm Optimization (EPSO) meta-heuristic algorithm. The paper includes sections describing in detail the enhanced EPSO approach and the mathematical formulation of the TEP problem, including the objective function and the constraints, as well as a section devoted to the application of the developed approach to this problem. Finally, the use of the developed approach is illustrated with a case study based on the IEEE 24-bus, 38-branch test system.
Optimization of constrained multiple-objective reliability problems using evolutionary algorithms
Energy Technology Data Exchange (ETDEWEB)
Salazar, Daniel [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain) and Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: danielsalazaraponte@gmail.com; Rocco, Claudio M. [Facultad de Ingenieria, Universidad Central Venezuela, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve; Galvan, Blas J. [Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Division de Computacion Evolutiva y Aplicaciones (CEANI), Universidad de Las Palmas de Gran Canaria, Islas Canarias (Spain)]. E-mail: bgalvan@step.es
2006-09-15
This paper illustrates the use of multi-objective optimization to solve three types of reliability optimization problems: to find the optimal number of redundant components, find the reliability of components, and determine both their redundancy and reliability. In general, these problems have been formulated as single objective mixed-integer non-linear programming problems with one or several constraints and solved by using mathematical programming techniques or special heuristics. In this work, these problems are reformulated as multiple-objective problems (MOP) and then solved by using a second-generation Multiple-Objective Evolutionary Algorithm (MOEA) that allows handling constraints. The MOEA used in this paper (NSGA-II) demonstrates the ability to identify a set of optimal solutions (Pareto front), which provides the Decision Maker with a complete picture of the optimal solution space. Finally, the advantages of both MOP and MOEA approaches are illustrated by solving four redundancy problems taken from the literature.
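At the core of any second-generation MOEA such as the NSGA-II used above is Pareto dominance over the reformulated objectives. A minimal, self-contained sketch for the two objectives of the redundancy problem (maximize system reliability, minimize cost); the example design points are invented for illustration:

```python
def dominates(a, b):
    """a dominates b when it is no worse in both objectives and strictly
    better in at least one. Points are (reliability to maximise, cost to
    minimise)."""
    not_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return not_worse and strictly_better

def pareto_front(points):
    """First non-dominated front: the set NSGA-II's sorting would rank 1.
    This is the 'complete picture of the optimal solution space' handed to
    the Decision Maker."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In NSGA-II this dominance check feeds the fast non-dominated sorting and crowding-distance selection; constraint handling extends `dominates` so that any feasible point dominates any infeasible one.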
MOEA/D-ACO: a multiobjective evolutionary algorithm using decomposition and ant colony.
Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto
2013-12-01
Combining ant colony optimization (ACO) and the multiobjective evolutionary algorithm (EA) based on decomposition (MOEA/D), this paper proposes a multiobjective EA, MOEA/D-ACO. Following other MOEA/D-like algorithms, MOEA/D-ACO decomposes a multiobjective optimization problem into a number of single-objective optimization problems. Each ant (i.e., agent) is responsible for solving one subproblem. All the ants are divided into a few groups, and each ant has several neighboring ants. An ant group maintains a pheromone matrix, and an individual ant has a heuristic information matrix. During the search, each ant also records the best solution found so far for its subproblem. To construct a new solution, an ant combines information from its group's pheromone matrix, its own heuristic information matrix, and its current solution. An ant checks the new solutions constructed by itself and its neighbors, and updates its current solution if it has found a better one in terms of its own objective. Extensive experiments have been conducted to study and compare MOEA/D-ACO with other algorithms on two sets of test problems. On the multiobjective 0-1 knapsack problem, MOEA/D-ACO outperforms the MOEA/D with conventional genetic operators and local search on all nine test instances. We also demonstrate that the heuristic information matrices in MOEA/D-ACO are crucial to its good performance on the knapsack problem. On the biobjective traveling salesman problem, MOEA/D-ACO performs much better than BicriterionAnt on all 12 test instances. We also evaluate the effects of grouping, neighborhood, and the location information of current solutions on the performance of MOEA/D-ACO. The work in this paper shows that the reactive search optimization scheme, i.e., the "learning while optimizing" principle, is effective in improving multiobjective optimization algorithms.
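The decomposition step described above can be sketched as follows: evenly spread weight vectors turn a bi-objective problem into scalar subproblems, each ant owns one subproblem, and its neighbors are the subproblems with the closest weight vectors. The weighted-sum scalarization below is one common choice in MOEA/D-style algorithms; the paper may use a different scalarizing function (e.g., Tchebycheff):

```python
def decompose(num_subproblems):
    """Evenly spread weight vectors for a bi-objective problem."""
    return [(i / (num_subproblems - 1), 1 - i / (num_subproblems - 1))
            for i in range(num_subproblems)]

def scalarize(objectives, weights):
    """Weighted-sum scalarisation: the single objective one ant minimises."""
    return sum(w * f for w, f in zip(weights, objectives))

def neighbors(index, weights, size):
    """Indices of the subproblems whose weight vectors are closest to the
    given subproblem's; these are the ants it exchanges solutions with."""
    order = sorted(range(len(weights)),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(weights[index], weights[j])))
    return order[:size]
```

Grouping in MOEA/D-ACO then partitions these subproblems into a few groups, each sharing one pheromone matrix.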
Directory of Open Access Journals (Sweden)
Faisal Shabbir
2017-12-01
Full Text Available Dynamic properties such as natural frequencies and mode shapes are directly affected by damage in structures. In this paper, changes in natural frequencies and mode shapes were used as the input to various objective functions for damage detection. Objective functions related to natural frequencies, mode shapes, modal flexibility and modal strain energy were used, and their performance was analyzed under varying noise conditions. Three beams were analyzed: two simulated beams with single and multiple damage scenarios, and one experimental beam. To do this, SAP 2000 (v14, Computers and Structures Inc., Berkeley, CA, United States, 2009) was linked with MATLAB (R2015, The MathWorks, Inc., Natick, MA, United States, 2015). The genetic algorithm (GA), an evolutionary algorithm (EA), was used to update the damaged structure for damage detection. Because the performance of the objective functions degrades under noisy conditions, a modified objective function based on the concept of regularization has been proposed, which can be used effectively in combination with an EA. All three beams were used to validate the proposed procedure. It was found that the modified objective function gives better results even in noisy and actual experimental conditions.
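A hedged sketch of a regularized objective for EA-based model updating: the natural-frequency residual between measured and model data is augmented with a Tikhonov-style penalty on the damage vector, which damps spurious damage indications caused by measurement noise. The penalty form and weight are assumptions for illustration; the paper's modified objective may differ:

```python
def modal_residual(freqs_measured, freqs_model):
    """Relative natural-frequency residual between measured data and the
    candidate (damaged) finite-element model."""
    return sum(((fm - fa) / fm) ** 2
               for fm, fa in zip(freqs_measured, freqs_model))

def regularized_objective(damage, freqs_measured, freqs_model, lam=0.1):
    """Residual plus a Tikhonov-style penalty on the element damage vector
    (assumed form). The GA minimises this over candidate damage vectors;
    the penalty discourages many small spurious damage indications."""
    penalty = sum(d ** 2 for d in damage)
    return modal_residual(freqs_measured, freqs_model) + lam * penalty
```

In practice `freqs_model` would come from re-analysing the FE model (SAP 2000 in the paper) for each candidate damage vector proposed by the GA, and `lam` trades off data fit against solution smoothness.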
An Evolutionary Algorithm of the Regional Collaborative Innovation Based on Complex Network
Directory of Open Access Journals (Sweden)
Kun Wang
2016-01-01
Full Text Available This paper proposes a new perspective for studying the evolution of regional collaborative innovation based on complex network theory. The two main conceptions of evolution, "graph with dynamic features" and "network evolution," are introduced first. We then illustrate the overall architecture and capability model of the regional collaborative innovation system, which contains several elements and participants; on this basis, the regional collaborative innovation system can be regarded as a complex network model. In the proposed evolutionary algorithm, we consider that each node in the network can only connect to fewer than a certain number of neighbors, with the extreme value determined by the node's importance. Through this derivation, we create a probability density function as the most important constraint and supporting condition of our simulation experiments. A case study is then performed to explore the network topology and validate the effectiveness of our algorithm. All raw datasets were obtained from the official website of the National Bureau of Statistics of China and other open sources. Finally, some meaningful recommendations are presented to policy makers, based on the experimental results and common conclusions about complex networks.
HYBRID EVOLUTIONARY ALGORITHMS FOR FREQUENCY AND VOLTAGE CONTROL IN POWER GENERATING SYSTEM
Directory of Open Access Journals (Sweden)
A. Soundarrajan
2010-10-01
Full Text Available A power generating system has the responsibility to ensure that adequate power is delivered to the load, both reliably and economically. Any electrical system must be maintained at the desired operating level, characterized by nominal frequency and voltage profile, but the ability of the power system to track the load is limited by physical and technical considerations. Hence, power system control is required to maintain a continuous balance between power generation and load demand. The quality of the power supply is affected by continuous and random changes in load during the operation of the power system. The Load Frequency Controller (LFC) and Automatic Voltage Regulator (AVR) play an important role in maintaining constant frequency and voltage in order to ensure the reliability of electric power. The fixed-gain PID controllers used for this application fail to perform under varying load conditions and hence provide poor dynamic characteristics, with large settling time, overshoot and oscillations. In this paper, Evolutionary Algorithms (EA) such as Enhanced Particle Swarm Optimization (EPSO), Multi-Objective Particle Swarm Optimization (MOPSO), and Stochastic Particle Swarm Optimization (SPSO) are proposed to overcome the premature convergence problem of standard PSO. These algorithms reduce transient oscillations and also increase computational efficiency. Simulation results demonstrate that the proposed controllers adapt appropriately to varying loads and hence provide better performance characteristics with respect to settling time, oscillations and overshoot.
Analysis of Ant Colony Optimization and Population-Based Evolutionary Algorithms on Dynamic Problems
DEFF Research Database (Denmark)
Lissovoi, Andrei
settings: λ-MMAS on Dynamic Shortest Path Problems. We investigate how increasing the number of ants simulated per iteration may help an ACO algorithm to track the optimum in a dynamic problem. It is shown that while a constant number of ants per vertex is sufficient to track some oscillations, there also...... exist more complex oscillations that cannot be tracked with a polynomial-size colony. MMAS and (μ+1) EA on Maze. We analyse the behaviour of a (μ+1) EA with genotype diversity on a dynamic fitness function Maze, extended to a finite-alphabet search space. We prove that the (μ+1) EA is able to track...... the dynamic optimum for finite alphabets up to size μ, while MMAS is able to do so for any finite alphabet size. Parallel Evolutionary Algorithms on Maze. We prove that while a (1+λ) EA is unable to track the optimum of the dynamic fitness function Maze for offspring population size up to λ = O(n^(1-ε...
Directory of Open Access Journals (Sweden)
Wiktor HUDY
2013-12-01
Full Text Available In this paper, the impact of the set of regulators and their types on the rotational speed characteristic of an induction motor was investigated. An evolutionary algorithm was used as the optimization tool. Results were verified using MATLAB/Simulink.
Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R
2017-02-15
Iterative reconstruction (IR) algorithms allow for a significant reduction in the radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate the signal-to-noise ratio (SNR). In a subgroup of 36 patients, the diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for the diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%), MBIR yielded a further reduction in noise (-59% compared with ASiR 100%). ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy for detecting CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, inferior noise reduction by ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection.
Choo, Ji Yung; Goo, Jin Mo; Lee, Chang Hyun; Park, Chang Min; Park, Sang Joon; Shim, Mi-Suk
2014-04-01
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %). Average wall thickness also differed among the algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm). Phantom analysis revealed that MBIR gave the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. • Computed tomography is increasingly used to provide objective measurements of intra-thoracic structures. • Iterative reconstruction algorithms can affect quantitative measurements of lung and airways. • Care should be taken in selecting reconstruction algorithms in longitudinal analysis. • Model-based iterative reconstruction seems to provide the most accurate airway measurements.
Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.
2016-04-01
Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently-established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
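The reduction criterion can be sketched directly from the error definition above: interpolate groundwater levels once with the full network and once with a reduced network, then take the 2-norm of the difference between the two maps. For self-containment the kriging step is stubbed out with simple inverse-distance weighting; the well data and grid are illustrative, not from the Mires basin:

```python
def interpolate(grid, wells):
    """Map each grid point to a level estimate (inverse-distance weighting
    used here as a stand-in for Ordinary Kriging with a Spartan variogram)."""
    surface = []
    for gx, gy in grid:
        wsum = vsum = 0.0
        for (x, y), level in wells:
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            if d2 == 0.0:                 # grid point sits on a well
                wsum, vsum = 1.0, level
                break
            wsum += 1.0 / d2
            vsum += level / d2
        surface.append(vsum / wsum)
    return surface

def reduction_error(grid, wells, keep):
    """2-norm of the difference between the full-network map and the map
    from the reduced network (wells whose index is in `keep`) -- the
    quantity the genetic algorithm minimises over candidate subsets."""
    full = interpolate(grid, wells)
    reduced = interpolate(grid, [w for i, w in enumerate(wells) if i in keep])
    return sum((a - b) ** 2 for a, b in zip(full, reduced)) ** 0.5
```

An integer GA would then search over `keep` subsets of a fixed size, scoring each by `reduction_error`.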
Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms.
Directory of Open Access Journals (Sweden)
Lucas Helms
Full Text Available Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
Directory of Open Access Journals (Sweden)
Yulong Ying
2015-01-01
Full Text Available In the lifespan of a gas turbine engine, abrupt faults and performance degradation of its gas-path components may happen; however, performance degradation is not easily foreseeable when the level of degradation is small. The gas path analysis (GPA) method has been widely applied to monitor gas turbine engine health status, as it can easily obtain the magnitudes of the detected component faults. However, when the number of components within the engine is large and/or the measurement noise level is high, the smearing effect may be strong and the degraded components may not be recognized. In order to improve the diagnostic effect, a nonlinear steady-state model based gas turbine health status estimation approach with an improved particle swarm optimization algorithm (PSO-GPA) has been proposed in this study. The proposed approach has been tested in ten test cases in which the degradation of a model three-shaft marine engine has been analyzed. These case studies have shown that the approach can accurately search for and isolate the degraded components and further quantify the degradation of major gas-path components. Compared with the typical GPA method, the approach has shown better measurement noise immunity and diagnostic accuracy.
Lipinski, Piotr
This paper concerns the quadratic three-dimensional assignment problem (Q3AP), an extension of the quadratic assignment problem (QAP), and proposes an efficient hybrid evolutionary algorithm combining stochastic optimization and local search with a number of crossover operators, a number of mutation operators and an auto-adaptation mechanism. Auto-adaptation manages the pool of evolutionary operators applying different operators in different computation phases to better explore the search space and to avoid premature convergence. Local search additionally optimizes populations of candidate solutions and accelerates evolutionary search. It uses a many-core graphics processor to optimize a number of solutions in parallel, which enables its incorporation into the evolutionary algorithm without excessive increases in the computation time. Experiments performed on benchmark Q3AP instances derived from the classic QAP instances proposed by Nugent et al. confirmed that the proposed algorithm is able to find optimal solutions to Q3AP in a reasonable time and outperforms best known results found in the literature.
Li, Zhangtao; Liu, Jing; Wu, Kai
2017-08-16
Most of the existing community detection algorithms are based on vertex connectivity. While in many real networks, each vertex usually has one or more attributes describing its properties which are often homogeneous in a cluster. Such networks can be modeled as attributed graphs, whose attributes sometimes are equally important to topological structure in graph clustering. One important challenge is to detect communities considering both topological structure and vertex properties simultaneously. To this propose, a multiobjective evolutionary algorithm based on structural and attribute similarities (MOEA-SA) is first proposed to solve the attributed graph clustering problems in this paper. In MOEA-SA, a new objective named as attribute similarity SA is proposed and another objective employed is the modularity Q. A hybrid representation is used and a neighborhood correction strategy is designed to repair the wrongly assigned genes through making balance between structural and attribute information. Moreover, an effective multi-individual-based mutation operator is designed to guide the evolution toward the good direction. The performance of MOEA-SA is validated on several real Facebook attributed graphs and several ego-networks with multiattribute. Two measurements, namely density T and entropy E, are used to evaluate the quality of communities obtained. Experimental results demonstrate the effectiveness of MOEA-SA and the systematic comparisons with existing methods show that MOEA-SA can get better values of T and E in each graph and find more relevant communities with practical meanings. Knee points corresponding to the best compromise solutions are calculated to guide decision makers to make convenient choices.
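Of the two objectives named above, modularity Q has a standard definition (Newman's); a minimal sketch on a dense adjacency matrix follows. The attribute-similarity objective SA is specific to the paper and is not reproduced here:

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as a dense
    adjacency matrix: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j).
    Higher Q means denser-than-expected intra-community connectivity."""
    m = sum(sum(row) for row in adj) / 2.0   # number of undirected edges
    deg = [sum(row) for row in adj]          # vertex degrees k_i
    q = 0.0
    n = len(adj)
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)
```

For two disconnected equal-sized cliques labeled as two communities, Q equals 1 - 1/2 = 0.5, a standard sanity check.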
Adaptive quantum control of two photon fluorescence on Coumarin 30 by using evolutionary algorithm
Poudel, Milan; Kolomenski, Alexender; Schuessler, Hans
2006-10-01
Two-photon excitation fluorescence of complex molecules (Coumarin 30) is successfully optimized using a feedback-controlled pulse-shaping technique. For this optimization we implemented an evolutionary algorithm [1], [2] in a LabVIEW programming environment with a liquid crystal pulse shaper in a folded 4f setup. In the algorithm, one generation uses 48 individuals (vectors of voltages on the LC matrix). For each generation, the fitness value is measured for every setting of the mask. A new generation is built from the previous one by combining parents (the fittest individuals) and producing the desired degree of mutations (changes of the vector elements by some random value) to provide reasonable convergence. By successive repetition of this scheme, individuals corresponding to the highest fitness values survive and produce offspring for subsequent generations. Typically, convergence to the optimum value was achieved after 30 generations. Without any prior knowledge of the molecular system, the optimization goal was automatically achieved by changing the spectral phases [3]. The pulses before and after optimization were measured with GRENOUILLE, a type of second-harmonic frequency-resolved optical gating (SH FROG). To find an efficient pulse with lower intensity, three types of optimization were performed: on the two-photon fluorescence signal, on the second-harmonic signal, and on the ratio between them. The intensity of two-photon fluorescence of Coumarin 30 could be increased noticeably compared to the transform-limited pulse obtained by optimizing the second-harmonic generation. The experimental results point to potential applications of coherent control to complicated molecular systems as well as to biomedical imaging.
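The loop described above (48 individuals per generation, fitness measured per mask setting, fittest parents recombined with mutations, convergence in roughly 30 generations) can be sketched as follows. Since no detector is available here, the measured two-photon fluorescence is replaced by a synthetic stand-in, and all settings except the population size of 48 are assumptions:

```python
import random

N_PIXELS = 16   # liquid-crystal mask pixels (far fewer than a real shaper)
POP_SIZE = 48   # generation size reported in the abstract
rng = random.Random(0)

def measured_fitness(voltages):
    """Stand-in for the measured fluorescence signal; a real run would read
    the detector. Here the synthetic optimum is a flat mask at 0.5."""
    return -sum((v - 0.5) ** 2 for v in voltages)

def next_generation(population):
    """Keep the fittest quarter as parents, fill up with mutated recombinations."""
    ranked = sorted(population, key=measured_fitness, reverse=True)
    parents = ranked[:POP_SIZE // 4]
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = rng.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]   # recombination
        i = rng.randrange(N_PIXELS)
        child[i] += rng.gauss(0, 0.05)                # random mutation
        children.append(child)
    return parents + children

population = [[rng.random() for _ in range(N_PIXELS)] for _ in range(POP_SIZE)]
for generation in range(30):   # convergence reported after ~30 generations
    population = next_generation(population)
best = max(population, key=measured_fitness)
```

Because the parents are carried over unchanged (elitism), the best fitness is monotonically non-decreasing across generations, mirroring how the experiment's fittest voltage vectors survive into the next generation.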
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine
2016-01-01
Background Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. Purpose To evaluate the influence of MBIR on the IQ of native, mIP, and MIP axial and coronal reformats of reduced-dose computed tomography (RD-CT) chest acquisitions. Material and Methods Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed by MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different reconstructions of RD-CT with the reference SD-CT. Results The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly distortions and stair-step artifacts. Conclusion The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique. PMID:27635253
Pappas, Eleftherios P.; Zoros, Emmanouil; Moutsatsos, Argyris; Peppa, Vasiliki; Zourari, Kyveli; Karaiskos, Pantelis; Papagiannis, Panagiotis
2017-05-01
There is an acknowledged need for the design and implementation of physical phantoms appropriate for the experimental validation of model-based dose calculation algorithms (MBDCA) introduced recently in 192Ir brachytherapy treatment planning systems (TPS), and this work investigates whether it can be met. A PMMA phantom was prepared to accommodate material inhomogeneities (air and Teflon), four plastic brachytherapy catheters, as well as 84 LiF TLD dosimeters (MTS-100M 1 × 1 × 1 mm3 microcubes), two radiochromic films (Gafchromic EBT3) and a plastic 3D dosimeter (PRESAGE). An irradiation plan consisting of 53 source dwell positions was prepared on phantom CT images using a commercially available TPS, taking into account the calibration dose range of each detector. Irradiation was performed using an 192Ir high dose rate (HDR) source. Dose to medium in medium, Dmm, was calculated using the MBDCA option of the same TPS as well as Monte Carlo (MC) simulation with the MCNP code and a benchmarked methodology. Measured and calculated dose distributions were spatially registered and compared. The total standard (k = 1) spatial uncertainties for TLD, film and PRESAGE were 0.71, 1.58 and 2.55 mm, respectively. The corresponding total percentage dosimetric uncertainties were 5.4–6.4%, 2.5–6.4% and 4.85%, owing mainly to the absorbed dose sensitivity correction and the relative energy dependence correction (position dependent) for TLD, the film sensitivity calibration (dose dependent) and the dependencies of PRESAGE sensitivity. Results imply a LiF over-response due to a relative intrinsic energy dependence between 192Ir and megavoltage calibration energies, and a dose rate dependence of PRESAGE sensitivity at low dose rates; further investigation of these effects is required for the full characterization of dosimeter response for 192Ir and the reduction of experimental uncertainties.
Model-based x-ray energy spectrum estimation algorithm from CT scanning data with spectrum filter
Li, Lei; Wang, Lin-Yuan; Yan, Bin
2016-10-01
With the development of technology, traditional X-ray CT cannot meet modern medical and industrial needs for component distinction and identification. This is due to inconsistency between the X-ray imaging system and the reconstruction algorithm. In current CT systems, the X-ray spectrum produced by the X-ray source is continuous over an energy range determined by the tube voltage and energy filter, and the attenuation coefficient of the object varies with X-ray energy. The distribution of the X-ray energy spectrum therefore plays an important role in beam-hardening correction, dual-energy CT image reconstruction and dose calculation. However, because the system equations of transmission measurement data are highly ill-conditioned and ill-posed, and because of statistical fluctuations of X-ray quanta and noise pollution, it is very hard to obtain a stable and accurate spectrum estimate using existing methods. In this paper, a model-based X-ray energy spectrum estimation method from CT scanning data with an energy spectrum filter is proposed. First, transmission measurement data were accurately acquired by CT scanning and measurement using phantoms with different energy spectrum filters. Second, a physically meaningful X-ray tube spectrum model was established with weighted Gaussian functions and a priori information such as the continuity of bremsstrahlung, the specificity of characteristic emission and an estimate of the average attenuation coefficient. The parameters in the model were optimized to obtain the best estimate of the filtered spectrum. Finally, the original energy spectrum was reconstructed from the filtered spectrum estimate using a priori information about the filter. Experimental results demonstrate that the stability and accuracy of X-ray energy spectrum estimation using the proposed method are improved significantly.
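The spectrum model described (weighted Gaussians for the bremsstrahlung continuum plus narrow characteristic-emission peaks) can be sketched as follows. The energy grid, weights, Gaussian centers and line positions are illustrative assumptions, not fitted values from the paper:

```python
import math

def gaussian(e, mu, sigma):
    return math.exp(-0.5 * ((e - mu) / sigma) ** 2)

def model_spectrum(e, weights, centers, sigma=8.0,
                   char_lines=((59.3, 0.15), (67.2, 0.05))):
    # Continuous bremsstrahlung part: a weighted sum of broad Gaussians.
    s = sum(w * gaussian(e, mu, sigma) for w, mu in zip(weights, centers))
    # Characteristic emission: narrow peaks at assumed tungsten K-line energies.
    s += sum(a * gaussian(e, mu, 1.0) for mu, a in char_lines)
    return s

energies = list(range(20, 121))     # keV grid, assuming a 120 kVp tube
weights = [1.0, 1.6, 1.2, 0.5]      # the free parameters to be optimized
centers = [35.0, 55.0, 75.0, 95.0]  # fixed Gaussian centers (assumed)

spec = [model_spectrum(e, weights, centers) for e in energies]
total = sum(spec)
spec = [s / total for s in spec]    # normalize to unit area
```

In the paper's workflow, the weights would be optimized so that the spectrum, combined with filter and attenuation models, best reproduces the measured transmission data; here they are simply fixed to show the parameterization.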
Generalizing and learning protein-DNA binding sequence representations by an evolutionary algorithm
Wong, Ka Chun
2011-02-05
Protein-DNA bindings are essential activities. Understanding them forms the basis for further deciphering of biological and genetic systems. In particular, the protein-DNA bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) play a central role in gene transcription. Comprehensive TF-TFBS binding sequence pairs have been found in a recent study. However, they are one-to-one mappings, which cannot fully reflect the many-to-many mappings within the bindings. An evolutionary algorithm is proposed to learn generalized representations (many-to-many mappings) from the TF-TFBS binding sequence pairs (one-to-one mappings). The generalized pairs are shown to be more meaningful than the original TF-TFBS binding sequence pairs. Some representative examples have been analyzed in this study. In particular, it is shown that the TF-TFBS binding sequence pairs are not necessarily one-to-one mappings; they can also exhibit many-to-many mappings. The proposed method can help us extract such many-to-many information from the one-to-one TF-TFBS binding sequence pairs found in the previous study, providing further knowledge in understanding the bindings between TFs and TFBSs. © 2011 Springer-Verlag.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a notable problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. Current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementary requirements between the head nurse's liability and the nurses' needs. In particular, given highly varied nurse preferences, the sophisticated challenge of nurse scheduling lies in the failure to encourage tolerant behaviour between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying diverse nurse requests and upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in a nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation as well as flexibility in the search, corresponding to the employment of exploration and exploitation principles.
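A penalty-based fitness of the kind used to rank schedules in such an EA can be sketched as follows. The roster encoding, the specific constraints (ward coverage as hard, no night-to-morning transition as semi-hard, a shift-count preference as soft) and the penalty weights are illustrative assumptions, not the authors' exact formulation:

```python
# roster: one string per nurse, one character per day:
# 'M' morning, 'E' evening, 'N' night, '-' day off
def violations(roster, min_cover=2):
    hard = semi = soft = 0
    n_days = len(roster[0])
    for d in range(n_days):
        day = [r[d] for r in roster]
        for shift in "MEN":
            if day.count(shift) < min_cover:        # hard: ward coverage
                hard += min_cover - day.count(shift)
    for r in roster:
        for d in range(n_days - 1):
            if r[d] == "N" and r[d + 1] == "M":     # semi-hard: no night -> morning
                semi += 1
        work = sum(1 for s in r if s != "-")
        if work > 5:                                # soft: at most 5 shifts preferred
            soft += work - 5
    return hard, semi, soft

def fitness(roster):
    h, s, p = violations(roster)
    # weighted penalties: hard constraints dominate semi-hard, then soft
    return -(1000 * h + 10 * s + p)

good = ["MMM", "MMM", "EEE", "EEE", "NNN", "NNN"]   # full coverage, no violations
bad = ["MMM", "EEE", "NNN", "---", "---", "NNM"]    # under-covered, one N->M transition
```

The EA's selection pressure then drives rosters toward zero hard-constraint penalties first, with semi-hard and soft penalties resolved afterwards.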
Smith, R.; Kasprzyk, J. R.; Zagona, E. A.
2013-12-01
Population growth and climate change, combined with difficulties in building new infrastructure, motivate portfolio-based solutions to ensuring sufficient water supply. Powerful simulation models with graphical user interfaces (GUI) are often used to evaluate infrastructure portfolios; these GUI based models require manual modification of the system parameters, such as reservoir operation rules, water transfer schemes, or system capacities. Multiobjective evolutionary algorithm (MOEA) based optimization can be employed to balance multiple objectives and automatically suggest designs for infrastructure systems, but MOEA based decision support typically uses a fixed problem formulation (i.e., a single set of objectives, decisions, and constraints). This presentation suggests a dynamic framework for linking GUI-based infrastructure models with MOEA search. The framework begins with an initial formulation which is solved using a MOEA. Then, stakeholders can interact with candidate solutions, viewing their properties in the GUI model. This is followed by changes in the formulation which represent users' evolving understanding of exigent system properties. Our case study is built using RiverWare, an object-oriented, data-centered model that facilitates the representation of a diverse array of water resources systems. Results suggest that assumptions within the initial MOEA search are violated after investigating tradeoffs and reveal how formulations should be modified to better capture stakeholders' preferences.
Optimization of thin noise barrier designs using Evolutionary Algorithms and a Dual BEM Formulation
Toledo, R.; Aznárez, J. J.; Maeso, O.; Greiner, D.
2015-01-01
This work aims at assessing the acoustic efficiency of different thin noise barrier models. These designs frequently feature complex profiles, and their implementation in shape optimization processes may not always be easy in terms of determining their topological feasibility. A methodology to conduct both overall-shape and top-edge optimizations of thin cross-section acoustic barriers, by idealizing them as profiles with null boundary thickness, is proposed. This procedure is based on the maximization of the insertion loss of candidate profiles proposed by an evolutionary algorithm. The special nature of these sorts of barriers makes necessary the implementation of a formulation complementary to the classical Boundary Element Method (BEM). Numerical simulations of the barriers' performance are conducted using a 2D Dual BEM code in eight different barrier configurations (covering overall-shape and top-edge configurations; spline-curved and polynomial-shape-based designs; rigid and noise-absorbing boundary materials). While results are obtained for a specific receivers' scheme, the influence of the receivers' location on the acoustic performance is addressed first. With the purpose of testing the methodology presented here, a numerical model validation on the basis of experimental results from a scale model test [34] is conducted. The results obtained show the usefulness of representing complex thin barrier configurations as null-boundary-thickness models.
Predicting peptides binding to MHC class II molecules using multi-objective evolutionary algorithms
Directory of Open Access Journals (Sweden)
Feng Lin
2007-11-01
Full Text Available Abstract Background Peptides binding to Major Histocompatibility Complex (MHC) class II molecules are crucial for initiation and regulation of immune responses. Predicting peptides that bind to a specific MHC molecule plays an important role in determining potential candidates for vaccines. The binding groove in class II MHC is open at both ends, allowing peptides longer than 9-mer to bind. Finding the consensus motif facilitating the binding of peptides to a MHC class II molecule is difficult because of different lengths of binding peptides and varying location of the 9-mer binding core. The level of difficulty increases when the molecule is promiscuous and binds to a large number of low affinity peptides. In this paper, we propose two approaches using multi-objective evolutionary algorithms (MOEA) for predicting peptides binding to MHC class II molecules. One uses the information from both binders and non-binders for self-discovery of motifs. The other, in addition, uses information from experimentally determined motifs for guided discovery of motifs. Results The proposed methods are intended for finding peptides binding to the MHC class II I-Ag7 molecule – a promiscuous binder to a large number of low affinity peptides. Cross-validation results across experiments on two motifs derived for I-Ag7 datasets demonstrate better generalization abilities and accuracies of the present method over earlier approaches. Further, the proposed method was validated and compared on two publicly available benchmark datasets: (1) an ensemble of qualitative HLA-DRB1*0401 peptide data obtained from five different sources, and (2) quantitative peptide data obtained for sixteen different alleles comprising three mouse alleles and thirteen HLA alleles. The proposed method outperformed earlier methods on most datasets, indicating that it is well suited for finding peptides binding to MHC class II molecules. Conclusion We present two MOEA-based algorithms for finding motifs
Yasaka, Koichiro; Katsura, Masaki; Hanaoka, Shouhei; Sato, Jiro; Ohtomo, Kuni
2016-03-01
To compare the image quality of high-resolution computed tomography (HRCT) for evaluating lung nodules reconstructed with the new version of the model-based iterative reconstruction and spatial resolution preference algorithm (MBIRn) vs. conventional model-based iterative reconstruction (MBIRc) and adaptive statistical iterative reconstruction (ASIR). This retrospective clinical study was approved by our institutional review board and included 70 lung nodules in 58 patients (mean age, 71.2±10.9 years; 34 men and 24 women). HRCT images of lung nodules were reconstructed using MBIRn, MBIRc and ASIR. Objective image noise was measured by placing regions of interest on lung parenchyma. Two blinded radiologists performed subjective image analyses. Significant improvements in several respects were observed with MBIRn compared with ASIR. HRCT reconstructed with MBIRn provides diagnostically more acceptable images for the detailed analysis of lung nodules compared with MBIRc and ASIR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Rasmussen, Thomas Kiel; Krink, Thiemo
2003-11-01
Multiple sequence alignment (MSA) is one of the basic problems in computational biology. Realistic problem instances of MSA are computationally intractable for exact algorithms. One way to tackle MSA is to use Hidden Markov Models (HMMs), which are known to be very powerful in the related problem domain of speech recognition. However, the training of HMMs is computationally hard and there is no known exact method that can guarantee optimal training within reasonable computing time. Perhaps the most powerful training method is the Baum-Welch algorithm, which is fast, but bears the problem of stagnation at local optima. In the study reported in this paper, we used a hybrid algorithm combining particle swarm optimization with evolutionary algorithms to train HMMs for the alignment of protein sequences. Our experiments show that our approach yields better alignments for a set of benchmark protein sequences than the most commonly applied HMM training methods, such as Baum-Welch and Simulated Annealing.
Saborido, Rubén; Ruiz, Ana B; Luque, Mariano
2017-01-01
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both the utopian and the nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II in two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA achieves better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
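The Tchebychev-based ASF with the two reference points used simultaneously can be sketched as follows; the weight vectors, reference points and sample solutions are illustrative, not taken from the paper:

```python
def asf(f, ref, inv_w):
    # Achievement scalarizing function based on the Tchebychev distance,
    # using the inverses of the weight-vector components.
    return max(m * (fi - ri) for fi, ri, m in zip(f, ref, inv_w))

utopian = (0.0, 0.0)    # assumed ideal point (minimization)
nadir = (1.0, 1.0)      # assumed worst point
weights = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
inv = [(1.0 / w1, 1.0 / w2) for w1, w2 in weights]

# illustrative nondominated candidate solutions
solutions = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]

# First front: for each weight vector, the solution with the lowest ASF,
# using the utopian and the nadir vectors as reference points simultaneously.
first_front = set()
for m in inv:
    for ref in (utopian, nadir):
        first_front.add(min(solutions, key=lambda f: asf(f, ref, m)))
```

With this small weight set, the utopian reference tends to pick solutions aligned with each weight direction while the nadir reference favours balanced solutions, so together they spread the first front across the Pareto region.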
Li, Ruowang; Dudek, Scott M; Kim, Dokyoon; Hall, Molly A; Bradford, Yuki; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; McCarty, Catherine A; Bao, Le; Ritchie, Marylyn D
2016-01-01
The future of medicine is moving towards the phase of precision medicine, with the goal to prevent and treat diseases by taking inter-individual variability into account. A large part of the variability lies in our genetic makeup. With the fast-paced improvement of high-throughput methods for genome sequencing, a tremendous amount of genetic data has already been generated. The next hurdle for precision medicine is to have sufficient computational tools for analyzing large sets of data. Genome-Wide Association Studies (GWAS) have been the primary method to assess the relationship between single nucleotide polymorphisms (SNPs) and disease traits. While GWAS is sufficient for finding individual SNPs with strong main effects, it does not capture potential interactions among multiple SNPs. In many traits, a large proportion of variation remains unexplained by main effects alone, leaving the door open for exploring the role of genetic interactions. However, identifying genetic interactions in large-scale genomics data poses a challenge even for modern computing. For this study, we present a new algorithm, Grammatical Evolution Bayesian Network (GEBN), that utilizes Bayesian networks to identify interactions in the data and, at the same time, uses an evolutionary algorithm to reduce the computational cost associated with network optimization. GEBN excelled in simulation studies where the data contained main effects and interaction effects. We also applied GEBN to a Type 2 diabetes (T2D) dataset obtained from the Marshfield Personalized Medicine Research Project (PMRP). We were able to identify genetic interactions for T2D cases and controls and use information from those interactions to classify T2D samples. We obtained an average testing area under the curve (AUC) of 86.8%. We also identified several interacting genes, such as INADL and LPP, that are known to be associated with T2D. Developing the computational tools to explore genetic associations beyond main
Directory of Open Access Journals (Sweden)
Tim Holmes
2013-12-01
Full Text Available Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of colour and shape which have been promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3 colour palette from the original experiment (A), an extended 7 colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preference consistent with Kandinsky’s claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of the universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics
Holmes, Tim; Zanker, Johannes M
2013-01-01
Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of color and shape which have been promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three color palette from the original experiment (A), an extended seven color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of the universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the
Nishiyama, Yukako; Tada, Keiji; Nishiyama, Yuichi; Mori, Hiroshi; Maruyama, Mitsunari; Katsube, Takashi; Yamamoto, Nobuko; Kanayama, Hidekazu; Yamamoto, Yasushi; Kitagaki, Hajime
2016-11-01
Some iterative reconstruction algorithms are useful for reducing the radiation dose in pediatric cardiac CT. A new iterative reconstruction algorithm (forward-projected model-based iterative reconstruction solution) has been developed, but its usefulness for radiation dose reduction in pediatric cardiac CT is unknown. To investigate the effect of the new algorithm on CT image quality and on radiation dose in pediatric cardiac CT. We obtained phantom data at six dose levels, as well as pediatric cardiac CT data, and reconstructed CT images using filtered back projection, adaptive iterative dose reduction 3-D (AIDR 3-D) and the new algorithm. We evaluated phantom image quality using physical assessment. Four radiologists performed visual evaluation of cardiac CT image quality. In the phantom study, the new algorithm effectively suppressed noise in the low-dose range and moderately preserved the modulation transfer function, yielding a higher signal-to-noise ratio compared with filtered back projection or AIDR 3-D. When clinical cardiac CT was performed, images obtained by the new method had less perceived image noise and better tissue contrast at similar resolution compared with AIDR 3-D images. The new algorithm reduced image noise at moderate resolution in low-dose CT scans and improved the perceived quality of cardiac CT images to some extent. This new algorithm might be superior to AIDR 3-D for radiation dose reduction in pediatric cardiac CT.
Energy Technology Data Exchange (ETDEWEB)
Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)
2014-04-15
To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, as well as average wall thickness. The accuracy of airway measurements with each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, decreasing in the order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)
Ma, Zhanshan (Sam)
Competition, cooperation and communication are the three fundamental relationships upon which natural selection acts in the evolution of life. Evolutionary game theory (EGT) is a 'marriage' between game theory and Darwin's theory of evolution; it gains additional modeling power and flexibility by adopting population dynamics theory. In EGT, natural selection acts as an optimization agent and produces inherent strategies, which eliminates some essential assumptions in traditional game theory, such as rationality, and allows more realistic modeling of many problems. The Prisoner's Dilemma (PD) and Sir Philip Sidney (SPS) games are two well-known examples in EGT, formulated to study cooperation and communication, respectively. Despite its huge success, EGT exposes a certain degree of weakness in dealing with time-, space- and covariate-dependent (i.e., dynamic) uncertainty, vulnerability and deception. In this paper, I propose to extend EGT in two ways to overcome the weakness. First, I introduce survival analysis modeling to describe the lifetime or fitness of game players. This extension allows more flexible and powerful modeling of dynamic uncertainty and vulnerability (collectively equivalent to the dynamic frailty in survival analysis). Secondly, I introduce agreement algorithms, which can be the agreement algorithms of distributed computing (e.g., the Byzantine Generals Problem [6][8], Dynamic Hybrid Fault Models [12]) or any algorithms that set and enforce the rules by which players determine their consensus. The second extension is particularly useful for modeling dynamic deception (e.g., asymmetric faults in fault tolerance and deception in animal communication). From a computational perspective, the extended evolutionary game theory (EEGT) modeling, when implemented in simulation, is equivalent to an optimization methodology similar to evolutionary computing approaches such as genetic algorithms with dynamic populations [15][17].
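As a minimal illustration of EGT's population-dynamics component, the replicator equation applied to the Prisoner's Dilemma shows cooperation being driven out of the population; the payoff values are the standard textbook choice, not taken from this paper:

```python
# Standard Prisoner's Dilemma payoffs, satisfying T > R > P > S
R, S, T, P = 3, 0, 5, 1
payoff = [[R, S],   # row player cooperates
          [T, P]]   # row player defects

def step(x, dt=0.01):
    # One Euler step of the replicator dynamics for the fraction x of cooperators.
    fc = x * payoff[0][0] + (1 - x) * payoff[0][1]  # cooperator fitness
    fd = x * payoff[1][0] + (1 - x) * payoff[1][1]  # defector fitness
    avg = x * fc + (1 - x) * fd                     # population mean fitness
    return x + dt * x * (fc - avg)

x = 0.9   # start with 90% cooperators
for _ in range(2000):
    x = step(x)
# defection is the evolutionarily stable strategy: x decays toward 0
```

No rationality assumption is needed: selection on fitness differences alone drives the population to the all-defect equilibrium, which is the point the abstract makes about natural selection replacing rationality assumptions.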
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, image processing, and video analysis. The proposed tool, denoted the PSO-Snake model, has already been successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
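The PSO component can be sketched in its canonical global-best form; the sphere objective below is only a stand-in for the snake energy being minimized, and the swarm parameters are conventional defaults rather than the authors' tuned values:

```python
import random

random.seed(1)

def sphere(p):
    # Stand-in objective; in the PSO-Snake model this would be the snake energy.
    return sum(x * x for x in p)

DIM, SWARM, ITERS = 4, 20, 100
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pbest, key=sphere)[:]           # swarm's best position

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration constants
for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pos[i]) < sphere(gbest):
                gbest = pos[i][:]
```

Each particle is pulled stochastically toward its own best position and the swarm's best, which is what gives PSO the flexibility the abstract highlights for tracking deformable features.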
QIU, X.; HANSEN, C. H.
2001-03-01
Previous work has demonstrated the potential for the active control of transformer noise using a combination of acoustic and vibration actuators and the filtered-x LMS algorithm (FXLMS), the latter being implemented to make the system adaptive. For a large electrical transformer, the number of actuators and error sensors needed to achieve a significant global noise reduction can be up to hundreds, and this makes the convergence of the FXLMS algorithm very slow. The memory requirement for the cancellation path transfer functions (CPTF) and the computational load required to pre-filter the reference signal by all the CPTFs are relatively large. Moreover, not only the transformer noise but also the CPTFs vary considerably from day to day, which makes on-line CPTF modelling necessary. A new adaptive algorithm based on waveform synthesis is proposed, and the perturbation method is used to obtain the CPTFs on-line. A comparison of the performance of the proposed algorithm with the FXLMS algorithm and the H-TAG algorithm shows the feasibility of the algorithm for the control of a slowly time-varying system with just a few fixed frequency components.
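A toy single-channel FXLMS loop illustrates the structure that becomes costly when scaled up to hundreds of channels; the cancellation path is assumed known here (in practice it is identified on-line, e.g. by the perturbation method), and all signals are synthetic:

```python
import math

s = [0.5, 0.3]   # assumed cancellation-path impulse response (CPTF)
L = 8            # control-filter length
w = [0.0] * L    # adaptive control filter
mu = 0.01        # step size

def fir(h, x, n):
    # Output at sample n of FIR filter h driven by sequence x.
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

N = 2000
x = [math.sin(2 * math.pi * 0.05 * n) for n in range(N)]   # tonal reference
d = [fir([0.9, -0.4, 0.2], x, n) for n in range(N)]        # noise at error sensor
xf = [fir(s, x, n) for n in range(N)]                      # filtered reference

y = [0.0] * N
e = [0.0] * N
for n in range(N):
    y[n] = fir(w, x, n)             # anti-noise before the cancellation path
    e[n] = d[n] - fir(s, y, n)      # residual at the error sensor
    for k in range(L):              # filtered-x LMS update
        if n - k >= 0:
            w[k] += mu * e[n] * xf[n - k]
```

Pre-filtering the reference through the CPTF (the `xf` line) is exactly the step whose memory and computation cost grows with the number of actuator-sensor pairs, motivating the waveform-synthesis alternative proposed in the paper.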
Frasch, Jonathan Lemoine
Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970s. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such circumstances, it is common to resort to a method which uses two reflection-type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EAs) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community. The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. If the two reflection measurements are truly independent it should be possible to recover the values of two complex
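As a sketch of the NRW baseline the dissertation builds on, the following implements the classic extraction for the simplest case: a slab in free space at normal incidence. Waveguide measurements additionally involve the cutoff wavelength, which is omitted here, and the principal log branch restricts this to electrically thin samples; material values and geometry are assumed for the round-trip check:

```python
import numpy as np

C0 = 3e8  # speed of light, m/s

def slab_s_params(eps_r, mu_r, freq, L):
    """Forward model: S11/S21 of a slab of thickness L in free space."""
    k0 = 2 * np.pi * freq / C0
    n = np.sqrt(mu_r * eps_r)            # complex refractive index
    eta = np.sqrt(mu_r / eps_r)          # normalized wave impedance
    G = (eta - 1) / (eta + 1)            # interface reflection coefficient
    T = np.exp(-1j * k0 * n * L)         # one-way propagation factor
    den = 1 - G ** 2 * T ** 2
    return G * (1 - T ** 2) / den, T * (1 - G ** 2) / den

def nrw_extract(S11, S21, freq, L):
    """Classic NRW inversion (principal log branch: electrically thin sample)."""
    k0 = 2 * np.pi * freq / C0
    V1, V2 = S21 + S11, S21 - S11
    X = (1 - V1 * V2) / (V1 - V2)
    G = X + np.sqrt(X ** 2 - 1)
    if abs(G) > 1:                       # choose the physical root, |Gamma| <= 1
        G = X - np.sqrt(X ** 2 - 1)
    T = (V1 - G) / (1 - V1 * G)
    n = np.log(1 / T) / (1j * k0 * L)    # refractive index from T
    eta = (1 + G) / (1 - G)              # impedance from Gamma
    return n / eta, n * eta              # (eps_r, mu_r)

# Round trip with assumed material values at 10 GHz, 2 mm slab:
s11, s21 = slab_s_params(4 - 0.2j, 1.0 + 0j, freq=10e9, L=2e-3)
eps, mu = nrw_extract(s11, s21, freq=10e9, L=2e-3)
```

The conductor-backed case treated in the dissertation is precisely where the transmission term `S21` is unavailable, so this inversion no longer applies as-is.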
National Research Council Canada - National Science Library
Pan, Xinyi; Li, Cheng; Li, Kuncheng; Ying, Kui; Weng, Dehe; Qin, Wen
2010-01-01
.... The noniterative extended Prony algorithm is used for the signal fitting and frequency estimate. Monte Carlo simulations demonstrate the advantages of the method for optimal water-fat separation and temperature estimation accuracy...
Kun Zhang; Zhao Hu; Xiao-Ting Gan; Jian-Bo Fang
2016-01-01
Due to the fact that the fluctuation of network traffic is affected by various factors, accurate prediction of network traffic is regarded as a challenging task of the time series prediction process. For this purpose, a novel prediction method of network traffic based on QPSO algorithm and fuzzy wavelet neural network is proposed in this paper. Firstly, quantum-behaved particle swarm optimization (QPSO) was introduced. Then, the structure and operation algorithms of WFNN are presented. The pa...
Directory of Open Access Journals (Sweden)
M. K. Sakharov
2015-01-01
Full Text Available This paper deals with the development and software implementation of the hybrid multi-memetic algorithm for distributed computing systems. The main algorithm is based on the modification of MEC algorithm proposed by the authors. The multi-memetic algorithm utilizes three various local optimization methods. Software implementation was developed using MPI for Python and tested on a grid network made of twenty desktop computers. Performance of the proposed algorithm and its software implementation was investigated using multi-dimensional multi-modal benchmark functions from CEC’14.
Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu
2016-01-08
Small fields, smaller than 4 × 4 cm2, are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation in both small fields and heterogeneous media is prone to larger discrepancies, algorithms used by treatment planning systems (TPS) should be evaluated for achieving better treatment results. This report evaluates the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, tested against measurement. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons of square field sizes ranging from 1 to 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup nearer to and another farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD), which represents the whole CADD curve's deviation from the measured curve, is calculated. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually decreases as field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller for 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field shows maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when a heterogeneity lies nearer to Dmax than when the same heterogeneity lies farther from it. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials.
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study focuses on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines a system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary optimization technique, oriented to the dynamical system identification problem, was applied. To improve its performance, an algorithm restart operator was implemented.
Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong
2017-10-01
The power user electric energy data acquisition system (PUEEDAS) is an important part of the smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEDAS that combines comprehensive benefits with cost. The Chebyshev decomposition approach is used to decompose the multi-objective optimization problem, and a MOEA/D evolutionary algorithm is designed to solve it. The Pareto-optimal solution set of the multi-objective optimization problem is analyzed and compared with monitored values to determine how to optimize the performance of the PUEEDAS. Finally, an example is designed for specific analysis.
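The Chebyshev (Tchebycheff) decomposition used by MOEA/D scalarizes the multi-objective problem into single-objective subproblems, one per weight vector. A minimal sketch, with hypothetical objective values standing in for benefit and cost:

```python
def tchebycheff(f, lam, z):
    """MOEA/D scalarization: g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|."""
    return max(l * abs(fi - zi) for fi, l, zi in zip(f, lam, z))

# Two hypothetical candidate solutions scored on (cost, negated benefit),
# with the ideal point z* at the origin:
z_star = (0.0, 0.0)
candidates = {"A": (2.0, 1.0), "B": (1.0, 3.0)}

def best_for(lam):
    """Winner of the subproblem defined by weight vector lam."""
    return min(candidates, key=lambda k: tchebycheff(candidates[k], lam, z_star))
```

Sweeping the weight vector `lam` across the simplex yields different winners, which is how MOEA/D spreads its population along the Pareto front.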
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have already been proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models that consider only peak matches between experimental and theoretical spectra, not peak intensity information. Moreover, different algorithms give different results for the same MS data, implying probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function that makes full use of peak intensity information, thus enhancing identification ability. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets at a 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
Exploratory Analysis of an On-line Evolutionary Algorithm in Simulated Robots
Haasdijk, E.; Smit, S.K.; Eiben, A.E.
2012-01-01
In traditional evolutionary robotics, robot controllers are evolved in a separate design phase preceding actual deployment; we call this off-line evolution. Alternatively, robot controllers can evolve while the robots perform their proper tasks, during the actual operational phase; we call this
Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.
2011-12-01
Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass
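The Pareto-optimal trade-off set mentioned above is the set of non-dominated outcomes. A minimal sketch of non-dominated filtering over hypothetical (cost, leaked-mass) design outcomes; maximizing injected mass can be handled by negating that objective:

```python
def pareto_front(points):
    """Return the non-dominated subset, assuming every objective is minimized."""
    def dominated(p):
        # p is dominated if some other point is no worse in every objective
        return any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# Hypothetical (total cost, CO2 mass leaked) outcomes for four injection designs:
designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 1.0)]
front = pareto_front(designs)
```

Here `(3.0, 5.0)` is dominated by `(2.0, 4.0)` (cheaper and less leakage) and drops out; the remaining three designs form the trade-off curve a decision-maker would choose from.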
Marghany, M.
2015-06-01
Oil spill pollution plays a substantial role in damaging the marine ecosystem. Oil that floats on top of the water decreases fauna populations and affects the food chain in the ecosystem. In fact, an oil spill reduces the sunlight penetrating the water, limiting the photosynthesis of marine plants and phytoplankton. Moreover, marine mammals exposed to oil spills have their insulating capacities reduced, making them more vulnerable to temperature variations and much less buoyant in the seawater. This study demonstrates a design tool for oil spill detection in SAR satellite data using optimization of an entropy-based Multi-Objective Evolutionary Algorithm (E-MMGA), which is based on Pareto-optimal solutions. The study also shows that the optimized entropy-based Multi-Objective Evolutionary Algorithm provides an accurate pattern of the oil slick in SAR data, indicated by 85% for oil spill, 10% for look-alikes, and 5% for sea roughness using the receiver operating characteristic (ROC) curve. The E-MMGA also shows excellent performance in SAR data. In conclusion, E-MMGA can be used as an entropy optimization to perform automatic detection of oil spills in SAR satellite data.
Directory of Open Access Journals (Sweden)
Kun Zhang
2016-01-01
Full Text Available Because the fluctuation of network traffic is affected by various factors, accurate prediction of network traffic is regarded as a challenging task in time series prediction. For this purpose, a novel prediction method for network traffic based on the QPSO algorithm and a fuzzy wavelet neural network (FWNN) is proposed in this paper. First, quantum-behaved particle swarm optimization (QPSO) is introduced. Then, the structure and operation algorithms of the FWNN are presented. The parameters of the fuzzy wavelet neural network are optimized by the QPSO algorithm. Finally, QPSO-FWNN is applied successfully to network traffic prediction in simulation, and its performance is evaluated against different prediction models such as the BP neural network, RBF neural network, fuzzy neural network, and FWNN-GA neural network. Simulation results show that QPSO-FWNN has better precision and stability in calculation. At the same time, QPSO-FWNN also has better generalization ability, and it has broad prospects for application.
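The abstract does not list the QPSO equations; the commonly used QPSO update, which replaces PSO velocities with sampling around an attractor between each personal best and the global best, can be sketched as follows (all parameter values are illustrative):

```python
import math
import random

def qpso_minimize(f, dim, bounds, n=20, iters=200, beta=0.75, seed=1):
    """Quantum-behaved PSO: no velocities; each coordinate is sampled around
    a local attractor between the personal best and the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                         # personal bests
    Pv = [f(x) for x in X]
    gi = min(range(n), key=lambda i: Pv[i])
    G, Gv = P[gi][:], Pv[gi]                      # global best
    for _ in range(iters):
        # mean of all personal bests ("mbest") sets the search scale
        mbest = [sum(P[i][d] for i in range(n)) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi = rng.random()
                u = 1.0 - rng.random()            # u in (0, 1]
                p = phi * P[i][d] + (1.0 - phi) * G[d]   # local attractor
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = p + step if rng.random() < 0.5 else p - step
            v = f(X[i])
            if v < Pv[i]:
                P[i], Pv[i] = X[i][:], v
                if v < Gv:
                    G, Gv = X[i][:], v
    return G, Gv

best, val = qpso_minimize(lambda x: sum(c * c for c in x),
                          dim=2, bounds=(-10, 10))
```

In the paper this optimizer searches over FWNN parameters rather than a closed-form test function.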
Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam
2018-04-01
Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. The paper is divided into two main parts. In the first part, a wide range of different types of the base and combinatorial classifiers including boosting and bagging classifiers and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. In this study two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to CVA evaluation metric, and naive Bayes with respect to SDV demonstrated the best performances. Among the combinatorial classifiers, four classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV had the best performances. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods through the evolutionary algorithms. In addition
Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P
2018-02-01
The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
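The disproportionality predictor mentioned above is typically a measure such as the reporting odds ratio (ROR) computed from a 2×2 table of reports; the specific measure used in the study is not stated in the abstract. A sketch with hypothetical counts:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with a 95% CI from the 2x2 report table:
       a = this drug & this ADR,  b = this drug, other ADRs,
       c = other drugs, this ADR, d = other drugs, other ADRs."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(ROR)
    return ror, ror * math.exp(-1.96 * se), ror * math.exp(1.96 * se)

# Hypothetical counts for one drug-ADR association:
ror, lo, hi = reporting_odds_ratio(20, 80, 100, 9800)
```

A disproportionality signal is conventionally flagged when the lower confidence bound exceeds 1; the study's contribution is to combine such a measure with other predictors in a logistic model rather than use it alone.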
Directory of Open Access Journals (Sweden)
Bohui Zhu
2013-01-01
Full Text Available This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance evaluation indicators, sensitivity, specificity, and accuracy, are used to assess the effect of the IEMMC method for ECG arrhythmias. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm shows better performance not only in clustering results but also in global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias.
Directory of Open Access Journals (Sweden)
A. Chatterjee
2010-12-01
Full Text Available Reduction of the sidelobe level in concentric ring arrays results in a wide first null beamwidth (FNBW). The authors propose a pattern synthesis method based on a modified Particle Swarm Optimization (PSO) algorithm and the Differential Evolution (DE) algorithm to reduce the sidelobe level while keeping the first null beamwidth (FNBW) fixed or variable. This is achieved by optimizing both the ring spacing and the number of elements in each ring of a concentric circular ring array of uniformly excited isotropic antennas. The first null beamwidth is constrained to be equal to or less than that of a uniformly excited, 0.5λ-spaced concentric circular ring array with the same number of elements and the same number of rings. The comparative performance of the modified Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms on this particular problem in terms of FNBW, sidelobe level, and computational time is also studied.
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.
Directory of Open Access Journals (Sweden)
Robert P Bywater
Full Text Available Proteins have many functions, and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly, thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account so that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such, but rather collude in a striking and useful way that has never been considered before.
Directory of Open Access Journals (Sweden)
Juliano Rodrigues Brianeze
2009-12-01
Full Text Available This work presents three of the main evolutionary algorithms, the Genetic Algorithm, Evolution Strategy, and Evolutionary Programming, applied to microstrip antenna design. Efficiency tests were performed, considering the analysis of key physical and geometrical parameters, evolution type, the effects of numerical random generators, evolution operators, and selection criteria. These algorithms were validated through the design of microstrip antennas based on the Resonant Cavity Method, and they allow multiobjective optimization considering bandwidth, standing wave ratio, and relative material permittivity. The optimal results obtained with these optimization processes were confirmed by the CST Microwave Studio commercial package.
A maximal clique based multiobjective evolutionary algorithm for overlapping community detection
Wen, Xuyun; Chen, Wei-Neng; Lin, Ying; Gu, Tianlong; Zhang, Huaxiang; Li, Yun; Yin, Yilong; Zhang, Jun
2016-01-01
Detecting community structure has become one important technique for studying complex networks. Although many community detection algorithms have been proposed, most of them focus on separated communities, where each node can belong to only one community. However, in many real-world networks, communities are often overlapped with each other. Developing overlapping community detection algorithms thus becomes necessary. Along this avenue, this paper proposes a maximal clique based multiob...
Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi
2018-01-01
When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed both in the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacement at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can be quickly obtained from this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
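Marquardt's algorithm (Levenberg-Marquardt) interpolates between Gauss-Newton and gradient descent via a damping factor. The displacement-force relationship fitted in the paper is not specified in the abstract, so the sketch below fits a hypothetical saturating force-displacement law to synthetic data:

```python
import numpy as np

def marquardt_fit(model, jac, x, y, theta0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps; the damping
    factor lam is halved after a successful step and doubled after a failure."""
    theta = np.asarray(theta0, dtype=float)
    r = y - model(x, theta)
    sse = r @ r
    for _ in range(iters):
        J = jac(x, theta)
        A = J.T @ J
        # Marquardt scaling: damp along the diagonal of J^T J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        trial = theta + step
        r_trial = y - model(x, trial)
        sse_trial = r_trial @ r_trial
        if sse_trial < sse:
            theta, r, sse, lam = trial, r_trial, sse_trial, lam * 0.5
        else:
            lam *= 2.0
    return theta

# Hypothetical saturating displacement-force law u = a * (1 - exp(-b * F)):
model = lambda F, th: th[0] * (1.0 - np.exp(-th[1] * F))
jac = lambda F, th: np.column_stack([1.0 - np.exp(-th[1] * F),
                                     th[0] * F * np.exp(-th[1] * F)])
F = np.linspace(0.0, 5.0, 40)
u = model(F, (2.0, 0.8))                 # synthetic, noise-free observations
theta = marquardt_fit(model, jac, F, u, theta0=(1.0, 1.0))
```

Once such a relationship is fitted off-line, evaluating it for a new force is nearly instantaneous, which is the speedup the paper relies on for real-time deformation.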
Directory of Open Access Journals (Sweden)
Xiaogang Qi
2015-01-01
Full Text Available A wireless sensor network (WSN) is a classical self-organizing communication network, and its topology evolution has become one of the attractive issues in this research field. Accordingly, the problem is divided into two subproblems: one is to design a new preferential attachment method, and the other is to analyze the dynamics of the network topology evolution. To solve the first subproblem, a revised PageRank algorithm, called Con-rank, is proposed to evaluate node importance based on the existing notion of node contraction, and then a novel preferential attachment is designed based on the node importance calculated by the proposed Con-rank algorithm. To solve the second one, we first analyze the network topology evolution dynamics theoretically and then simulate the evolution process. Theoretical analysis proves that the network topology evolution of our model agrees with a power-law distribution. Simulation results are consistent with the theoretical analysis and show that our topology evolution model is superior to the classic BA model in average path length and clustering coefficient, and that the resulting network topology is more robust and can better tolerate random attacks.
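The Con-rank modification itself is not detailed in the abstract; the baseline PageRank it revises can be sketched as a power iteration over an adjacency list (the damping factor 0.85 is the conventional choice, and the dangling-node handling shown is one common convention):

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank over an adjacency dict {node: [out-neighbours]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}   # teleportation mass
        for v in nodes:
            out = adj[v]
            if out:
                share = d * rank[v] / len(out)
                for w in out:                     # distribute along out-links
                    new[w] += share
            else:                                 # dangling node: spread evenly
                for w in nodes:
                    new[w] += d * rank[v] / n
        rank = new
    return rank

r = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

In the paper, such an importance score (after the node-contraction revision) drives preferential attachment: new sensor nodes attach to existing nodes with probability proportional to their importance.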
Directory of Open Access Journals (Sweden)
Ping Jiang
2015-01-01
Full Text Available The construction of an electrical power system can not only benefit the rational distribution and management of energy resources but also satisfy the increasing demand for electricity. Electrical power system construction is often a pivotal part of national and regional economic development plans. This paper constructs a hybrid model, known as the E-MFA-BP model, that can forecast indices in the electrical power system, including wind speed, electrical load, and electricity price. First, ensemble empirical mode decomposition is applied to eliminate the noise in the original time series data. After data preprocessing, the back propagation neural network model is applied to carry out the forecasting. Owing to the instability of its structure, the modified firefly algorithm is employed to optimize the weight and threshold values of back propagation to obtain a hybrid model with higher forecasting quality. Three experiments are carried out to verify the effectiveness of the model. Through comparison with other traditional well-known forecasting models, and with models optimized by other optimization algorithms, the experimental results demonstrate that the hybrid model has the best forecasting performance.
Amian, M.; Setarehdan, S. Kamaledin; Yousefi, H.
2014-09-01
Functional near-infrared spectroscopy (fNIRS) is a relatively new noninvasive way to measure oxyhemoglobin and deoxyhemoglobin concentration changes in the human brain. Safer and more affordable than other functional imaging techniques such as fMRI, it is widely used for special applications such as infant examinations and pilot brain monitoring. In such applications, fNIRS data sometimes suffer from undesirable movements of the subject's head, called motion artifacts, which corrupt the signal. Motion artifacts in fNIRS data may lead to erroneous conclusions or diagnoses. In this work we reduce these artifacts by a novel Kalman filtering algorithm based on an autoregressive moving average (ARMA) model of the fNIRS system. Our proposed method does not require any additional hardware or sensors, nor does it need the whole data record at once, requirements that were unavoidable in older algorithms such as adaptive filtering and Wiener filtering. Results show that our approach is successful in cleaning contaminated fNIRS data.
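The paper's ARMA-based Kalman filter is not specified in the abstract; as an illustration of model-based filtering that runs sample by sample without needing the whole record, here is a scalar Kalman filter with a simple AR(1) state model applied to a synthetic noisy signal (model coefficients and noise variances are illustrative assumptions):

```python
import numpy as np

def kalman_ar1(z, a=0.95, q=0.01, r=0.5):
    """Scalar Kalman filter for the state model x_k = a*x_{k-1} + w_k,
    z_k = x_k + v_k, with process/measurement noise variances q and r
    (a stand-in for the ARMA model used in the paper)."""
    x, p = 0.0, 1.0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        x, p = a * x, a * a * p + q        # predict
        gain = p / (p + r)                 # Kalman gain
        x = x + gain * (zk - x)            # update with the measurement
        p = (1.0 - gain) * p
        out[k] = x
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
clean = np.sin(t)                              # slow hemodynamic-like component
noisy = clean + rng.normal(0.0, 0.7, t.size)   # artifact-like noise
filtered = kalman_ar1(noisy)
```

Because each update uses only the previous state estimate and the current sample, the filter is suitable for on-line use, which is exactly the advantage over whole-record methods the abstract emphasizes.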
Directory of Open Access Journals (Sweden)
Jiani Heng
2016-01-01
Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach comprising Empirical Mode Decomposition, the Cuckoo Search Algorithm (CSA), and a Wavelet Neural Network (WNN) is proposed. This approach constructs a more valid forecasting structure and produces more stable results than traditional Artificial Neural Network (ANN) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, half-hourly power load data from New South Wales, Australia are used as a case study. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load, and it can be an effective tool in planning and dispatch for smart grids.
Directory of Open Access Journals (Sweden)
Xue Mei
2014-01-01
Full Text Available Multimodality image registration and fusion have complementary significance for guiding dental implant surgery. To meet the need to register images of different resolutions, we develop an improved Iterative Closest Point (ICP) algorithm that focuses on the registration of Cone Beam Computed Tomography (CBCT) images and high-resolution blue-light scanner images. The proposed algorithm includes two major phases, coarse and precise registration. First, to reduce the matching interference of human subjective factors, we extract feature points based on curvature characteristics and use an improved three-point translational transformation method to realize coarse registration. Then, the feature point set and reference point set, obtained by the initial registration transformation, are processed in the precise registration step. Even with unsatisfactory initial values, this two-step registration method can guarantee global convergence and convergence precision. Experimental results demonstrate that the method successfully registers the Cone Beam CT dental model and the blue-light scanner model with high accuracy, so it can provide a research foundation for the development of related software for the registration of multimodality medical data.
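The basic ICP loop that the paper improves upon alternates nearest-neighbour matching with a closed-form (SVD/Procrustes) rigid alignment. A 2-D sketch on synthetic data; the paper's curvature-based feature extraction and coarse-registration steps are omitted:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Basic ICP: alternate nearest-neighbour matching with rigid re-alignment."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Synthetic check: a point set and a copy rotated 5 degrees about its centroid.
rng = np.random.default_rng(3)
model = rng.uniform(0.0, 1.0, (40, 2))
th = np.deg2rad(5.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
c = model.mean(axis=0)
scene = (model - c) @ R_true.T + c + np.array([0.02, -0.01])
aligned = icp(model, scene)
```

Plain ICP only converges locally, which is why the paper precedes it with a coarse registration step to supply a reasonable initial pose.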
Multi-Working Modes Product-Color Planning Based on Evolutionary Algorithms and Swarm Intelligence
Directory of Open Access Journals (Sweden)
Man Ding
2010-01-01
Full Text Available In order to assist the designer in color planning during product development, a novel synthesized evaluation method is presented to evaluate color-combination schemes of multi-working-mode products (MMPs). The proposed evaluation method considers color-combination images in different working modes as evaluation attributes, to which corresponding weights are assigned for synthesized evaluation. A mathematical model is then developed to search for optimal color-combination schemes of an MMP based on the proposed evaluation method and two powerful search techniques known as Evolutionary Algorithms (EAs) and Swarm Intelligence (SI). In the experiments, we present a comparative study of two EAs, namely the Genetic Algorithm (GA) and Differential Evolution (DE), and one SI algorithm, namely Particle Swarm Optimization (PSO), on searching for color-combination schemes for the MMP problem. All of the algorithms are evaluated against a test scenario, namely an arm-type aerial work platform with two working modes. The results show that DE obtains superior solutions to the other two algorithms for the color-combination scheme search problem in terms of optimization accuracy and computational robustness. Simulation results demonstrate that the proposed method is feasible and efficient.
Directory of Open Access Journals (Sweden)
Boyang Qu
2017-12-01
Full Text Available The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power-system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed that accounts for EVs and the uncertainty of wind power. Total fuel cost and pollutant emission are the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are the decision variables. The stochastic wind power is modeled by a Weibull probability distribution function. Under the premise of meeting system energy requirements and users' travel demand, the charging and discharging behavior of the EVs is managed dynamically. Moreover, we propose a two-step dynamic constraint-processing strategy for the decision variables based on a penalty function and, on this basis, improve the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D). The proposed model and approach are verified on a 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.
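The penalty-function treatment of the power-balance constraint can be sketched as below; all fuel, emission and penalty coefficients are made-up placeholders rather than the paper's 10-generator data.

```python
# Penalized dispatch objective: quadratic fuel cost plus emission,
# with a quadratic penalty for any supply/demand mismatch.
def dispatch_objective(p_gen, p_v2g, wind, demand, penalty=1e3):
    fuel = sum(0.01 * p ** 2 + 2.0 * p + 10.0 for p in p_gen)   # fuel cost
    emission = sum(0.005 * p ** 2 + 0.5 * p for p in p_gen)      # emission
    balance = sum(p_gen) + wind + p_v2g - demand                 # supply - load
    # any violation of the power balance is heavily penalized
    return fuel + emission + penalty * balance ** 2
```

A population-based optimizer such as MOEA/D then simply sees infeasible dispatches as very costly, steering the search back to the feasible region.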
Hybrid local search algorithm via evolutionary avalanches for spin glass based portfolio selection
Directory of Open Access Journals (Sweden)
Majid Vafaei Jahan
2012-07-01
As shown in this paper, this strategy can lead to a faster rate of convergence and better performance than conventional SA and EO algorithms. The resulting algorithm is then used to solve the multi-objective portfolio selection problem, which is non-deterministic polynomial-complete (NPC). This is confirmed by test results from five of the world's major stock markets, a reliability test and a phase-transition diagram; finally, the convergence speed is compared to that of other heuristic methods such as Neural Networks (NN), Tabu Search (TS), and Genetic Algorithms (GA).
Effectively Tackling Reinsurance Problems by Using Evolutionary and Swarm Intelligence Algorithms
Directory of Open Access Journals (Sweden)
Sancho Salcedo-Sanz
2014-04-01
Full Text Available This paper is focused on solving different hard optimization problems that arise in the field of insurance and, more specifically, in reinsurance. In this area, the complexity of the models and assumptions considered in the definition of the reinsurance rules and conditions produces hard black-box optimization problems (problems in which the objective function has no algebraic expression but is instead the output of a system, usually a computer program), which must be solved in order to obtain the optimal output of the reinsurance. Traditional optimization approaches cannot be applied to this kind of mathematical problem, so new computational paradigms must be used. In this paper, we show the performance of two evolutionary and swarm intelligence techniques, evolutionary programming and particle swarm optimization. We provide an analysis of three black-box optimization problems in reinsurance, in which the proposed approaches exhibit excellent behavior, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.
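A black-box objective is one the optimizer can only evaluate, never differentiate or inspect. A minimal (1+1) evolutionary-programming-style sketch follows, with an invented quadratic standing in for a reinsurance cost simulator:

```python
import random

def black_box_cost(retention):
    # placeholder for a simulator returning expected cost
    # for a given retention level; assumed, not the paper's model
    return (retention - 2.5) ** 2 + 1.0

def one_plus_one_es(f, x0=0.0, sigma=1.0, iters=300, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.gauss(0.0, sigma)   # Gaussian mutation
        fy = f(y)
        if fy <= fx:                     # keep the better of parent/child
            x, fx = y, fy
        sigma *= 0.99                    # slowly shrink the mutation step
    return x, fx
```

The point of the sketch is that only calls to `f` are needed, which is exactly the situation when the objective is a reinsurance computer program.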
Energy Technology Data Exchange (ETDEWEB)
Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)
2017-08-15
To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements of subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both the FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and of their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and of their solid components were significantly higher using MBIR (p < 0.05), with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%), and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%) and 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs differed significantly between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)
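The Bland-Altman comparison used here reduces to a mean difference and 1.96-SD limits of agreement between paired measurements; a minimal sketch on invented data:

```python
# Bland-Altman agreement between two measurement methods
# (e.g. FBP vs MBIR volume estimates). Sample values are invented.
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # sample standard deviation of the differences
    var = sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)
    sd = var ** 0.5
    # 95% limits of agreement: mean difference +/- 1.96 SD
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
```

Two methods "agree" in the Bland-Altman sense when these limits are narrow enough to be clinically acceptable, which is the criterion the abstract applies to FBP versus MBIR.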
Holdsworth, Clay; Kim, Minsun; Liao, Jay; Phillips, Mark
2012-04-01
To evaluate how a more flexible and thorough multiobjective search of feasible IMRT plans affects performance in IMRT optimization. A multiobjective evolutionary algorithm (MOEA) was used as a tool to investigate how expanding the search space to include a wider range of penalty functions affects the quality of the set of IMRT plans produced. The MOEA uses a population of IMRT plans to generate new IMRT plans through deterministic minimization of recombined penalty functions that are weighted sums of multiple, tissue-specific objective functions. The quality of the generated plans is judged by an independent set of nonconvex, clinically relevant decision criteria, and all dominated plans are eliminated. As this process repeats, better plans are produced and the population of IMRT plans approaches the Pareto front. Three different approaches were used to explore the effects of expanding the search space. First, the evolutionary algorithm used genetic optimization principles to search by simultaneously optimizing both the weights and the tissue-specific dose parameters in the penalty functions. Second, penalty function parameters were individually optimized for each voxel in all organs at risk (OARs) in the MOEA. Finally, a heuristic voxel-specific improvement (VSI) algorithm, applicable to any IMRT plan, was developed that incrementally improves voxel-specific penalty function parameters for all structures (OARs and targets). The different approaches were compared using the concept of domination comparison applied to the sets of plans obtained by multiobjective optimization. MOEA optimizations that simultaneously searched both importance weights and dose parameters generated sets of IMRT plans that were superior to the sets of plans produced when either type of parameter was fixed, for four example prostate plans. The amount of improvement increased with greater overlap between OARs and targets. Allowing the MOEA to search for voxel-specific penalty functions
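The elimination of dominated plans after each generation is, at its core, a Pareto filter over the decision criteria. A sketch, assuming all criteria are to be minimized and using invented plan tuples:

```python
# Pareto filtering of a plan population: a plan survives only if
# no other plan is at least as good on every criterion and strictly
# better on at least one.
def dominates(p, q):
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def pareto_front(plans):
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q != p)]
```

Repeating generate-then-filter is what drives the surviving population toward the Pareto front described above.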
Arnaout, A.; Fruhwirth, R.; Winter, M.; Esmael, B.; Thonhauser, G.
2012-04-01
The use of neural networks and advanced machine learning techniques in the oil & gas industry is a growing market trend. Especially in drilling oil & gas wells, predicting and monitoring different drilling parameters is essential to prevent serious problems such as "Kick", "Lost Circulation" or "Stuck Pipe", among others. The hookload represents the weight of the drill string at the crane hook and is one of the most important parameters. During drilling, the parameter "Weight on Bit" is controlled by the driller, and the hookload is the only measure of how much weight on bit is applied to the bit to generate the hole. Any change in weight on bit is directly reflected in the hookload. Furthermore, any unwanted contact between the drill string and the wellbore - potentially leading to a stuck-pipe problem - appears directly in the hookload measurements. Therefore, comparing the measured to the predicted hookload not only gives a clear idea of what is happening down-hole, it also enables the prediction of a number of important events that may cause problems in the borehole and result, in some - fortunately rare - cases, in catastrophes such as blow-outs. Heuristic models using highly sophisticated neural networks were designed for hookload prediction; the training data sets were prepared in cooperation with drilling experts. Sensor measurements as well as a set of derived feature channels were used as input to the models. The contents of the final data set can be separated into (1) features based on rig operation states, (2) real-time sensor features and (3) features based on physics. A combination of a novel neural network architecture - the Completely Connected Perceptron - and parallel learning techniques that avoid trapping in local error minima was used for building the models. In addition, automatic network-growing algorithms and highly sophisticated stopping criteria offer robust and efficient estimation of the
Mochnacki, Bohdan; Majchrzak, Ewa; Paruch, Marek
2018-01-01
In the paper the soft-tissue freezing process is considered. The tissue sub-domain is subjected to the action of a cylindrical cryoprobe. The thermal processes proceeding in the domain are described by the dual-phase lag equation (DPLE), supplemented by appropriate boundary and initial conditions. The DPLE results from a generalization of the Fourier law in which two lag times are introduced (the relaxation and thermalization times). The aim of the research is to identify these parameters on the basis of cooling curves measured at a set of points selected from the tissue domain. Evolutionary algorithms are used to solve the problem. The paper contains the mathematical model of the tissue freezing process, brief information on the numerical solution of the basic problem, a description of the inverse problem solution, and the results of computations.
Directory of Open Access Journals (Sweden)
Marek A. Jakubowski
2014-11-01
Full Text Available We begin with a short description of connectivism, a new theory of learning in the digital age. It integrates principles explored by the theories of chaos, networks, complexity and self-organization. Next, we briefly describe new visual solutions for the teaching of writing, the so-called multimodal literacy 5–11. We define and describe the notions of multimodal text and of the original NOS (non-optimum systems) methodology as a basis for new methods of visual solutions in class and applications of audiovisual texts. In particular, we would like to emphasize the tremendous usefulness of the evolutionary algorithms VEGA and NSGA as tools for the optimal planning of multimodal composition in teaching texts. Finally, we give some examples of didactic texts for classrooms, which provide a deep insight into the learning skills and tasks needed in the Internet age.
Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo
2013-12-01
One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as possible, the entire system of intra-uterine growth available to us, to assess the variables of interest, to discriminate those effectively related to appropriate or restricted intrauterine growth, and to achieve an understanding of the system in these two conditions. Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led us to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their relationships, and offered a critical assessment of previously performed studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
Oraei Zare, S.; Saghafian, B.; Shamsai, A.; Nazif, S.
2012-01-01
Urban development affects the quantity and quality of urban floods. Flood management generally includes planning and management activities that reduce the harmful effects of floods on the people, environment and economy of a region. In recent years, the concept of Best Management Practices (BMPs) has been widely used for urban flood control from both quality and quantity perspectives. In this paper, three objective functions relating to runoff quality (the BOD5 and TSS parameters), runoff quantity (the runoff volume produced in each sub-basin) and expense (the construction and maintenance costs of BMPs) were employed in the optimization. To find optimal solutions, the MOPSO and NSGA-II optimization methods were coupled with the SWMM urban runoff simulation model. In the proposed NSGA-II structure, a continuous representation and intermediate crossover were used because they improve the efficiency of the optimization model. To compare the performance of the two optimization algorithms, a number of statistical indicators were computed for the last generation of solutions. Comparing the Pareto solutions produced by each of the optimization algorithms indicated that the NSGA-II solutions were more optimal. Moreover, the standard deviation of solutions in the last generation showed no significant difference from that of MOPSO.
Directory of Open Access Journals (Sweden)
Rajesh Kumar
2016-06-01
Full Text Available A Brayton heat engine model is developed in the MATLAB Simulink environment, and thermodynamic optimization based on finite-time thermodynamic analysis with multiple criteria is implemented. The proposed work investigates the optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto-optimal frontier between the triple and dual objectives is obtained, and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision-making methods. The triple-objective evolutionary approach applied to the proposed model gives power output, thermal efficiency and ecological function of (53.89 kW, 0.1611, −142 kW), which are 29.78%, 25.86% and 21.13% lower than those of the reversible system. Furthermore, the present study shows graphically the effect of various heat-capacitance rates and component efficiencies on the three objectives. Finally, for error investigation, the average and maximum errors of the obtained results are computed.
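Of the four decision-making methods named, TOPSIS is easy to sketch: it normalizes the criteria, then ranks each Pareto point by its relative closeness to the ideal and anti-ideal points. The rows and benefit flags below are invented, not the paper's Pareto front.

```python
# TOPSIS selection of a best compromise point from a Pareto front.
# rows: one tuple of criterion values per candidate;
# benefit[j] is True if criterion j is to be maximized.
def topsis(rows, benefit):
    # vector-normalize each criterion column
    norms = [sum(r[j] ** 2 for r in rows) ** 0.5 for j in range(len(rows[0]))]
    m = [[r[j] / norms[j] for j in range(len(r))] for r in rows]
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(zip(*m))]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(zip(*m))]
    def dist(r, ref):
        return sum((a - b) ** 2 for a, b in zip(r, ref)) ** 0.5
    # relative closeness: 1 means at the ideal, 0 means at the anti-ideal
    scores = [dist(r, worst) / (dist(r, worst) + dist(r, ideal)) for r in m]
    return max(range(len(rows)), key=scores.__getitem__)
```

In the test, the first candidate is best on both criteria, so TOPSIS must select index 0.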
Directory of Open Access Journals (Sweden)
Ping Jiang
2015-01-01
Full Text Available To mitigate growing anxiety over the depletion of fossil fuels and destruction of the ecosystem, wind power, as the most common renewable energy, is a flourishing industry. Accurate wind speed forecasting is therefore critical for the efficient operation of wind farms. However, affected by complicated meteorological factors and volatile physical properties, wind speed forecasting is difficult and challenging. Building on previous research efforts, this paper proposes an intelligent hybrid model in an attempt to tackle this difficult task. First, the wavelet transform is used to extract the main components of the original wind speed data while eliminating noise. To make better use of the back-propagation artificial neural network, its initial parameters are replaced with optimized ones obtained by the artificial fish swarm algorithm (AFSA), and the resulting combined model is employed for wind speed forecasting. Data collected from four different observation sites are used to test the validity of the proposed model. Comprehensive comparisons with traditional models clearly indicate that the proposed hybrid model outperforms the traditional single models.
Directory of Open Access Journals (Sweden)
Xuejiao Ma
2016-08-01
Full Text Available Big data mining, analysis, and forecasting play vital roles in modern economic and industrial fields, especially in the energy system. Inaccurate forecasting may waste scarce energy or cause electricity shortages. However, forecasting in the energy system has proven to be a challenging task due to various unstable factors, such as high fluctuations, autocorrelation and stochastic volatility. Forecasting time series data with hybrid models is a feasible alternative to conventional single-model forecasting approaches. This paper develops a group of hybrid models that address the problems above by eliminating the noise in the original data sequence and optimizing the parameters of a back-propagation neural network. One of the contributions of this paper is to integrate existing algorithms and models, which jointly advance the present state of the art. The results of comparative studies demonstrate that the proposed hybrid models not only approximate the actual values satisfactorily but can also be an effective tool in the planning and dispatching of smart grids.
Directory of Open Access Journals (Sweden)
Zongxi Qu
2016-01-01
Full Text Available As a type of clean and renewable energy, wind power has increasingly captured the world's attention. Reliable and precise wind speed prediction is vital for wind power generation systems, so a more effective and precise prediction model is needed in the field of wind speed forecasting. Most previous forecasting models can adapt to various wind speed series; however, they ignore the importance of data preprocessing and model parameter optimization. In view of their importance, a novel hybrid ensemble learning paradigm is proposed. In this model, the original wind speed data is first divided into a finite set of signal components by ensemble empirical mode decomposition; each signal is then predicted by several artificial intelligence models whose parameters are optimized by the fruit fly optimization algorithm, and the final prediction values are obtained by reconstructing the refined series. To estimate the forecasting ability of the proposed model, 15-min wind speed data from wind farms in the coastal areas of China were used as a case study. The empirical results show that the proposed hybrid model is superior to some existing traditional forecasting models in forecast performance.
Hoda, M Raschid; Grimm, Michael; Laufer, Guenther
2005-11-01
Artificial intelligence (AI)-based computation methods have recently been shown to be applicable in several clinical diagnostic fields. The purpose of this study was to introduce a novel AI method, evolutionary algorithms (EAs), to clinical prediction. The technique was used to create a pharmacokinetic model for predicting whole-blood levels of cyclosporine (CyA). One hundred one adult cardiac transplant recipients were randomly selected for inclusion in this study. All patients had been receiving oral cyclosporine twice daily, and trough levels in whole blood were measured by monoclonal-specific radioimmunoassay. An EA-based software tool was trained with pre- and post-operative variables from 64 patients, and the results were then tested on data sets from 37 patients. The mean predicted CyA level over the measurement period for the test data was 175 +/- 27 ng/ml, which compared well with the mean observed CyA level of 180 +/- 31 ng/ml. The system bias, expressed as the mean percent error (MPE), was 7.1 +/- 5.4% (0.1% to 26.7%) for the training data and 8.0 +/- 6.7% (0.8% to 28.8%) for the test data. The prediction accuracy ranged from 80% to 90%. The correlation coefficient between predicted and observed CyA concentrations for the training data was 0.93 (p cyclosporine whole blood levels in heart transplant recipients. This and other similar technologies should be considered as future clinical tools to reduce costs in our health systems.
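The mean percent error used above to express system bias can be computed as follows; the predicted/observed pairs in the test are invented, not the study's data.

```python
# Mean percent error (MPE) of predictions against observations,
# expressed as a percentage of each observed value.
def mean_percent_error(predicted, observed):
    errs = [abs(p - o) / o * 100.0 for p, o in zip(predicted, observed)]
    return sum(errs) / len(errs)
```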
Bator, Marcin; Nieniewski, Mariusz
2012-02-01
Optimization of the brightness distribution in the template used for detecting cancerous masses in mammograms by means of the correlation coefficient is presented. The optimization is performed by an evolutionary algorithm using an auxiliary mass classifier. Brightness along the radius of the circularly symmetric template is coded indirectly by its second derivative. The fitness function is defined as the area under the curve (AUC) of the receiver operating characteristic (ROC) of the mass classifier. The ROC and AUC are obtained for a teaching set of regions of interest (ROIs) for which it is known whether a ROI is true-positive (TP) or false-positive (FP). The teaching set is obtained by running the mass detector using a template with a predetermined brightness. Subsequently, the evolutionary algorithm optimizes the template by classifying masses in the teaching set. The optimal template (OT) can then be used to detect masses in mammograms with unknown ROIs. The approach was tested on the training and testing sets of the Digital Database for Screening Mammography (DDSM). The free-response receiver operating characteristic (FROC) obtained with the new mass detector appears superior to the FROC for the hemispherical template (HT). Exemplary results are as follows: for the DDSM training set, the true-positive fraction (TPF) = 0.82 for the OT and 0.79 for the HT; for the testing set, TPF = 0.79 for the OT and 0.72 for the HT. These values were obtained for disease cases, with false-positives per image (FPI) = 2.
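The AUC fitness function can be computed directly from classifier scores via the Mann-Whitney pair-counting identity; the TP/FP scores below are invented examples.

```python
# AUC from raw classifier scores: the fraction of (positive, negative)
# pairs in which the positive example scores higher, ties counting 1/2.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

Using a threshold-free statistic like this as the fitness means the evolutionary algorithm rewards templates that rank TP ROIs above FP ROIs overall, rather than tuning to one operating point.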
Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad
2017-06-01
Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies, such as chemotherapy and radiation, are highly expensive, error-prone and often ineffective, and they induce severe side effects on human cells. Given the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model for identifying anticancer peptides is desirable. In this paper, an evolutionary genetic-algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, protein sequences are formulated using three different discrete feature-representation methods: amphiphilic pseudo amino acid composition, g-gap dipeptide composition, and reduced amino acid alphabet composition. The extracted feature spaces are investigated separately and then merged to exhibit the significance of hybridization. In addition, the predictions of the individual classifiers are combined using an optimized genetic algorithm and a simple-majority technique in order to enhance the true classification rate. The genetic-algorithm-based ensemble is observed to outperform both the individual classifiers and the simple-majority-voting ensemble, and it performs best on the hybrid feature space, with an accuracy of 96.45%. Compared with existing techniques, the 'iACP-GAEnsC' model achieves remarkable improvement in various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might become a leading tool in the fields of drug design and proteomics. Copyright © 2017 Elsevier B.V. All rights reserved.
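The two vote-combination schemes compared above can be sketched as follows; the GA-optimized variant is represented here only by a generic weighted vote, with the weights standing in for evolved coefficients, and all labels are invented.

```python
from collections import Counter

# predictions: one list of per-sample labels per base classifier.
def majority_vote(predictions):
    combined = []
    for sample_votes in zip(*predictions):
        combined.append(Counter(sample_votes).most_common(1)[0][0])
    return combined

# Weighted vote: each classifier's vote counts with its own weight;
# a GA would search these weights to maximize ensemble accuracy.
def weighted_vote(predictions, weights):
    combined = []
    for sample_votes in zip(*predictions):
        tally = {}
        for label, w in zip(sample_votes, weights):
            tally[label] = tally.get(label, 0.0) + w
        combined.append(max(tally, key=tally.get))
    return combined
```

The test shows how unequal weights can overturn a simple majority, which is exactly the extra freedom the GA-based ensemble exploits.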
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art in describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done with a model observer that yields a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of this figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although computing the MTF poses no problem for the traditional filtered back-projection (FBP) algorithm, this is not the case for iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to express the system resolution accurately even with non-linear algorithms, we decided to tune the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed to obtain the TTF. As expected, the first results showed a dependency of the TTF on image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
Couceiro, Micael
2015-01-01
This book examines the bottom-up applicability of swarm intelligence to solving multiple problems, such as curve fitting, image segmentation, and swarm robotics. It compares the capabilities of some of the better-known bio-inspired optimization approaches, especially Particle Swarm Optimization (PSO), Darwinian Particle Swarm Optimization (DPSO) and the recently proposed Fractional Order Darwinian Particle Swarm Optimization (FODPSO), and comprehensively discusses their advantages and disadvantages. Further, it demonstrates the superiority and key advantages of using the FODPSO algorithm, suc
Directory of Open Access Journals (Sweden)
Mahesh S. Narkhede
2015-01-01
Full Text Available An attempt has been made in this article to compare the performance of two multiobjective evolutionary algorithms, ev-MOGA and GODLIKE. The performance of both is evaluated on risk-based optimal power scheduling of a virtual power plant. The risk-based scheduling is posed as a conflicting bi-objective optimization problem with an increased number of durations of the day. Both algorithms are elaborated in detail, and results based on the performance analysis are depicted at the end.
Directory of Open Access Journals (Sweden)
G.Subashini
2010-07-01
Full Text Available To meet increasing computational demands, geographically distributed resources need to be logically coupled to make them work as a unified resource. In analyzing the performance of such distributed heterogeneous computing systems, scheduling a set of tasks onto the available resources for execution is highly important. Task scheduling being an NP-complete problem, the use of metaheuristics is more appropriate for obtaining optimal solutions. Schedules thus obtained can be evaluated using several criteria that may conflict with one another, which requires a multi-objective problem formulation. This paper investigates the application of an elitist Nondominated Sorting Genetic Algorithm (NSGA-II) to efficiently schedule a set of independent tasks in a heterogeneous distributed computing system. The objectives considered are minimizing makespan and average flowtime simultaneously. The implementations of the NSGA-II algorithm and a Weighted-Sum Genetic Algorithm (WSGA) have been tested on benchmark instances for distributed heterogeneous systems. As NSGA-II generates a set of Pareto-optimal solutions, a fuzzy membership-value assignment method is employed to choose the best compromise solution from the obtained Pareto set and thereby verify the effectiveness of NSGA-II over WSGA.
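A common form of the fuzzy membership-value assignment for picking a best compromise solution from a Pareto set can be sketched as below, assuming both objectives (e.g. makespan and mean flowtime) are minimized; the front values are invented.

```python
# Fuzzy compromise selection: each solution gets, per objective, a
# membership in [0, 1] that is 1 at the objective's best value on the
# front and 0 at its worst; the solution with the largest normalized
# membership sum is the best compromise.
def best_compromise(front):
    k = len(front[0])
    f_min = [min(sol[j] for sol in front) for j in range(k)]
    f_max = [max(sol[j] for sol in front) for j in range(k)]
    def membership(sol):
        total = 0.0
        for j in range(k):
            if f_max[j] == f_min[j]:
                total += 1.0
            else:
                total += (f_max[j] - sol[j]) / (f_max[j] - f_min[j])
        return total
    norm = sum(membership(s) for s in front)
    scores = [membership(s) / norm for s in front]
    return max(range(len(front)), key=scores.__getitem__)
```

In the test, the middle solution is a balanced trade-off between the two extremes, so it is selected.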
Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.
Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana
2017-11-01
RGB-D sensors can collect postural data in an automated way. However, applying these devices in real work environments requires overcoming problems such as lack of accuracy or occlusion of body parts. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors capture workers' movements when they reach for objects on workbenches. The collected data are then used to optimize the workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that the typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that their combination with intelligent algorithms can automate the layout design process. The procedure described can be used to automatically suggest new layouts when workers or production processes change, to adapt layouts to specific workers based on how they do their tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
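A toy version of the layout optimization can be sketched as a permutation genetic algorithm that places frequently reached items in the nearest workbench slots; the reach frequencies, slot distances and GA settings below are all invented stand-ins for the sensor-derived data and ergonomic criteria of the paper.

```python
import random

# order[i] = index of the item assigned to slot i; cost weights each
# slot's distance by how often its item is reached for.
def layout_cost(order, freq, slot_dist):
    return sum(freq[item] * slot_dist[i] for i, item in enumerate(order))

def ga_layout(freq, slot_dist, pop=30, gens=80, seed=2):
    rng = random.Random(seed)
    n = len(freq)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        # elitist truncation selection: keep the cheaper half
        population.sort(key=lambda o: layout_cost(o, freq, slot_dist))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(n), 2)    # swap mutation keeps a
            child[i], child[j] = child[j], child[i]  # valid permutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: layout_cost(o, freq, slot_dist))
```

By the rearrangement inequality, the optimum of the toy instance pairs frequencies in descending order with distances in ascending order, giving cost 5·1 + 3·2 + 2·3 + 1·4 = 21.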
Directory of Open Access Journals (Sweden)
Wiemels Joseph
2008-09-01
Full Text Available Abstract Background Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in DNA sequence. One of the most commonly studied epigenetic alterations is cytosine methylation, which is a well recognized mechanism of epigenetic gene silencing and often occurs at tumor suppressor gene loci in human cancer. Arrays are now being used to study DNA methylation at a large number of loci; for example, the Illumina GoldenGate platform assesses DNA methylation at 1505 loci associated with over 800 cancer-related genes. Model-based cluster analysis is often used to identify DNA methylation subgroups in data, but it is unclear how to cluster DNA methylation data from arrays in a scalable and reliable manner. Results We propose a novel model-based recursive-partitioning algorithm to navigate clusters in a beta mixture model. We present simulations that show that the method is more reliable than competing nonparametric clustering approaches, and is at least as reliable as conventional mixture model methods. We also show that our proposed method is more computationally efficient than conventional mixture model approaches. We demonstrate our method on the normal tissue samples and show that the clusters are associated with tissue type as well as age. Conclusion Our proposed recursively-partitioned mixture model is an effective and computationally efficient method for clustering DNA methylation data.
Kligerman, Seth; Lahiji, Kian; Weihe, Elizabeth; Lin, Cheng Tin; Terpenning, Silanath; Jeudy, Jean; Frazier, Annie; Pugatch, Robert; Galvin, Jeffrey R; Mittal, Deepika; Kothari, Kunal; White, Charles S
2015-01-01
The purpose of the study was to determine whether a model-based iterative reconstruction (MBIR) technique improves diagnostic confidence and detection of pulmonary embolism (PE) compared with hybrid iterative reconstruction (HIR) and filtered back projection (FBP) reconstructions in patients undergoing computed tomography pulmonary angiography. The study was approved by our institutional review board. Fifty patients underwent computed tomography pulmonary angiography at 100 kV using standard departmental protocols. Twenty-two of 50 patients had studies positive for PE. All 50 studies were reconstructed using FBP, HIR, and MBIR. After image randomization, 5 thoracic radiologists and 2 thoracic radiology fellows graded each study on a scale of 1 (very poor) to 5 (ideal) in 4 subjective categories: diagnostic confidence, noise, pulmonary artery enhancement, and plastic appearance. Readers assessed each study for the presence of PE. Parametric and nonparametric data were analyzed with repeated measures and Friedman analysis of variance, respectively. For the 154 positive studies (7 readers × 22 positive studies), pooled sensitivity for detection of PE was 76% (117/154), 78.6% (121/154), and 82.5% (127/154) using FBP, HIR, and MBIR, respectively. PE detection was significantly higher using MBIR compared with FBP (P = 0.016) and HIR (P = 0.046). Because of a nonsignificant increase in false-positive studies using HIR and MBIR, accuracy with MBIR (88.6%), HIR (87.1%), and FBP (87.7%) was similar. Compared with FBP, MBIR led to a significant subjective increase in diagnostic confidence, noise, and enhancement in 6/7, 6/7, and 7/7 readers, respectively. Compared with HIR, MBIR led to a significant subjective increase in diagnostic confidence, noise, and enhancement in 5/7, 5/7, and 7/7 readers, respectively. MBIR led to a subjective increase in plastic appearance in all 7 readers compared with both FBP and HIR. MBIR led to a significant increase in PE detection compared with FBP and HIR.
Osaba, E; Carballedo, R; Diaz, F; Onieva, E; de la Iglesia, I; Perallos, A
2014-01-01
Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.
Karkra, Rashmi; Kumar, Prashant; Bansod, Baban K. S.; Bagchi, Sudeshna; Sharma, Pooja; Krishna, C. Rama
2017-11-01
Access to potable water for the common people is one of the most challenging tasks in the present era. Contamination of drinking water has become a serious problem due to various anthropogenic and geogenic events. The paper demonstrates the application of evolutionary algorithms, viz., particle swarm optimization and the genetic algorithm, to 24 water samples containing eight different heavy metal ions (Cd, Cu, Co, Pb, Zn, As, Cr and Ni) for the optimal estimation of electrode and frequency to classify the heavy metal ions. The work has been carried out on multi-variate data, viz., single-electrode multi-frequency, single-frequency multi-electrode and multi-frequency multi-electrode water samples. The electrodes used are platinum, gold, silver nanoparticles and glassy carbon electrodes. The various hazardous metal ions present in the water samples have been optimally classified and validated by application of the Davies-Bouldin index. Such studies are useful in the segregation of hazardous heavy metal ions found in water resources, thereby quantifying the degree of water quality.
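The Davies-Bouldin index used for validation rewards compact, well-separated clusters. A minimal, self-contained sketch of the index (Euclidean version; the function and variable names are illustrative, not taken from the paper's code):

```python
import math

def davies_bouldin(points, labels):
    # Group points by cluster label
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    # Per-cluster centroid and mean intra-cluster distance (scatter)
    cents, scat = {}, {}
    for l, pts in clusters.items():
        cents[l] = [sum(c) / len(pts) for c in zip(*pts)]
        scat[l] = sum(math.dist(p, cents[l]) for p in pts) / len(pts)
    # DB index: average over clusters of the worst similarity ratio
    keys = list(clusters)
    worst = [max((scat[i] + scat[j]) / math.dist(cents[i], cents[j])
                 for j in keys if j != i) for i in keys]
    return sum(worst) / len(keys)

# Two tight, well-separated clusters give a low index value
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
db = davies_bouldin(pts, [0, 0, 0, 1, 1, 1])
```

Lower values indicate better-separated clusters, which is why the index serves as a validation score for the classified metal-ion groups.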
Directory of Open Access Journals (Sweden)
E. Osaba
2014-01-01
Full Text Available Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.
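A blind crossover operator recombines parents using no problem-specific knowledge. One classic example for permutation-encoded combinatorial problems is order crossover (OX); this sketch is illustrative and not taken from the paper:

```python
import random

def order_crossover(p1, p2, rng):
    # OX: copy a random slice from parent 1, then fill the remaining
    # positions with the missing genes in parent-2 order
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    used = set(p1[i:j + 1])
    fill = iter(g for g in p2 if g not in used)
    return [g if g is not None else next(fill) for g in child]

rng = random.Random(7)
child = order_crossover(list(range(8)), [7, 6, 5, 4, 3, 2, 1, 0], rng)
```

Whatever slice is chosen, the child is always a valid permutation, which is exactly the property that lets such operators be applied "blindly" across different combinatorial problems.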
Directory of Open Access Journals (Sweden)
Yanxia Shen
2018-01-01
Full Text Available The intermittency of renewable energy increases the uncertainty of the power system, so it is necessary to predict short-term wind power so that the electrical power system can operate reliably and safely. Unlike traditional point forecasting, the purpose of this study is to quantify the potential uncertainties of wind power and to construct prediction intervals (PIs) and prediction models using a wavelet neural network (WNN). Lower upper bound estimation (LUBE) of the PIs is achieved by minimizing a multi-objective function covering both interval width and coverage probabilities. Considering the influence of the points outside the PIs, in order to shorten the width of the PIs without compromising coverage probability, a new, improved, multi-objective artificial bee colony (MOABC) algorithm combining multi-objective evolutionary knowledge, called EKMOABC, is proposed for the optimization of the forecasting model. In this paper, some comparative simulations are carried out, and the results show that the proposed model and algorithm can achieve higher-quality PIs for wind power forecasting. Taking into account the intermittency of renewable energy, such a type of wind power forecast can provide a more reliable reference for the dispatching of the power system.
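The LUBE objective trades interval width against coverage. A sketch of the two standard interval metrics, plus a hypothetical coverage-width style cost (the paper's EKMOABC objective may differ in form; the constants `mu` and `eta` here are assumed values):

```python
import math

def picp(lower, upper, actual):
    # Prediction interval coverage probability
    hits = sum(1 for l, u, y in zip(lower, upper, actual) if l <= y <= u)
    return hits / len(actual)

def pinaw(lower, upper, actual):
    # Prediction interval normalized average width
    r = max(actual) - min(actual)
    return sum(u - l for l, u in zip(lower, upper)) / (len(actual) * r)

def cwc(lower, upper, actual, mu=0.9, eta=50.0):
    # Coverage-width style cost: width is inflated exponentially
    # whenever coverage falls below the nominal level mu
    p = picp(lower, upper, actual)
    penalty = math.exp(-eta * (p - mu)) if p < mu else 0.0
    return pinaw(lower, upper, actual) * (1.0 + penalty)
```

An optimizer minimizing such a cost is pushed to narrow the intervals only as long as the coverage probability stays at or above the nominal level.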
Directory of Open Access Journals (Sweden)
Mengjun Ming
2017-05-01
Full Text Available Due to the scarcity of conventional energy resources and the greenhouse effect, renewable energies have gained more attention. This paper proposes methods for the multi-objective optimal design of a hybrid renewable energy system (HRES) in both isolated-island and grid-connected modes. In each mode, the optimal design aims to find suitable configurations of photovoltaic (PV) panels, wind turbines, batteries and diesel generators in the HRES such that the system cost and the fuel emission are minimized, and the system reliability/renewable ability (corresponding to different modes) is maximized. To effectively solve this multi-objective problem (MOP), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) using the localized penalty-based boundary intersection (LPBI) method is proposed. The algorithm, denoted as MOEA/D-LPBI, is demonstrated to outperform its competitors on the HRES model as well as a set of benchmarks. Moreover, it effectively obtains a good approximation of Pareto-optimal HRES configurations. By further considering a decision maker’s preference, the most satisfactory configuration of the HRES can be identified.
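MOEA/D scalarizes the multi-objective problem along a set of weight vectors. A sketch of the standard penalty-based boundary intersection (PBI) function that LPBI localizes (this is the textbook form with the usual penalty parameter `theta`, not the paper's localized variant):

```python
import math

def pbi(f, weight, ideal, theta=5.0):
    # Penalty-based boundary intersection scalarization:
    # d1 = projection of (f - ideal) onto the weight direction,
    # d2 = perpendicular distance from that direction
    diff = [fi - zi for fi, zi in zip(f, ideal)]
    wnorm = math.sqrt(sum(w * w for w in weight))
    d1 = sum(d * w for d, w in zip(diff, weight)) / wnorm
    d2 = math.sqrt(sum((d - d1 * w / wnorm) ** 2
                       for d, w in zip(diff, weight)))
    return d1 + theta * d2
```

Minimizing `pbi` pulls a solution towards the ideal point along its assigned weight direction (`d1`) while the `theta * d2` term penalizes drifting away from that direction, which keeps the subproblems spread across the Pareto front.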
Directory of Open Access Journals (Sweden)
Bi Liang
2017-01-01
Full Text Available The chain supermarket has become a major part of China’s retail industry, and optimizing a chain supermarket’s distribution routes is an important issue for the distribution center, because it directly affects logistics costs and market competitiveness. In this paper, after analyzing the current distribution situation of chain supermarkets both at home and abroad and studying the quantum-inspired evolutionary algorithm (QEA), we set up a mathematical model of chain supermarkets’ distribution routes and solve for the optimized distribution route through QEA. Finally, we take Hongqi Chain Supermarket in Chengdu as an example to perform the experiment and compare QEA with the genetic algorithm (GA) in terms of convergence, the optimal solution, search ability, and so on. The experimental results show that the distribution route optimized by QEA is better than that obtained by GA, and that QEA has stronger global search ability for both small-scale and large-scale chain supermarkets. Moreover, the success rate of QEA in searching routes is higher than that of GA.
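Quantum-inspired evolutionary algorithms represent each gene as a Q-bit whose amplitudes encode the probability of observing 0 or 1. A minimal sketch of the observation and rotation-update steps (the rotation schedule here is a simplified stand-in for the usual lookup table, and all names are illustrative):

```python
import math, random

def observe(qbits, rng):
    # Collapse each Q-bit (alpha, beta) to a classical bit:
    # P(bit = 1) = beta^2
    return [1 if rng.random() < b * b else 0 for _, b in qbits]

def rotate(qbits, best, observed, delta=0.05 * math.pi):
    # Rotate each Q-bit towards the corresponding bit of the best
    # solution found so far (simplified rotation-angle schedule)
    out = []
    for (a, b), bi, xi in zip(qbits, best, observed):
        theta = delta if bi != xi else 0.0
        if bi == 0:
            theta = -theta
        c, s = math.cos(theta), math.sin(theta)
        out.append((c * a - s * b, s * a + c * b))
    return out

rng = random.Random(3)
q = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * 6  # uniform superposition
bits = observe(q, rng)
```

Because each individual is a probability distribution rather than a fixed bit string, a small population can cover a large search space, which is one intuition behind QEA's stronger global search ability.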
Rocha, Frederico AE; Lourenço, Nuno CC; Horta, Nuno CG
2013-01-01
This book applies to the scientific area of electronic design automation (EDA) and addresses the automatic sizing of analog integrated circuits (ICs). In particular, this book presents an approach to enhance a state-of-the-art layout-aware circuit-level optimizer (GENOM-POF) by embedding statistical knowledge from an automatically generated gradient model into the multi-objective multi-constraint optimization kernel based on the NSGA-II algorithm. The results shown allow the designer to explore the different trade-offs of the solution space, both through the achieved device sizes, or the resp
Optimized smart grid energy procurement for LTE networks using evolutionary algorithms
Ghazzai, Hakim
2014-11-01
Energy efficiency aspects in cellular networks can contribute significantly to reducing worldwide greenhouse gas emissions. The base station (BS) sleeping strategy has become a well-known technique to achieve energy savings by switching off redundant BSs mainly for lightly loaded networks. Moreover, introducing renewable energy as an alternative power source has become a real challenge among network operators. In this paper, we formulate an optimization problem that aims to maximize the profit of Long-Term Evolution (LTE) cellular operators and to simultaneously minimize the CO2 emissions in green wireless cellular networks without affecting the desired quality of service (QoS). The BS sleeping strategy lends itself to an interesting implementation using several heuristic approaches, such as the genetic (GA) and particle swarm optimization (PSO) algorithms. In this paper, we propose GA-based and PSO-based methods that reduce the energy consumption of BSs by not only shutting down underutilized BSs but by optimizing the amounts of energy procured from different retailers (renewable energy and electricity retailers), as well. A comparison with another previously proposed algorithm is also carried out to evaluate the performance and the computational complexity of the employed methods.
Optimal Management of Renewable-Based MGs: An Intelligent Approach through the Evolutionary Algorithm
Directory of Open Access Journals (Sweden)
Mehdi Nafar
2015-08-01
Full Text Available Abstract- This article proposes a probabilistic framework built on scenario generation to account for the uncertainties in the optimal operation management of Micro Grids (MGs). The MG contains different renewable energy resources such as Wind Turbine (WT), Micro Turbine (MT), Photovoltaic (PV), Fuel Cell (FC), and one battery as the storage device. The proposed framework is based on scenario generation and the Roulette wheel mechanism to produce different scenarios for handling the uncertainties of the relevant factors. It uses the normal distribution as the probability distribution function of the random variables. The uncertainties considered in this paper are grid bid variations, load demand forecasting errors, and PV and WT output power productions. It is worth noting that solving the MG problem for 24 hours of a day while considering diverse uncertainties and different constraints requires a powerful optimization method that converges fast and does not fall into local optima. Accordingly, the Group Search Optimization (GSO) algorithm is presented to search the entire solution space globally. The GSO algorithm is inspired by the group behavior of animals. In addition, one modification to the GSO procedure is also proposed. The proposed framework and method are applied to one test grid-connected MG as a typical grid.
The Evolutionary Algorithm to Find Robust Pareto-Optimal Solutions over Time
Directory of Open Access Journals (Sweden)
Meirong Chen
2015-01-01
Full Text Available In dynamic multiobjective optimization problems, the environmental parameters change over time, which shifts the true Pareto fronts. So far, most research on dynamic multiobjective optimization methods has concentrated on detecting the changed environment and triggering population-based optimization methods so as to track the moving Pareto fronts over time. Yet, in many real-world applications, it is not necessary to find the optimal nondominated solutions in each dynamic environment. To address this weakness, a novel method called robust Pareto-optimal solution over time is proposed. It in effect replaces the optimal Pareto front at each time-varying moment with a series of robust Pareto-optimal solutions, meaning that each robust solution can fit more than one time-varying moment. Two metrics, the average survival time and the average robust generational distance, are presented to measure the robustness of the robust Pareto solution set. Another contribution is the construction of an algorithm framework that searches for robust Pareto-optimal solutions over time based on the survival time. Experimental results indicate that this definition is a more practical and time-saving way of addressing dynamic multiobjective optimization problems that change over time.
An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu
2016-07-01
This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.
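The Bézier parametrization keeps the design-variable count small: a handful of control points define a smooth blade curve. A sketch of curve evaluation via de Casteljau's algorithm (illustrative only, not the authors' parametrization code; the control points are made-up values):

```python
def de_casteljau(ctrl, t):
    # Evaluate a Bezier curve at parameter t by repeated linear
    # interpolation between consecutive control points
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A quadratic curve through three hypothetical control points
curve = [de_casteljau([(0, 0), (1, 2), (2, 0)], t / 10) for t in range(11)]
```

In an optimization loop, the genetic algorithm would mutate only the control-point coordinates, and the full curve geometry follows from this evaluation.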
AN EVOLUTIONARY ALGORITHM FOR CHANNEL ASSIGNMENT PROBLEM IN WIRELESS MOBILE NETWORKS
Directory of Open Access Journals (Sweden)
Yee Shin Chia
2012-12-01
Full Text Available The channel assignment problem in a wireless mobile network is the assignment of an appropriate frequency spectrum to incoming calls while maintaining a satisfactory level of electromagnetic compatibility (EMC) constraints. An effective channel assignment strategy is important due to the limited capacity of the frequency spectrum in wireless mobile networks. Most existing channel assignment strategies are based on deterministic methods. In this paper, an adaptive genetic algorithm (GA) based channel assignment strategy is introduced for resource management and to reduce the effect of EMC interference. The most significant advantage of the proposed optimization method is its capability to handle both the reassignment of channels for existing calls and the allocation of a channel to a new incoming call in an adaptive process that maximizes the utility of the limited resources. It is capable of adapting the population size to the number of eligible channels for a particular cell upon new call arrivals to achieve a reasonable convergence speed. MATLAB simulations on a 49-cell network model for both uniform and nonuniform call traffic distributions showed that the proposed channel optimization method always achieves a lower average blocking probability for new incoming calls than the deterministic channel assignment strategy.
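A GA fitness for channel assignment typically counts violations of the EMC separation constraints. A minimal sketch under assumed data structures (a dict of channel lists per cell and pairwise minimum separations; all names are hypothetical, not from the paper):

```python
from itertools import combinations

def emc_violations(assignment, min_sep):
    # assignment: {cell: [channels]}; min_sep[(i, j)]: required channel
    # separation between cells i <= j ((i, i) is the co-site constraint)
    v = 0
    cells = sorted(assignment)
    for i in cells:  # co-site checks within each cell
        s = min_sep.get((i, i), 0)
        v += sum(1 for a, b in combinations(assignment[i], 2)
                 if abs(a - b) < s)
    for i, j in combinations(cells, 2):  # cross-cell checks
        s = min_sep.get((i, j), 0)
        v += sum(1 for a in assignment[i] for b in assignment[j]
                 if abs(a - b) < s)
    return v
```

A GA would minimize this count (or use it as a penalty term), so a zero-violation assignment is an interference-free channel plan.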
Santos, José; Monteagudo, Ángel
2017-03-27
The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it has not been clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds with the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
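Fitness sharing penalizes crowded regions of the search space so the population spreads across multiple optima instead of collapsing into one. A minimal one-dimensional sketch of the standard sharing function (illustrative; the paper applies sharing to code landscapes, not scalar positions):

```python
def shared_fitness(raw, positions, sigma, alpha=1.0):
    # Divide each raw fitness by its niche count so crowded optima
    # stop dominating the selection step
    out = []
    for f, xi in zip(raw, positions):
        niche = sum(1 - (abs(xi - xj) / sigma) ** alpha
                    for xj in positions if abs(xi - xj) < sigma)
        out.append(f / niche)
    return out
```

Two equally fit individuals sitting on the same peak each receive half the fitness of an equally fit individual on its own peak, which is what lets the algorithm map several landscape minima at once.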
Directory of Open Access Journals (Sweden)
M. L. Seto
2012-01-01
Full Text Available The objective is to show that on-board mission replanning for an AUV sensor coverage mission, based on available energy, enhances mission success. Autonomous underwater vehicles (AUVs) are tasked with increasingly long deployments; consequently, energy management issues are timely and relevant. Energy shortages can occur if the AUV unexpectedly travels against stronger currents, is not trimmed for the local water salinity, has to get back on course, and so forth. An on-board knowledge-based agent, based on a genetic algorithm, was designed and validated to replan a near-optimal AUV survey mission. It considers the measured AUV energy consumption, attitudes, speed over ground, and known response to proposed missions through on-line dynamics and control predictions. For the case studied, the replanned mission improves the survey area coverage by a factor of 2 for an energy budget that is a factor of 2 less than planned. The contribution is a novel on-board cognitive capability in the form of an agent that monitors the energy and intelligently replans missions based on energy considerations with evolutionary methods.
Energy Technology Data Exchange (ETDEWEB)
Gharari, Rahman [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi [Nuclear Engineering Dept, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)
2016-10-15
In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (Keff) for gaining possible longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare obtained results. In general, results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.
Directory of Open Access Journals (Sweden)
Abdarrazak OUALI
2011-12-01
Full Text Available Because of their capability to change network parameters with a rapid response and enhanced flexibility, flexible AC transmission system (FACTS) devices have attracted more attention in power system operations, such as improving the voltage profile and minimizing system losses. Accordingly, this paper presents a multi-objective evolutionary algorithm (MOEA) to solve the optimal reactive power dispatch (ORPD) problem with FACTS devices. This nonlinear multi-objective problem (MOP) consists of simultaneously minimizing the real power loss in transmission lines and the voltage deviation at load buses by tuning the parameters and searching for the locations of FACTS devices. The constraints of this MOP are divided into equality constraints, represented by the load flow equations, and inequality constraints such as generation reactive power sources and security limits at load buses. Two types of FACTS devices, the static synchronous series compensator (SSSC) and the unified power flow controller (UPFC), are considered. A comparative study regarding the effects of an SSSC and a UPFC on voltage deviation and total transmission real losses is carried out. The design problem is tested on a 6-bus system.
Directory of Open Access Journals (Sweden)
Lucas Cuadra
2017-07-01
Full Text Available In this work, we describe an approach that allows for optimizing the structure of a smart grid (SG) with renewable energy (RE) generation against abnormal conditions (imbalances between generation and consumption, overloads or failures arising from the inherent SG complexity) by combining the complex network (CN) and evolutionary algorithm (EA) concepts. We propose a novel objective function (to be minimized) that combines cost elements, related to the number of electric cables, and several metrics that quantify properties that are beneficial for SGs (energy exchange at the local scale and high robustness and resilience). The optimized SG structure is obtained by applying an EA in which the chromosome that encodes each potential network (or individual) is the upper triangular matrix of its adjacency matrix. This allows for fully tailoring the crossover and mutation operators. We also propose a domain-specific initial population that includes both small-world and random networks, helping the EA converge quickly. The experimental work points out that the proposed method works well and generates the optimum, synthetic, small-world structure that leads to beneficial properties such as improving both the local energy exchange and the robustness. The optimum structure fulfills a balance between moderate cost and robustness against abnormal conditions. Our approach should be considered as an analysis, planning and decision-making tool to gain insight into smart grid structures so that the low level detailed design is carried out by using electrical engineering techniques.
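Encoding each candidate network as the upper triangle of its adjacency matrix, as described above, can be sketched as follows (a plain bit-list chromosome for an undirected graph; the function names are illustrative, not from the paper):

```python
from itertools import combinations

def encode(adj):
    # Chromosome = upper triangle of the adjacency matrix, row by row
    return [adj[i][j] for i, j in combinations(range(len(adj)), 2)]

def decode(chrom, n):
    # Rebuild a symmetric adjacency matrix from the chromosome
    adj = [[0] * n for _ in range(n)]
    for bit, (i, j) in zip(chrom, combinations(range(n), 2)):
        adj[i][j] = adj[j][i] = bit
    return adj
```

Because the chromosome is a flat bit list of length n(n-1)/2, ordinary bit-flip mutation and one-point crossover always yield valid undirected graphs, which is the tailoring advantage the authors mention.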
Directory of Open Access Journals (Sweden)
Rahman Gharari
2016-10-01
Full Text Available In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of a Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (Keff) for gaining possible longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare obtained results. In general, results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.
Kontoleontos, E.; Weissenberger, S.
2016-11-01
In order to be able to predict the maximum Annual Energy Production (AEP) for tidal power plants, an advanced AEP optimization procedure is required for solving the optimization problem, which consists of a high number of design variables and constraints. This efficient AEP optimization procedure requires an advanced optimization tool (the EASY software) and an AEP calculation tool that can simulate all the different operating modes of the units (bidirectional turbine, pump and sluicing mode). The EASY optimization software is a metamodel-assisted Evolutionary Algorithm (MAEA) that can be used in both single- and multi-objective optimization problems. The AEP calculation tool, developed by ANDRITZ HYDRO, in combination with EASY is used to maximize the tidal annual energy produced by optimizing the plant operation throughout the year. For the Swansea Bay Tidal Power Plant project, the AEP optimization along with the hydraulic design optimization and the model testing was used to evaluate all the different hydraulic and operating concepts and define the optimal concept that led to a significant increase of the AEP value. This new concept of a triple regulated “bi-directional bulb pump turbine” for the Swansea Bay Tidal Power Plant (16 units, nominal power above 320 MW) along with its AEP optimization scheme will be presented in detail in the paper. Furthermore, the use of an online AEP optimization during operation of the power plant, which will provide the optimal operating points to the control system, will also be presented.
Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib
2017-10-01
Algal model based multi-objective optimization using elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK-fertilizer. Optimization problems involving two- and three-objective functions were solved simultaneously. The objective functions are: maximization of algae-biomass and lipid productivity with minimization of cultivation time and cost. Time variant light intensity and temperature including NPK-fertilizer, NaCl and NaHCO3 loadings are the important decision variables. Algal model involving Monod/Andrews adsorption kinetics and Droop model with internal nutrient cell quota was used for optimization studies. Sets of non-dominated (equally good) Pareto optimal solutions were obtained for the problems studied. It was observed that time variant optimal light intensity and temperature trajectories, including optimum NPK fertilizer, NaCl and NaHCO3 concentration has significant influence to improve biomass and lipid productivity under minimum cultivation time and cost. Proposed optimization studies may be helpful to implement the control strategy in scale-up operation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kotegawa, Tatsuya
Complexity in the Air Transportation System (ATS) arises from the intermingling of many independent physical resources, operational paradigms, and stakeholder interests, as well as the dynamic variation of these interactions over time. Currently, trade-offs and cost benefit analyses of new ATS concepts are carried out on system-wide evaluation simulations driven by air traffic forecasts that assume fixed airline routes. However, this does not well reflect reality, as airlines regularly add and remove routes. An airline service route network evolution model that projects route addition and removal was created and combined with state-of-the-art air traffic forecast methods to better reflect the dynamic properties of the ATS in system-wide simulations. Guided by a system-of-systems framework, network theory metrics and machine learning algorithms were applied to develop the route network evolution models based on patterns extracted from historical data. Constructing the route addition section of the model posed the greatest challenge due to the large pool of new link candidates compared to the actual number of routes historically added to the network. Of the models explored, algorithms based on logistic regression, random forests, and support vector machines showed the best route addition and removal forecast accuracies at approximately 20% and 40%, respectively, when validated with historical data. The combination of network evolution models and a system-wide evaluation tool quantified the impact of airline route network evolution on air traffic delay. The expected delay minutes when considering network evolution increased approximately 5% for a forecasted schedule on 3/19/2020. Performance trade-off studies between several airline route network topologies from the perspectives of passenger travel efficiency, fuel burn, and robustness were also conducted to provide bounds that could serve as targets for ATS transformation efforts. The series of analyses revealed that high
Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F
2016-10-07
Range verification and dose monitoring in proton therapy is considered highly desirable. Different methods have been developed worldwide, like particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
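The forward step, turning a depth dose profile into a prompt γ-ray profile, is a convolution with a filter kernel. As a simplified illustration of that filtering (a plain Gaussian kernel rather than the paper's Gaussian-powerlaw kernel; all names and parameter values are assumptions):

```python
import math

def gaussian_kernel(sigma, radius):
    # Discrete Gaussian filter kernel, normalized to unit sum
    raw = [math.exp(-i * i / (2 * sigma * sigma))
           for i in range(-radius, radius + 1)]
    s = sum(raw)
    return [v / s for v in raw]

def convolve(profile, kernel):
    # Same-length convolution with zero padding at the edges
    r = len(kernel) // 2
    return [sum(w * profile[i + j - r]
                for j, w in enumerate(kernel)
                if 0 <= i + j - r < len(profile))
            for i in range(len(profile))]
```

The evolutionary algorithm then works in the opposite direction: it searches for a depth dose profile whose filtered version best matches the measured γ-ray depth profile.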
Indian Academy of Sciences (India)
positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may, however, be pointed out that several non-trivial algorithms, such as synthetic (polynomial) division, have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used.
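Euclid's algorithm for the greatest common divisor of two positive numbers is short enough to state directly; a standard sketch:

```python
def euclid_gcd(a, b):
    # Repeatedly replace (a, b) by (b, a mod b) until b is zero
    while b:
        a, b = b, a % b
    return a
```

For example, `euclid_gcd(252, 105)` reduces 252 and 105 through the remainders 42 and 21 before terminating.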
Directory of Open Access Journals (Sweden)
Rudy Clausen
2015-09-01
Full Text Available An important goal in molecular biology is to understand functional changes upon single-point mutations in proteins. Doing so through a detailed characterization of structure spaces and underlying energy landscapes is desirable but continues to challenge methods based on Molecular Dynamics. In this paper we propose a novel algorithm, SIfTER, which is based instead on stochastic optimization to circumvent the computational challenge of exploring the breadth of a protein's structure space. SIfTER is a data-driven evolutionary algorithm, leveraging experimentally available structures of wildtype and variant sequences of a protein to define a reduced search space from which to efficiently draw samples corresponding to novel structures not directly observed in the wet laboratory. The main advantage of SIfTER is its ability to rapidly generate conformational ensembles, thus allowing mapping and juxtaposing landscapes of variant sequences and relating observed differences to functional changes. We apply SIfTER to variant sequences of the H-Ras catalytic domain, due to the prominent role of the Ras protein in signaling pathways that control cell proliferation, its well-studied conformational switching, and the abundance of documented mutations in several human tumors. Many Ras mutations are oncogenic, but detailed energy landscapes have not been reported until now. Analysis of SIfTER-computed energy landscapes for the wildtype and two oncogenic variants, G12V and Q61L, suggests that these mutations cause constitutive activation through two different mechanisms. G12V directly affects binding specificity while leaving the energy landscape largely unchanged, whereas Q61L has starker, more pronounced effects on the landscape. An implementation of SIfTER is made available at http://www.cs.gmu.edu/~ashehu/?q=OurTools. We believe SIfTER is useful to the community to answer the question of how sequence mutations affect the function of a protein, when there is an
Kirchner-Bossi, Nicolas; Porté-Agel, Fernando
2017-04-01
Wind turbine wakes can significantly disrupt the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the whole performance of the project. An accurate definition of the wake interactions, combined with a computationally tractable layout optimization strategy, can be an efficient resource when addressing the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian-profile velocity deficit [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
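The Gaussian-profile velocity-deficit model used inside the cost function can be sketched as follows. The rotor diameter, thrust coefficient, and wake-growth rate below are assumed values for illustration, not those of the Horns Rev I study:

```python
import math

def gaussian_wake_deficit(x, r, d0=80.0, ct=0.8, k_star=0.03):
    """Normalized velocity deficit Delta-U/U_inf at downstream distance x and
    radial offset r, in the Gaussian-profile form of Bastankhah and
    Porte-Agel (2014). d0 (rotor diameter), ct (thrust coefficient), and
    k_star (wake growth rate) are assumed values."""
    beta = 0.5 * (1.0 + math.sqrt(1.0 - ct)) / math.sqrt(1.0 - ct)
    eps = 0.2 * math.sqrt(beta)                  # initial wake width
    sigma = (k_star * x / d0 + eps) * d0         # wake width grows linearly with x
    term = 1.0 - ct / (8.0 * (sigma / d0) ** 2)
    if term <= 0.0:                              # too close to the rotor: model invalid
        return 1.0
    return (1.0 - math.sqrt(term)) * math.exp(-r ** 2 / (2.0 * sigma ** 2))

# Centerline deficit decays with downstream distance (5 vs 15 rotor diameters).
near = gaussian_wake_deficit(x=400.0, r=0.0)
far = gaussian_wake_deficit(x=1200.0, r=0.0)
```

In a layout optimizer, each candidate layout's power output is evaluated by superposing such deficits over all turbine pairs, which is what makes an analytical (rather than CFD) wake model attractive inside an evolutionary loop.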
Evolutionary Information Theory
Directory of Open Access Journals (Sweden)
Mark Burgin
2013-04-01
Full Text Available Evolutionary information theory is a constructive approach that studies information in the context of evolutionary processes, which are ubiquitous in nature and society. In this paper, we develop the foundations of evolutionary information theory, building several measures of evolutionary information and obtaining their properties. These measures are based on mathematical models of evolutionary computations, machines and automata. To measure evolutionary information in an invariant form, we construct and study universal evolutionary machines and automata, which form the base for evolutionary information theory. The first class of measures introduced and studied in this paper is the evolutionary information size of symbolic objects relative to classes of automata or machines. In particular, it is proved that there is an invariant and optimal evolutionary information size relative to different classes of evolutionary machines. As a rule, different classes of algorithms or automata determine different information sizes for the same object. More powerful classes of algorithms or automata decrease the information size of an object in comparison with the information size of the same object relative to weaker classes of algorithms or machines. The second class of measures for evolutionary information in symbolic objects is studied by introducing the quantity of evolutionary information about symbolic objects relative to a class of automata or machines. To give an example of applications, we briefly describe a possibility of modeling physical evolution with evolutionary machines to demonstrate the applicability of evolutionary information theory to all material processes. At the end of the paper, directions for future research are suggested.
Indian Academy of Sciences (India)
• In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...
Energy Technology Data Exchange (ETDEWEB)
Samei, Ehsan, E-mail: samei@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Clinical Imaging Physics Group, Departments of Radiology, Physics, Biomedical Engineering, and Electrical and Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina 27710 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University, Durham, North Carolina 27710 (United States)
2015-01-15
Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) relative to that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
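Combining the task-based MTF, the NPS, and a task function into d′ can be sketched in a one-dimensional, non-prewhitening-observer form. The sampled curves below are toy data, not the ACR phantom measurements, and the 1-D sum stands in for the usual 2-D frequency integral:

```python
import math

def detectability_index(mtf, nps, task, df=0.1):
    """Non-prewhitening-observer d' from sampled MTF, NPS, and task function
    over spatial frequency: d' = [sum (MTF*W)^2] / sqrt(sum (MTF*W)^2 * NPS)."""
    signal = sum((m * w) ** 2 for m, w in zip(mtf, task)) * df
    noise = sum((m * w) ** 2 * n for m, w, n in zip(mtf, task, nps)) * df
    return signal / math.sqrt(noise)

freqs = [0.1 * i for i in range(20)]                       # toy frequency axis
mtf = [math.exp(-f) for f in freqs]                        # toy resolution falloff
task = [math.exp(-((f - 0.3) ** 2) / 0.1) for f in freqs]  # toy detection task
nps_low = [0.01] * 20                                      # quieter protocol (higher dose)
nps_high = [0.04] * 20                                     # noisier protocol (lower dose)

d_low = detectability_index(mtf, nps_low, task)
d_high = detectability_index(mtf, nps_high, task)          # 4x the noise halves d'
```

With a frequency-flat NPS, quadrupling the noise magnitude halves d′, which is the kind of dose-to-detectability relationship the study exploits to compare reconstruction techniques.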
National Aeronautics and Space Administration — This article presented a discussion on uncertainty representation and management for model-based prognostics methodologies based on the Bayesian tracking framework...
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs, and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that IEA-PTS outperforms the other EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
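The first step of the unified approach, mapping a problem onto the maximum clique-finding problem, can be illustrated for the maximum independent set: an independent set of G is exactly a clique of the complement of G. The sketch below uses a simple greedy heuristic as a stand-in for the IEA-PTS search:

```python
from itertools import combinations

def complement(vertices, edges):
    """Complement graph: its edges are exactly the non-edges of G."""
    eset = {frozenset(e) for e in edges}
    return {frozenset(p) for p in combinations(sorted(vertices), 2)} - eset

def greedy_clique(vertices, edges):
    """Greedy clique heuristic (an illustrative stand-in for IEA-PTS)."""
    eset = {frozenset(e) for e in edges}
    clique = []
    for v in sorted(vertices):
        if all(frozenset((v, u)) in eset for u in clique):
            clique.append(v)
    return clique

# An independent set of G is a clique of its complement.
V = {1, 2, 3, 4, 5}
E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]   # G is a 5-cycle
indep = greedy_clique(V, complement(V, E))     # no two of these are adjacent in G
```

The same complement trick covers minimum vertex cover (V minus a maximum independent set), which is why a single well-tuned clique solver can serve all the listed problems.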
Part E: Evolutionary Computation
DEFF Research Database (Denmark)
2015-01-01
evolutionary algorithms, such as memetic algorithms, which have emerged as a very promising tool for solving many real-world problems in a multitude of areas of science and technology. Moreover, parallel evolutionary combinatorial optimization has been presented. Search operators, which are crucial in all...
Indian Academy of Sciences (India)
, i is referred to as the loop-index, and 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers shown in Figure 1.
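The while-loop schema translates directly into a modern language. Below is a selection sort on a list of 4 numbers written with the same loop structure; this is an illustrative reconstruction, since Table 1 and Figure 1 themselves are not reproduced here:

```python
def sort_numbers(a):
    """Selection sort written with the while-loop schema from the text:
    'while i <= N do stat-body; i := i + 1; endwhile' (0-based here, so
    the guard becomes i < N)."""
    i = 0
    N = len(a)
    while i < N:
        # stat-body: move the smallest remaining element into position i
        m = i
        j = i + 1
        while j < N:
            if a[j] < a[m]:
                m = j
            j += 1
        a[i], a[m] = a[m], a[i]
        i += 1
    return a

result = sort_numbers([7, 3, 9, 1])   # a list of 4 numbers
```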
Ma, Yunzhi; Vijande, Javier; Ballester, Facundo; Tedgren, Åsa Carlsson; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Zourari, Kyveli; Papagiannis, Panagiotis; Rivard, Mark J; Siebert, Frank André; Sloboda, Ron S; Smith, Ryan; Chamberland, Marc J P; Thomson, Rowan M; Verhaegen, Frank; Beaulieu, Luc
2017-11-01
A joint working group was created by the American Association of Physicists in Medicine (AAPM), the European Society for Radiotherapy and Oncology (ESTRO), and the Australasian Brachytherapy Group (ABG) with the charge, among others, to develop a set of well-defined test case plans and perform calculations and comparisons with model-based dose calculation algorithms (MBDCAs). Its main goal is to facilitate a smooth transition from the AAPM Task Group No. 43 (TG-43) dose calculation formalism, widely used in clinical practice for brachytherapy, to the one proposed by Task Group No. 186 (TG-186) for MBDCAs. To do so, in this work a hypothetical, generic high-dose rate (HDR) 192Ir shielded applicator has been designed and benchmarked. A generic HDR 192Ir shielded applicator was designed based on three commercially available gynecological applicators as well as a virtual cubic water phantom that can be imported into any DICOM-RT compatible treatment planning system (TPS). The absorbed dose distribution around the applicator with the TG-186 192Ir source located at one dwell position at its center was computed using two commercial TPSs incorporating MBDCAs (Oncentra® Brachy with Advanced Collapsed-cone Engine, ACE™, and BrachyVision ACUROS™) and state-of-the-art Monte Carlo (MC) codes, including ALGEBRA, BrachyDose, egs_brachy, Geant4, MCNP6, and Penelope2008. TPS-based volumetric dose distributions for the previously reported "source centered in water" and "source displaced" test cases, and the new "source centered in applicator" test case, were analyzed here using the MCNP6 dose distribution as a reference. Volumetric dose comparisons of TPS results against results for the other MC codes were also performed. Distributions of local and global dose difference ratios are reported. The local dose differences among MC codes are comparable to the statistical uncertainties of the reference datasets for the "source centered in water" and "source displaced" test
Energy Technology Data Exchange (ETDEWEB)
Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve
2007-06-15
This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue
Suchá, Dominika; Willemink, Martin J.; de Jong, Pim A.; Schilham, Arnold M. R.; Leiner, Tim; Symersky, Petr; Budde, Ricardo P. J.
2014-01-01
To assess the impact of hybrid iterative reconstruction (IR) and novel model-based iterative reconstruction (IMR) and dose reduction on prosthetic heart valve (PHV) related artifacts and objective image quality. One transcatheter and two mechanical PHVs were embedded in diluted contrast-gel,
Cao, Buwen; Luo, Jiawei; Liang, Cheng; Wang, Shulin; Song, Dan
2015-10-01
The identification of protein complexes in protein-protein interaction (PPI) networks has greatly advanced our understanding of biological organisms. Existing computational methods to detect protein complexes are usually based on specific network topological properties of PPI networks. However, due to the inherent complexity of the network structures, the identification of protein complexes may not be fully addressed by using a single network topological property. In this study, we propose a novel MultiObjective Evolutionary Programming Genetic Algorithm (MOEPGA) which integrates multiple network topological features to detect biologically meaningful protein complexes. Our approach first systematically analyzes the multiobjective problem in terms of identifying protein complexes from PPI networks, and then constructs the objective function of the iterative algorithm based on three common topological properties of protein complexes from the benchmark dataset; finally, we describe our algorithm, which mainly consists of three steps: population initialization, subgraph mutation and subgraph selection operation. To show the utility of our method, we compared MOEPGA with several state-of-the-art algorithms on two yeast PPI datasets. The experimental results demonstrate that the proposed method can not only find more protein complexes but also achieve higher accuracy in terms of F-score. Moreover, our approach can cover a certain number of proteins in the input PPI network in terms of the normalized clustering score. Taken together, our method can serve as a powerful framework to detect protein complexes in yeast PPI networks, thereby facilitating the identification of the underlying biological functions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Šime Ukić
2013-01-01
Full Text Available Gradient ion chromatography was used for the separation of eight sugars: arabitol, cellobiose, fructose, fucose, lactulose, melibiose, N-acetyl-D-glucosamine, and raffinose. The separation method was optimized using a combination of the simplex or genetic algorithm with isocratic-to-gradient retention modeling. Both the simplex and genetic algorithms provided well-separated chromatograms in a similar analysis time. However, the simplex methodology showed severe drawbacks when dealing with local minima. Thus the genetic algorithm methodology proved to be the method of choice for gradient optimization in this case. All the calculated/predicted chromatograms were compared with the real sample data, showing more than satisfactory agreement.
Battiti, Roberto; Passerini, Andrea
2009-01-01
The centrality of the decision maker (DM) is widely recognized in the Multiple Criteria Decision Making community. This translates into an emphasis on seamless human-computer interaction, and adaptation of the solution technique to the knowledge which is progressively acquired from the DM. This paper adopts the methodology of Reactive Optimization (RO) for evolutionary interactive multi-objective optimization. RO follows the paradigm of "learning while optimizing", through the use of online ma...
Directory of Open Access Journals (Sweden)
WoonSeong Jeong
2016-01-01
Full Text Available This paper presents an algorithm to translate building topology in an object-oriented architectural building model (Building Information Modeling, BIM) into an object-oriented physical-based energy performance simulation by using an object-oriented programming approach. Our algorithm demonstrates efficient mapping of building components in a BIM model into space boundary conditions in an object-oriented physical modeling (OOPM)-based building energy model, and the translation of building topology into space boundary conditions to create an OOPM model. The implemented command, TranslatingBuildingTopology, using an object-oriented programming approach, enables graphical representation of the building topology of BIM models and the automatic generation of space boundary information for OOPM models. The algorithm and its implementation allow coherent object-mapping from BIM to OOPM and facilitate the definition of space boundary information during model translation for building thermal simulation. In order to demonstrate our algorithm and its implementation, we conducted experiments with three test cases using the BESTEST 600 model. Our experiments show that our algorithm and its implementation enable building topology information to be automatically translated into space boundary information, and facilitate the reuse of BIM data in building thermal simulations without additional export or import processes.
Indian Academy of Sciences (India)
Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.
Indian Academy of Sciences (India)
number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then multiplication of A by B, denoted A × B, is defined by a matrix C of dimension m × p where.
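The definition translates into the familiar triple loop over two-dimensional arrays:

```python
def mat_mul(A, B):
    """C = A x B for an m x n matrix A and an n x p matrix B; C is m x p,
    with C[i][k] = sum over j of A[i][j] * B[j][k]."""
    m, n, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for k in range(p):
            for j in range(n):
                C[i][k] += A[i][j] * B[j][k]
    return C

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
C = mat_mul(A, B)        # 2 x 2: [[58, 64], [139, 154]]
```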
Directory of Open Access Journals (Sweden)
Ping Jiang
2017-07-01
Full Text Available Wind speed forecasting plays an indispensable role in the high-efficiency operation of wind farms, and is significant in wind-related engineering studies. Back-propagation (BP) algorithms have been comprehensively employed to forecast time series that are nonlinear, irregular, and unstable. However, a single model usually overlooks the importance of data pre-processing and parameter optimization, which results in weak forecasting performance. In this paper, a more precise and robust model that combines data pre-processing, a BP neural network, and a modified artificial intelligence optimization algorithm is proposed, which avoids the limitations of the individual algorithms. The novel model not only improves the forecasting accuracy but also retains the advantages of the firefly algorithm (FA) and overcomes its disadvantage in the later stages of optimization. To verify the forecasting performance of the presented hybrid model, 10-min wind speed data from Penglai city, Shandong province, China, were analyzed in this study. The simulations revealed that the proposed hybrid model significantly outperforms other single metaheuristics.
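The firefly algorithm that the hybrid model builds on can be sketched in its standard form. This is the basic FA on a one-dimensional toy cost function, not the paper's modified variant, and all parameter values are illustrative:

```python
import math
import random

def firefly_minimize(f, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=7):
    """Minimal 1-D firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones, with a shrinking random step (standard FA sketch)."""
    rng = random.Random(seed)
    xs = [rng.uniform(-5.0, 5.0) for _ in range(n)]
    for t in range(iters):
        step = alpha * (1.0 - t / iters)                 # shrink the random walk over time
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):                  # firefly j is brighter than i
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    xs[i] += beta * (xs[j] - xs[i]) + step * (rng.random() - 0.5)
    return min(xs, key=f)

# Minimize a toy cost function with optimum at x = 2.
best = firefly_minimize(lambda x: (x - 2.0) ** 2)
```

In the hybrid model, the cost function would instead be the forecasting error of the BP network as a function of its parameters; the late-stage weakness mentioned in the abstract is the slow fine-tuning once the swarm has clustered.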
Celaya, Jose R.; Saxen, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Energy Technology Data Exchange (ETDEWEB)
Guerra, J.G., E-mail: jglezg2002@gmail.es [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Rubiano, J.G. [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Instituto Universitario de Estudios Ambientales y Recursos Naturales, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Winter, G. [Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en la Ingeniería, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Martel, P. [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Instituto Universitario de Estudios Ambientales y Recursos Naturales, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Bolivar, J.P. [Departamento de Física Aplicada, Universidad de Huelva, 21071 Huelva (Spain)
2017-06-21
In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used for searching the geometrical parameters of a model of the detector by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries: a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, such as Marinellis, Petris, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a model of the detector which has been successfully verified for different source-detector geometries and materials, and experimentally validated using CRMs. - Highlights: • A computational method for characterizing HPGe detectors has been generalized. • The new version is usable for a wider range of sample geometries. • It starts from reference FEPEs obtained through a standard calibration procedure. • A model of an HPGe XtRa detector has been
Malusek, Alexandr; Magnusson, Maria; Sandborg, Michael; Alm Carlsson, Gudrun
2017-06-01
To develop and evaluate, in a proof-of-concept configuration, a novel iterative reconstruction algorithm (DIRA) for quantitative determination of the elemental composition of patient tissues for application to brachytherapy with low energy (tissue decomposition. The evaluation was done for a phantom derived from the voxelized ICRP 110 male phantom. Soft tissues were decomposed to the lipid, protein and water triplet; bones were decomposed to the compact bone and bone marrow doublet. Projections were derived using the Drasim simulation code for an axial scanning configuration resembling a typical DECT (dual-energy CT) scanner with 80 kV and Sn140 kV x-ray spectra. The iterative loop produced mono-energetic images at 50 and 88 keV without beam-hardening artifacts. Different noise levels were considered: no noise, a typical noise level in diagnostic imaging, and a reduced noise level corresponding to tenfold higher doses. An uncertainty analysis of the results was performed using type A and type B evaluations, and the two approaches were compared. Linear attenuation coefficients averaged over a region were obtained with relative errors less than 0.5% for all evaluated regions. Errors in average mass fractions of the three-material decomposition were less than 0.04 for the no-noise and reduced-noise levels and less than 0.11 for the typical noise level. Mass fractions of individual pixels were strongly affected by noise, which slightly increased after the first iteration but subsequently stabilized. Estimates of uncertainties in mass fractions provided by the type B evaluation differed from the type A estimates by less than 1.5% for most cases. The algorithm was fast; the results converged after 5 iterations. The algorithmic complexity of the forward polyenergetic projection calculation was much reduced by using material doublets and triplets. The simulations indicated that DIRA is capable of determining the elemental composition of tissues, which is needed in brachytherapy with low energy (< 50
Akkoç, Betül; Arslan, Ahmet; Kök, Hatice
2017-05-01
One of the first stages in the identification of an individual is gender determination. Through gender determination, the search spectrum can be reduced. In disasters such as accidents or fires, which can render identification somewhat difficult, durable teeth are an important source for identification. This study proposes a smart system that can automatically determine gender using 3D digital maxillary tooth plaster models. The study group was composed of 40 Turkish individuals (20 female, 20 male) between the ages of 21 and 24. Using the iterative closest point (ICP) algorithm, tooth models were aligned, and after the segmentation process, models were transformed into depth images. The local discrete cosine transform (DCT) was used in the process of feature extraction, and the random forest (RF) algorithm was used for the process of classification. Classification was performed using 30 different seeds for random generator values and 10-fold cross-validation. A value of 85.166% was obtained for average classification accuracy (CA) and a value of 91.75% for the area under the ROC curve (AUC). A multi-disciplinary study is performed here that includes computer sciences, medicine and dentistry. A smart system is proposed for the determination of gender from 3D digital models of maxillary tooth plaster models. This study has the capacity to extend the field of gender determination from teeth. Copyright © 2017 Elsevier B.V. All rights reserved.
Gu, Tingwei; Kong, Deren; Jiang, Jian; Shang, Fei; Chen, Jing
2016-12-01
This paper applies a back propagation neural network (BPNN) optimized by a genetic algorithm (GA) to the prediction of the pressure generated by a drop-weight device and the quasi-static calibration of piezoelectric high-pressure sensors for the measurement of propellant powder gas pressure. The method can effectively overcome the slow convergence and local minimum problems of BPNN. Based on test data from the quasi-static comparison calibration method, a mathematical model between each parameter of the drop-weight device and the peak pressure and pulse width was established, through which practical quasi-static calibration without continuously using expensive reference sensors could be realized. Compared with the multiple linear regression method, the GA-BPNN model has higher prediction accuracy and stability. The percentages of prediction error of peak pressure and pulse width are less than 0.7% and 0.3%, respectively.
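The GA side of such a hybrid, evolving model parameters by selection, crossover, and mutation rather than by gradient descent alone, can be sketched on a toy fitting problem. This is a stand-in for GA-optimized BPNN weights; the model, data, and GA settings are all illustrative:

```python
import random

def ga_fit(xs, ys, pop_size=30, gens=80, seed=3):
    """Minimal genetic algorithm fitting y = w*x + b by minimizing squared
    error; an illustrative stand-in for GA-optimized network weights."""
    rng = random.Random(seed)
    def err(ind):
        w, b = ind
        return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=err)
        parents = pop[:pop_size // 2]                # selection: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            (w1, b1), (w2, b2) = rng.sample(parents, 2)
            w, b = (w1 + w2) / 2, (b1 + b2) / 2       # crossover: blend two parents
            w += rng.gauss(0, 0.1)                    # mutation: small Gaussian nudge
            b += rng.gauss(0, 0.1)
            children.append((w, b))
        pop = parents + children
    return min(pop, key=err)

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # generated by y = 2x + 1
w, b = ga_fit(xs, ys)
```

Because the population explores many parameter sets at once and mutation can jump out of basins, this kind of search is less prone to the local minima that plague plain back-propagation, which is the motivation for the GA-BPNN combination.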
Radhakrishnan, Mohanasundar
2012-05-01
Concerns have been raised regarding disinfection by-products (DBPs) formed as a result of the reaction of halogen-based disinfectants with DBP precursors. In order to appreciate the chemical and biological tradeoffs, it is imperative to understand the formation trends of DBPs and their spread in the distribution network. However, the water at a point in a complex distribution system is a mixture from various sources, whose proportions are complex to estimate and require advanced hydraulic analysis. To understand the risks of DBPs and to develop mitigation strategies, it is important to understand the distribution of DBPs in a water network, which requires modelling. The goal of this research was to integrate a steady-state water network model with a particle backtracking algorithm and chlorination as well as DBP models in order to assess the tradeoffs between biological and chemical risks in the distribution network. A multi-objective optimisation algorithm was used to identify the optimal proportion of water from various sources, dosages of alum, and dosages of chlorine in the treatment plant and in booster locations to control the formation of chlorination DBPs and to achieve a balance between microbial and chemical risks. © IWA Publishing 2012.
Directory of Open Access Journals (Sweden)
Ping Jiang
2015-01-01
Full Text Available With the increasing depletion of fossil fuels and serious destruction of the environment, wind power, as a kind of clean and renewable resource, is more and more connected to the power system and plays a crucial role in the power dispatch of hybrid systems. Thus, it is necessary to forecast wind speed accurately for the operation of wind farms in hybrid systems. In this paper, we propose a hybrid model called EEMD-GA-FAC/SAC to forecast wind speed. First, ensemble empirical mode decomposition (EEMD) is applied to eliminate the noise of the original data. After data preprocessing, the first-order adaptive coefficient forecasting method (FAC) or the second-order adaptive coefficient forecasting method (SAC) is employed to do the forecasting. It is significant to select optimal parameters for an effective model; thus, a genetic algorithm (GA) is used to determine the parameters of the hybrid model. In order to verify the validity of the proposed model, ten-minute wind speed data from three observation sites in the Shandong Peninsula of China were collected and several error evaluation criteria were applied. Through comparison with the traditional BP, ARIMA, FAC, and SAC models, the experimental results show that the proposed hybrid model EEMD-GA-FAC/SAC has the best forecasting performance.
Directory of Open Access Journals (Sweden)
Wendong Yang
2017-01-01
Full Text Available Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time-series forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting, but an individual SVR pays no attention to the significance of data selection, signal processing, and optimization, and so cannot always satisfy the requirements of time-series forecasting. By preprocessing and analyzing the original time series, this paper develops a hybrid SVR model that considers periodicity, trend, and randomness, combined with data selection, signal processing, and an optimization algorithm for short-term load forecasting. Case studies of electric power data from New South Wales and Singapore are used to evaluate the performance of the developed model. The experimental results demonstrate that the proposed hybrid method is not only robust but also achieves significant improvement over traditional single models, and can be an effective and efficient tool for power load forecasting.
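As a rough illustration of the regression step only, the sketch below substitutes ridge-regularized linear regression on lagged loads for the SVR, without the signal-processing and optimization stages. The load series, lag count, and regularization strength are invented for this sketch.

```python
# Stand-in for the forecasting core: regularized regression on lagged loads.

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def fit_ridge(X, y, lam=1.0):
    """Solve (X'X + lam*I) w = X'y with an intercept column prepended."""
    Xb = [[1.0] + row for row in X]
    n = len(Xb[0])
    A = [[sum(r[i] * r[j] for r in Xb) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(r[i] * yi for r, yi in zip(Xb, y)) for i in range(n)]
    return solve(A, rhs)

load = [100, 103, 101, 107, 110, 108, 113, 116]    # toy load series, MW
X = [load[i:i + 2] for i in range(len(load) - 2)]  # two lagged values as features
y = load[2:]
w = fit_ridge(X, y)
forecast = w[0] + w[1] * 113 + w[2] * 116          # predict the next load
print(round(forecast, 1))
```

The hybrid method's point is precisely that this bare regression is not enough: data selection chooses which lags enter X, signal processing cleans the series, and an optimizer tunes the model's hyperparameters.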
Funama, Yoshinori; Utsunomiya, Daisuke; Hirata, Kenichiro; Taguchi, Katsuyuki; Nakaura, Takeshi; Oda, Seitaro; Kidoh, Masafumi; Yuki, Hideaki; Yamashita, Yasuyuki
2017-09-01
To investigate the stabilities of plaque attenuation and coronary lumen for different plaque types, stenotic degrees, lumen densities, and reconstruction methods using coronary vessel phantoms and the visualization of coronary plaques in clinical patients through coronary computed tomography (CT) angiography. We performed 320-detector volume scanning of vessel tubes with stenosis and a tube without stenosis using three types of plaque CT numbers. The stenotic degrees were 50% and 75%. Images were reconstructed with filtered back projection (FBP) and two types of iterative reconstructions (AIDR3D and FIRST [forward-projected model-based iterative reconstruction solution]), with stenotic CT number of approximately 40, 80, and 150 HU (Hounsfield unit), respectively. In each case, the tubing of the coronary vessel was filled with diluted contrast material and distilled water to reach the target lumen CT numbers of approximately 350 HU and 450 HU, and 0 HU, respectively. Peak lumen and plaque CT numbers were measured to calculate the lumen-plaque contrast. In addition, we retrospectively evaluated the image quality with regard to coronary arterial lumen and the plaque in 10 clinical patients on a 4-point scale. At 50% stenosis, the plaque CT number with contrast enhancement increased for FBP and AIDR3D, and the difference in the plaque CT number with and without contrast enhancement was 15-44 HU for FBP and 10-31 HU for AIDR3D. However, the plaque CT number for FIRST had a smaller variation and the difference with and without contrast enhancement was -12 to 8 HU. The visual evaluation score for the vessel lumen was 2.8 ± 0.6, 3.5 ± 0.5, and 3.7 ± 0.5 for FBP, AIDR3D, and FIRST, respectively. The FIRST method controls the increase in plaque density and the lumen-plaque contrast. Consequently, it improves the visualization of coronary plaques in coronary CT angiography. Copyright © 2017 The Association of University Radiologists. Published by
Paduszyński, Kamil
2016-08-22
The aim of this paper is to address the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs), a property relevant to many applications of ILs, particularly in separations. Three new models are proposed, each based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following a previously published paper [J. Chem. Inf. Model. 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data, so the models can be used to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation of benzene from n-hexane and of thiophene from n-heptane. LSSVM is shown to be the method with the lowest training and generalization errors. It is finally demonstrated that the established models exhibit improved accuracy compared to the state-of-the-art model, namely the temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem
Rizzo, D. M.; Hanley, J.; Monroy, C.; Rodas, A.; Stevens, L.; Dorn, P.
2016-12-01
Chagas disease is a deadly, neglected tropical disease that is endemic to every country in Central and South America. The principal insect vector of Chagas disease in Central America is Triatoma dimidiata. EcoHealth interventions are an environmentally friendly alternative that use local materials to lower household infestation, reduce the risk of infestation, and improve quality of life. Our collaborators from La Universidad de San Carlos de Guatemala, along with Ministry of Health officials, reach out to communities with high infestation and teach them EcoHealth interventions. Identifying which interventions have the potential to be most effective, as well as the houses most at risk, is both expensive and time consuming. To better identify the risk factors associated with household infestation by T. dimidiata, a number of studies have conducted socioeconomic and entomologic surveys containing numerous potential risk factors comprising both nominal and ordinal data. Univariate logistic regression is one of the more popular methods for determining which risk factors are most closely associated with infestation. However, this tool has limitations, especially given the large amount and variety of data associated with our study sites (e.g., five villages with socioeconomic, demographic, and entomologic data). Infestation of a household with T. dimidiata is a complex problem that is most likely not univariate in nature and is likely to involve higher-order epistatic relationships that cannot be discovered using univariate logistic regression; added to this are the well-known problems of relying on p-values in traditional statistics. Moreover, our T. dimidiata infestation dataset is too large to search exhaustively. Therefore, we use a novel evolutionary algorithm to efficiently search for higher-order interactions in surveys associated with households infested with T. dimidiata. In this study, we use our novel evolutionary
Directory of Open Access Journals (Sweden)
Leonardo Bottolo
Full Text Available Genome-wide association studies (GWAS) have yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS, where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex linkage disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and to study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of the GHS (n = 3,175) compared with the largest published meta-GWAS (n > 100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with the TG-APOB phenotypic group and of LIPC with the TG-HDL phenotypic group, overlooked in the larger meta-GWAS and not revealed by competing approaches; we replicated these associations in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify
Directory of Open Access Journals (Sweden)
Sid-Ahmed Selouani
2003-07-01
Full Text Available Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate for highly interfering noise environments. The proposed hybrid technique, when included in the front end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) varying from 16 dB to −4 dB. We also show the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.
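The KLT (PCA) half of this method can be sketched as follows: estimate the principal axis of clean-speech feature vectors by power iteration over the sample covariance, then project noisy vectors onto it. The genetic optimization of the axes described in the abstract is omitted, and the "mel-domain" features below are random toy vectors, not speech.

```python
# Principal-axis estimation by power iteration, plus projection of a
# noisy vector onto the resulting one-dimensional KLT subspace.
import random

def principal_axis(vectors, iters=100):
    d = len(vectors[0])
    mean = [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]
    centered = [[v[i] - mean[i] for i in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        # multiply w by the sample covariance: C w = sum_x x (x . w)
        nw = [0.0] * d
        for x in centered:
            s = sum(xi * wi for xi, wi in zip(x, w))
            for i in range(d):
                nw[i] += s * x[i]
        norm = sum(c * c for c in nw) ** 0.5
        w = [c / norm for c in nw]
    return mean, w

rng = random.Random(0)
# toy "mel-domain" features: strong variation along the direction (1, 2)
clean = [[t + rng.gauss(0, 0.1), 2 * t + rng.gauss(0, 0.1)]
         for t in [rng.uniform(-1, 1) for _ in range(200)]]
mean, axis = principal_axis(clean)

noisy = [0.5, 1.3]
coeff = sum((n - m) * a for n, m, a in zip(noisy, mean, axis))
denoised = [m + coeff * a for m, a in zip(mean, axis)]
print([round(x, 2) for x in denoised])
```

Discarding the minor axes suppresses components of the noisy vector that clean speech rarely exhibits, which is the enhancement intuition; the GA in the paper additionally tunes the projection basis against recognition performance.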
Ayala, Helon Vicente Hultmann; Coelho, Leandro dos Santos
2016-02-01
The present work introduces a procedure for input selection and parameter estimation for system identification based on Radial Basis Function Neural Network (RBFNN) models, with an improved objective function based on the residuals and their correlation-function coefficients. We show the results when the proposed methodology is applied to model a magnetorheological damper, with real acquired data, and two other well-known benchmarks. The canonical genetic and differential evolution algorithms are used in cascade to decompose the problem of defining the lags taken as the inputs of the model and its related parameters, based on the simultaneous minimization of the residuals and of higher-order correlation functions. The inner layer of the cascaded approach is composed of a population that represents the lags on the inputs and outputs of the system, while an outer layer represents the corresponding parameters of the RBFNN. The approach is thus able to define both the inputs of the model and its parameters. This is valuable because it frees the designer from the manual procedures usually employed to define the model inputs, which are time consuming and prone to error. We compare the proposed methodology with other works in the literature, showing overall better results for the cascaded approach.
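A toy version of the cascaded idea: an outer evolutionary search chooses which lags feed the model, while a deliberately simple inner predictor (the mean of the selected lagged values) stands in for the RBFNN and its own parameter search. The series and GA settings are invented for the sketch.

```python
# Genetic search over lag subsets (encoded as bitmasks) with a naive
# mean-of-lags inner predictor scoring each subset by residual error.
import random

def fitness(lag_mask, series, max_lag=3):
    lags = [i + 1 for i in range(max_lag) if lag_mask >> i & 1]
    if not lags:
        return float("inf")
    err2 = 0.0
    for t in range(max_lag, len(series)):
        pred = sum(series[t - l] for l in lags) / len(lags)
        err2 += (series[t] - pred) ** 2
    return err2

def ga_lags(series, max_lag=3, pop_size=12, gens=25, seed=3):
    rng = random.Random(seed)
    pop = [rng.randrange(1, 1 << max_lag) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, series, max_lag))
        keep = pop[:pop_size // 2]                           # elitist selection
        pop = keep + [rng.choice(keep) ^ (1 << rng.randrange(max_lag))
                      for _ in range(pop_size - len(keep))]  # bit-flip mutation
    best = min(pop, key=lambda m: fitness(m, series, max_lag))
    return [i + 1 for i in range(max_lag) if best >> i & 1]

series = [1.0, 5.0] * 10   # period-2 series: lag 2 alone predicts it exactly
print(ga_lags(series))
```

In the paper's cascaded scheme the inner layer would itself be an evolutionary search over RBFNN parameters for each candidate lag set, rather than a fixed predictor.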
Mao, Song Shou; Li, Dong; Vembar, Mani; Gao, Yanlin; Luo, Yanting; Lam, Franklin; Syed, Younus Saleem; Liu, Christine; Woo, Kelly; Flores, Fred; Budoff, Matthew J
2014-05-01
The cardiac chamber volumes and functions can be assessed manually and automatically using current computed tomography (CT) workstation systems. We aimed to evaluate the accuracy and precision of, and to establish reference values for, both segmentation methods using cardiac CT angiography (CTA). A total of 134 subjects (mean age 55.3 years, 72 women) without heart disease were enrolled in the study. The cardiac four-chamber volumes, left ventricular (LV) mass, and biventricular functions were measured with manual, semiautomatic, and model-based fully automatic approaches. The accuracies of the semiautomated and fully automated approaches were validated by comparing them with manual segmentation as a reference. The precision error was determined and compared for both manual and automatic measurements. No significant difference was found between the manual and semiautomatic assessments for all functional parameters (P > .05). Using the manual method as a reference, the automatic approach provided similar values for LV ejection fraction and left atrial volumes in both genders and for right ventricular (RV) stroke volume in women (P > .05), with some underestimation of RV volume (P < .05). The model-based fully automatic segmentation algorithm can help with the assessment of the cardiac four-chamber volume and function. This may help in establishing reference values of functional parameters in patients who undergo cardiac CTA. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Herman, Matthew R; Nejadhashemi, A Pouyan; Daneshvar, Fariborz; Abouali, Mohammad; Ross, Dennis M; Woznicki, Sean A; Zhang, Zhen
2016-10-01
The emission of greenhouse gases continues to amplify the impacts of global climate change. This has led to an increased focus on renewable energy sources, such as biofuels, due to their lower environmental impact. However, the production of biofuels can still have negative impacts on water resources. This study introduces a new strategy to optimize bioenergy landscapes while improving stream health for the region. To accomplish this, several hydrological models, including the Soil and Water Assessment Tool, Hydrologic Integrity Tool, and Adaptive Neuro-Fuzzy Inference System, were linked to develop stream health predictor models. These models are capable of estimating stream health scores based on the Index of Biological Integrity. The coupled models were then used to guide a genetic algorithm in designing watershed-scale bioenergy landscapes. Thirteen bioenergy managements were considered, based on their high probability of adoption by farmers in the study area. Results from two thousand runs identified an optimum bioenergy crop placement that maximized stream health for the Flint River Watershed in Michigan. The final overall stream health score was 50.93, improved from the current stream health score of 48.19; this improvement is significant at the 1% significance level. For this final bioenergy landscape, the most frequently used management was miscanthus (27.07%), followed by corn-soybean-rye (19.00%), corn stover-soybean (18.09%), and corn-soybean (16.43%). The technique introduced in this study can be modified for use in different regions and can be used by stakeholders and decision makers to develop bioenergy landscapes that maximize stream health in the area of interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris
2017-09-01
Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_rr^2, is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr^2; (II) an optimization of the smoothing parameters; and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.
Zhao, Y.; Su, X. H.; Wang, M. H.; Li, Z. Y.; Li, E. K.; Xu, X.
2017-08-01
Water resources vulnerability control and management are essential because they bear on the sound evolution of the socio-economic, environmental, and water resources systems, and research on water resources system vulnerability supports the sustainable utilization of water resources. In this study, the DPSIR (driving forces-pressure-state-impact-response) framework was adopted to construct the evaluation index system for water resources system vulnerability. A co-evolutionary genetic algorithm and projection pursuit were then used to establish the vulnerability evaluation model. Tengzhou City in Shandong Province was selected as the study area. System vulnerability was analyzed in terms of driving forces, pressure, state, impact, and response on the basis of the projection values calculated by the model. The results show that all five components belong to vulnerability Grade II; the vulnerability of impact and state was higher than that of the other components, owing to the severe supply-demand imbalance and the unsatisfactory condition of water resources utilization. This indicates that rapid socio-economic development and the overuse of pesticides have already disturbed the sound development of the water environment to some extent, while the response indexes showed lower vulnerability than the other components. The results of the evaluation model agree with the status of the water resources system in the study area, indicating that the model is feasible and effective.
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector, depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology that includes rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to recalculate product direct cost as the quality sigma level of the process changes; a complete cost estimation therefore does not need to be performed every time the process yield changes. This cost estimation model is then used in the QC strategy optimization. To propose a methodology that provides an optimal QC strategy, the possible factors affecting QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors, identified three significant ones, and showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal solutions. The GA
Model-Based Testing of Probabilistic Systems
Gerhold, Marcus; Stoelinga, Mariëlle Ida Antoinette; Stevens, Perdita; Wąsowski, Andrzej
This paper presents a model-based testing framework for probabilistic systems. We provide algorithms to generate, execute and evaluate test cases from a probabilistic requirements model. In doing so, we connect ioco-theory for model-based testing and statistical hypothesis testing: our ioco-style
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
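The model-based fit at the heart of this approach can be illustrated in one dimension: a single wall at range r produces an echo at delay 2r/c, and we grid-search r to minimize the mean-square error between the predicted and observed waveforms. The geometry, pulse shape, and sampling below are invented for the sketch; the paper's layer stripping, stacking, and material-parameter estimation are omitted.

```python
# 1D model-based range estimation: minimize MSE between a predicted
# delayed echo and the observed signal over candidate wall ranges.
import math

C = 0.3        # propagation speed, m/ns (free space)

def pulse(t):  # transmitted pulse shape (Gaussian)
    return math.exp(-(t / 0.5) ** 2)

def predicted(r, times, amplitude=1.0):
    delay = 2 * r / C                       # two-way travel time
    return [amplitude * pulse(t - delay) for t in times]

times = [0.1 * i for i in range(400)]       # 0..40 ns sample grid
observed = predicted(3.0, times)            # wall truly at 3.0 m (noise-free)

def estimate_range(times, observed):
    best_r, best_mse = None, float("inf")
    for k in range(100, 500):               # candidate ranges 1.00..4.99 m
        r = k / 100
        mse = sum((p - o) ** 2 for p, o in
                  zip(predicted(r, times), observed)) / len(times)
        if mse < best_mse:
            best_r, best_mse = r, mse
    return best_r

print(estimate_range(times, observed))  # → 3.0
```

Layer stripping extends this idea recursively: once the first wall is fitted, its contribution is removed (and its transmission loss accounted for) before fitting the next interface in the time-gated signal.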
Energy Technology Data Exchange (ETDEWEB)
Van Uytven, Eric, E-mail: eric.vanuytven@cancercare.mb.ca; Van Beek, Timothy [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCowan, Peter M. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada and Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Chytyk-Praznik, Krista [Medical Physics Department, Nova Scotia Cancer Centre, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308 (Australia); Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); McCurdy, Boyd M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)
2015-12-15
Purpose: Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). Methods: EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results: Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). Conclusions: An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient
Van Uytven, Eric; Van Beek, Timothy; McCowan, Peter M; Chytyk-Praznik, Krista; Greer, Peter B; McCurdy, Boyd M C
2015-12-01
Radiation treatments are trending toward delivering higher doses per fraction under stereotactic radiosurgery and hypofractionated treatment regimens. There is a need for accurate 3D in vivo patient dose verification using electronic portal imaging device (EPID) measurements. This work presents a model-based technique to compute full three-dimensional patient dose reconstructed from on-treatment EPID portal images (i.e., transmission images). EPID dose is converted to incident fluence entering the patient using a series of steps which include converting measured EPID dose to fluence at the detector plane and then back-projecting the primary source component of the EPID fluence upstream of the patient. Incident fluence is then recombined with predicted extra-focal fluence and used to calculate 3D patient dose via a collapsed-cone convolution method. This method is implemented in an iterative manner, although in practice it provides accurate results in a single iteration. The robustness of the dose reconstruction technique is demonstrated with several simple slab phantom and nine anthropomorphic phantom cases. Prostate, head and neck, and lung treatments are all included as well as a range of delivery techniques including VMAT and dynamic intensity modulated radiation therapy (IMRT). Results indicate that the patient dose reconstruction algorithm compares well with treatment planning system computed doses for controlled test situations. For simple phantom and square field tests, agreement was excellent with a 2%/2 mm 3D chi pass rate ≥98.9%. On anthropomorphic phantoms, the 2%/2 mm 3D chi pass rates ranged from 79.9% to 99.9% in the planning target volume (PTV) region and 96.5% to 100% in the low dose region (>20% of prescription, excluding PTV and skin build-up region). An algorithm to reconstruct delivered patient 3D doses from EPID exit dosimetry measurements was presented. The method was applied to phantom and patient data sets, as well as for dynamic IMRT and
Directory of Open Access Journals (Sweden)
Juan Carlos Montoya M.
2008-06-01
Full Text Available Multicast plays an important role in supporting a new generation of applications. At present, and for different reasons both technical and non-technical, IP multicast has not yet been fully adopted on the Internet. In recent years, an active area of research has been implementing this kind of traffic at the application layer, where multicast functionality is the responsibility not of the routers but of the hosts, an approach known as a Multicast Overlay Network (MON). In this article, routing in a MON is formulated as a multiobjective optimization problem (MOP) in which two functions are optimized: (1) the total end-to-end delay of the multicast tree, and (2) the maximum link utilization. The simultaneous optimization of these two functions is an NP-complete problem, and to solve it we propose using Multiobjective Evolutionary Algorithms (MOEA), specifically NSGA-II.
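The core NSGA-II ingredient for a problem like this is non-dominated sorting of candidate solutions scored by the two objectives, here (total delay, maximum link utilization). The candidate multicast trees below are invented score pairs; crowding distance and the genetic operators are omitted.

```python
# Non-dominated sorting (the ranking step of NSGA-II) for candidate
# multicast trees scored by two objectives to be minimized.

def dominates(a, b):
    """a dominates b if it is no worse in all objectives and not equal."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def non_dominated_sort(scores):
    """Partition solution indices into Pareto fronts (front 0 is best)."""
    fronts, remaining = [], list(range(len(scores)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (total delay ms, max link utilization) for five candidate multicast trees
scores = [(40, 0.9), (55, 0.6), (70, 0.4), (60, 0.6), (80, 0.95)]
print(non_dominated_sort(scores))  # → [[0, 1, 2], [3], [4]]
```

NSGA-II then favours lower-ranked fronts during selection, so the population converges toward the delay/utilization Pareto front rather than a single compromise solution.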
Evolutionary computation for reinforcement learning
Whiteson, S.; Wiering, M.; van Otterlo, M.
2012-01-01
Algorithms for evolutionary computation, which simulate the process of natural selection to solve optimization problems, are an effective tool for discovering high-performing reinforcement-learning policies. Because they can automatically find good representations, handle continuous action spaces,
Sun, Jihang; Zhang, Qifeng; Hu, Di; Duan, Xiaomin; Peng, Yun
2015-06-01
A full model-based iterative reconstruction (MBIR) algorithm decreases image noise and improves spatial resolution significantly; combined with a low-voltage scan, it may improve image and vessel quality. The aim was to evaluate the image quality improvement of pulmonary vessels using a full MBIR in low-dose chest computed tomography (CT) for children. This study was institutional review board approved. Forty-one children (age range 28 days to 6 years, mean age 2.0 years) who underwent 80 kVp low-dose CT scans were included. An age-dependent noise index (NI) for a 5-mm slice thickness was used for the acquisition: NI = 11 for 0-12 months old, NI = 13 for 1-2 years old, and NI = 15 for 3-6 years old. Images were retrospectively reconstructed at a thin slice thickness of 0.625 mm using the MBIR and a conventional filtered back projection (FBP) algorithm. Two radiologists independently evaluated images subjectively, focusing on the ability to display small arteries and on diagnostic confidence, on a 5-point scale with 3 being clinically acceptable. CT value and image noise in the descending aorta, muscle, and fat were measured and statistically compared between the two reconstruction groups. The ability to display small vessels was significantly improved with the MBIR reconstruction. The subjective scores for displaying small vessels were 5.0 and 3.7 with MBIR and FBP, respectively, while the respective diagnostic confidence scores were 5.0 and 3.8. Quantitative image noise for the 0.625 mm slice thickness images in the descending aorta was 15.8 ± 3.8 HU in the MBIR group, 57.3% lower than the 37.0 ± 7.3 HU in the FBP group. The signal-to-noise ratio and contrast-to-noise ratio for the descending aorta were 28.3 ± 7.9 and 24.05 ± 7.5 in the MBIR group, and 12.1 ± 3.7 and 10.6 ± 3.5 in the FBP group, respectively, improvements of 133.9% and 132.1% with MBIR compared to FBP. Compared to the conventional FBP reconstruction, the image quality and
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
2017-08-12
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < …). ASIR-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < …). ASIR-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
Energy Technology Data Exchange (ETDEWEB)
Gomez Hernandez, Jose Alberto
2001-11-15
The purpose of evaluating the reliability of electric power systems is to estimate the ability of the system to carry out its function of taking energy from the generating stations to the load points. This involves the reliability of the generation and transmission sources that affect the transfer of power through the transmission system, which bears on load loss and voltage sags between the generation and consumption centers. In this thesis a hybrid methodology that optimizes reliability in generation-transmission systems using evolutionary algorithms is developed. This optimization technique determines the optimum number of components (parallel redundancy in lines) and the shunt compensation in load nodes necessary to maximize reliability, subject to cost restrictions and considering steady-state security conditions, using the smallest-singular-value technique. The objective function is defined as a stochastic function, where the measure of interest is the smallest singular value of the Jacobian matrix of the power-flow solution for the most severe event according to the reliability evaluation of the generation-transmission system. This formulation is a combination of integer and continuous nonlinear programming, where conventional mathematical programming algorithms have difficulties with robustness and global optimal search. Faults in generation units, together with the transmission system, are determined by state sampling using Monte Carlo simulation for a desired load level. For events where security violations exist (line loading, voltage violations in load nodes, and reactive power violations in generation nodes), a model of active and reactive power dispatch is used to correct these violations by means of the exact-penalty-function linear programming technique; steady-state voltage stability is then determined by means of the smallest-singular-value technique and the participation factors of nodes
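The smallest-singular-value criterion described above can be illustrated with a toy example (the Jacobian matrices below are invented for illustration, not taken from the thesis):

```python
import numpy as np

def voltage_stability_index(jacobian):
    """Smallest singular value of the power-flow Jacobian;
    values approaching zero indicate proximity to voltage collapse."""
    return np.linalg.svd(jacobian, compute_uv=False).min()

# Hypothetical 3x3 reduced Jacobian for a healthy operating point ...
J_normal = np.array([[10.0, -2.0, -1.0],
                     [-2.0,  8.0, -3.0],
                     [-1.0, -3.0,  9.0]])
# ... and for a stressed post-contingency state (rows nearly dependent)
J_stressed = np.array([[10.0, -2.0, -1.0],
                       [-2.0,  8.0, -3.0],
                       [-2.1,  8.0, -3.1]])

print(voltage_stability_index(J_normal))    # comfortably away from zero
print(voltage_stability_index(J_stressed))  # close to zero: weak voltage stability
```

The index drops sharply as the Jacobian approaches singularity, which is why it serves as the severity measure in the reliability evaluation.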
DEFF Research Database (Denmark)
Damsbo, Martin; Kinnear, Brian S; Hartings, Matthew R
2004-01-01
We present an evolutionary method for finding the low-energy conformations of polypeptides. The application, called FOLDAWAY, is based on a generic framework and uses several evolutionary operators as well as local optimization to navigate the complex energy landscape of polypeptides. It maintains...... mobility measurements. It has a flat energy landscape where helical and globular conformations have similar energies. FOLDAWAY locates several large groups of structures not found in previous molecular dynamics simulations for this peptide, including compact globular conformations, which are probably......
Evolutionary Statistical Procedures
Baragona, Roberto; Poli, Irene
2011-01-01
This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a
Chevalier, Robert L
2017-05-01
Progressive kidney disease follows nephron loss, hyperfiltration, and incomplete repair, a process described as "maladaptive." In the past 20 years, a new discipline has emerged that expands research horizons: evolutionary medicine. In contrast to physiologic (homeostatic) adaptation, evolutionary adaptation is the result of reproductive success that reflects natural selection. Evolutionary explanations for physiologically maladaptive responses can emerge from mismatch of the phenotype with environment or evolutionary tradeoffs. Evolutionary adaptation to a terrestrial environment resulted in a vulnerable energy-consuming renal tubule and a hypoxic, hyperosmolar microenvironment. Natural selection favors successful energy investment strategy: energy is allocated to maintenance of nephron integrity through reproductive years, but this declines with increasing senescence after ~40 years of age. Risk factors for chronic kidney disease include restricted fetal growth or preterm birth (life history tradeoff resulting in fewer nephrons), evolutionary selection for APOL1 mutations (that provide resistance to trypanosome infection, a tradeoff), and modern life experience (Western diet mismatch leading to diabetes and hypertension). Current advances in genomics, epigenetics, and developmental biology have revealed proximate causes of kidney disease, but attempts to slow kidney disease remain elusive. Evolutionary medicine provides a complementary approach by addressing ultimate causes of kidney disease. Marked variation in nephron number at birth, nephron heterogeneity, and changing susceptibility to kidney injury throughout life history are the result of evolutionary processes. Combined application of molecular genetics, evolutionary developmental biology (evo-devo), developmental programming and life history theory may yield new strategies for prevention and treatment of chronic kidney disease.
Gabora, Liane; Kauffman, Stuart
2016-04-01
Dietrich and Haider (Psychonomic Bulletin & Review, 21 (5), 897-915, 2014) justify their integrative framework for creativity founded on evolutionary theory and prediction research on the grounds that "theories and approaches guiding empirical research on creativity have not been supported by the neuroimaging evidence." Although this justification is controversial, the general direction holds promise. This commentary clarifies points of disagreement and unresolved issues, and addresses mis-applications of evolutionary theory that lead the authors to adopt a Darwinian (versus Lamarckian) approach. To say that creativity is Darwinian is not to say that it consists of variation plus selection - in the everyday sense of the term - as the authors imply; it is to say that evolution is occurring because selection is affecting the distribution of randomly generated heritable variation across generations. In creative thought the distribution of variants is not key, i.e., one is not inclined toward idea A because 60 % of one's candidate ideas are variants of A while only 40 % are variants of B; one is inclined toward whichever seems best. The authors concede that creative variation is partly directed; however, the greater the extent to which variants are generated non-randomly, the greater the extent to which the distribution of variants can reflect not selection but the initial generation bias. Since each thought in a creative process can alter the selective criteria against which the next is evaluated, there is no demarcation into generations as assumed in a Darwinian model. We address the authors' claim that reduced variability and individuality are more characteristic of Lamarckism than Darwinian evolution, and note that a Lamarckian approach to creativity has addressed the challenge of modeling the emergent features associated with insight.
Graph Model Based Indoor Tracking
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin
2009-01-01
The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...
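The approach can be sketched roughly as follows (the floor-plan graph and readings are invented; the paper's model is considerably richer, with multi-level mappings and an RFID reader deployment graph):

```python
# Base graph: symbolic indoor locations and direct connectivity (invented floor plan)
base_graph = {
    "lobby":    {"corridor"},
    "corridor": {"lobby", "office", "lab"},
    "office":   {"corridor"},
    "lab":      {"corridor"},
}

def refine_trajectory(raw_readings):
    """Refine a raw symbolic positioning stream: drop duplicate consecutive
    readings and readings implying a jump between non-adjacent locations."""
    trajectory = []
    for loc in raw_readings:
        if trajectory and loc == trajectory[-1]:
            continue                      # same reader fired twice
        if trajectory and loc not in base_graph[trajectory[-1]]:
            continue                      # not reachable in one step: treat as noise
        trajectory.append(loc)
    return trajectory

# A raw RFID stream with a duplicate and one spurious reading
raw = ["lobby", "lobby", "corridor", "lab", "office", "corridor", "office"]
print(refine_trajectory(raw))  # ['lobby', 'corridor', 'lab', 'corridor', 'office']
```

The graph constrains which reader sequences are physically plausible, which is the core idea behind constructing trajectories from raw readings.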
Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Hua, Jian
2017-07-01
An ONU schedule algorithm and an ONU transfer mechanism for the management of multi-subsystem-based VPONs are proposed in this paper. To avoid frequent wavelength switching and achieve high system stability, the ONU schedule algorithm performs wavelength allocation by introducing a box-filling model. At the same time, a judgement mechanism is designed to filter wavelength-increase requests caused by slight bandwidth fluctuations of a VPON. To share the remaining bandwidth among VPONs, the ONU transfer mechanism is put forward based on flexible wavelength routing. To manage the wavelength resources of the entire network and the wavelength requirements of the VPONs, an information-management matrix model is constructed. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
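The box-filling model can be read as a bin-packing of ONU bandwidth demands into wavelength "boxes"; a minimal first-fit sketch (capacities and demands are invented, and the paper's judgement and transfer mechanisms are omitted):

```python
def assign_onus(demands, capacity):
    """First-fit box-filling: each wavelength is a box of fixed capacity;
    an ONU goes into the first wavelength with enough remaining bandwidth.
    Returns one list of (onu, demand) pairs per wavelength in use."""
    wavelengths = []   # each entry: [remaining_capacity, [(onu, demand), ...]]
    for onu, demand in demands:
        for box in wavelengths:
            if box[0] >= demand:
                box[0] -= demand
                box[1].append((onu, demand))
                break
        else:  # no existing wavelength fits: switch on a new one
            wavelengths.append([capacity - demand, [(onu, demand)]])
    return [box[1] for box in wavelengths]

# Four ONUs with bandwidth demands (Gb/s) packed into 2.5 Gb/s wavelengths
plan = assign_onus([("onu1", 1.0), ("onu2", 1.0), ("onu3", 2.0), ("onu4", 0.5)], 2.5)
print(len(plan))  # 2 wavelengths suffice
```

Packing ONUs densely onto few wavelengths is what lets the scheduler avoid frequent wavelength switches.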
Model Based Reconstruction of UT Array Data
Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.
2008-02-01
Beyond the detection of defects, their characterization (identification, positioning, sizing) is a goal of great importance often assigned to the analysis of NDT data. In the case of ultrasonic testing, the first step of such analysis amounts to imaging the detected echoes within the part. This operation is in general achieved by considering times of flight and applying simplified algorithms which are often valid only in canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST that are based on the exploitation of direct models, which make it possible to address complex configurations and are available in the CIVA software platform. We discuss in particular ray-model-based algorithms, algorithms derived from classical synthetic focusing, and processing of the full inter-element matrix (the MUSIC algorithm).
Directory of Open Access Journals (Sweden)
Gregory Gorelik
2014-10-01
Full Text Available In this article, we advance the concept of “evolutionary awareness,” a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities—which we refer to as “intergenerational extended phenotypes”—by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.
Directory of Open Access Journals (Sweden)
José Alexandre F. Diniz-Filho
2013-10-01
Full Text Available Macroecology focuses on ecological questions at broad spatial and temporal scales, providing a statistical description of patterns in species abundance, distribution and diversity. More recently, historical components of these patterns have begun to be investigated more deeply. We tentatively refer to the practice of explicitly taking species history into account, both analytically and conceptually, as ‘evolutionary macroecology’. We discuss how the evolutionary dimension can be incorporated into macroecology through two orthogonal and complementary data types: fossils and phylogenies. Research traditions dealing with these data have developed more‐or‐less independently over the last 20–30 years, but merging them will help elucidate the historical components of diversity gradients and the evolutionary dynamics of species’ traits. Here we highlight conceptual and methodological advances in merging these two research traditions and review the viewpoints and toolboxes that can, in combination, help address patterns and unveil processes at temporal and spatial macro‐scales.
DEFF Research Database (Denmark)
Nash, Ulrik William
2014-01-01
The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover......, they are correlated among people who share environments because these individuals satisfice within their cognitive bounds by using cues in order of validity, as opposed to using cues arbitrarily. Any difference in expectations thereby arises from differences in cognitive ability, because two individuals with identical...... cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained. The concept of evolutionary......
Wjst, M
2013-12-01
Evolutionary medicine allows new insights into long-standing medical problems. Are we really "stone-agers in the fast lane"? This insight might have enormous consequences and will allow new answers that could never be provided by traditional anthropology. Only now is this made possible using data from molecular medicine and systems biology. Thereby evolutionary medicine takes a leap from a merely theoretical discipline to practical fields - reproductive, nutritional and preventive medicine, as well as microbiology, immunology and psychiatry. Evolutionary medicine is not another "just so story" but a serious candidate for the medical curriculum, providing a universal understanding of health and disease based on our biological origin. © Georg Thieme Verlag KG Stuttgart · New York.
Evolutionary constrained optimization
Deb, Kalyanmoy
2015-01-01
This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single- and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...
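As a minimal illustration of the penalty-function methodology the book covers (the toy problem, penalty weight, and search loop are invented for illustration):

```python
import random

def penalized_fitness(x, r=100.0):
    """Minimize f(x) = (x - 3)^2 subject to g(x) = x - 2 <= 0,
    handled with a quadratic exterior penalty of weight r."""
    f = (x - 3.0) ** 2
    violation = max(0.0, x - 2.0)
    return f + r * violation ** 2

# A bare-bones (1+19) evolutionary loop using the penalized objective
random.seed(0)
population = [random.uniform(-5, 5) for _ in range(20)]
for _ in range(200):
    parent = min(population, key=penalized_fitness)   # elitist selection
    offspring = [parent + random.gauss(0, 0.1) for _ in range(19)]
    population = [parent] + offspring

best = min(population, key=penalized_fitness)
print(round(best, 2))  # near the constrained optimum x = 2
```

With an exterior penalty the unconstrained optimum x = 3 is pushed back toward the feasible boundary; a larger r tightens the result toward exactly x = 2.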
Maleki, Afshin; Daraei, Hiua; Alaei, Loghman; Faraji, Aram
2014-01-01
Four stepwise multiple linear regression (SMLR) models and a genetic algorithm (GA)-based multiple linear regression (MLR) model, together with artificial neural network (ANN) models, were applied for quantitative structure-activity relationship (QSAR) modeling of the dissociation constants (Kd) of 62 arylsulfonamide (ArSA) derivatives as human carbonic anhydrase II (HCA II) inhibitors. The best subsets of molecular descriptors were selected by the SMLR and GA-MLR methods. These selected variables were used to generate MLR and ANN models. The predictive power of the models was examined by an external test set and cross-validation. In addition, some tests were done to examine other aspects of the models. The results show that for certain purposes GA-MLR is better than SMLR, and for others, ANN overcomes the MLR models.
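The GA-MLR idea can be sketched on synthetic data (this is not the authors' descriptor set; the parsimony penalty and GA settings are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "descriptor" matrix: only columns 0 and 2 are truly informative
X = rng.normal(size=(60, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=60)

def fitness(mask):
    """Penalized residual sum of squares of an MLR fit on the selected
    columns (lower is better); the 0.1*k term discourages extra descriptors."""
    if not mask.any():
        return np.inf
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    rss = float(((y - X[:, mask] @ coef) ** 2).sum())
    return rss + 0.1 * mask.sum()

# Tiny GA over descriptor-subset bitmasks: truncation selection + bit-flip mutation
pop = rng.integers(0, 2, size=(20, 6)).astype(bool)
for _ in range(40):
    pop = pop[np.argsort([fitness(m) for m in pop])][:10]   # select the fitter half
    children = pop.copy()
    children ^= rng.random(children.shape) < 0.1            # mutate
    pop = np.vstack([pop, children])

best = min(pop, key=fitness)
print(np.flatnonzero(best))  # should recover the informative descriptors
```

Chromosomes are bitmasks over descriptors and the least-squares fit supplies the fitness, which is the essential structure of GA-based variable selection for MLR.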
Evolutionary institutionalism.
Fürstenberg, Dr Kai
Institutions are hard to define and hard to study. Long prominent in political science have been two theories: Rational Choice Institutionalism (RCI) and Historical Institutionalism (HI). Arising from the life sciences is now a third: Evolutionary Institutionalism (EI). Comparative strengths and weaknesses of these three theories warrant review, and the value-to-be-added by expanding the third beyond Darwinian evolutionary theory deserves consideration. Should evolutionary institutionalism expand to accommodate new understanding in ecology, such as might apply to the emergence of stability, and in genetics, such as might apply to political behavior? Core arguments are reviewed for each theory with more detailed exposition of the third, EI. Particular attention is paid to EI's gene-institution analogy; to variation, selection, and retention of institutional traits; to endogeneity and exogeneity; to agency and structure; and to ecosystem effects, institutional stability, and empirical limitations in behavioral genetics. RCI, HI, and EI are distinct but complementary. Institutional change, while amenable to rational-choice analysis and, retrospectively, to critical-juncture and path-dependency analysis, is also, and importantly, ecological. Stability, like change, is an emergent property of institutions, which tend to stabilize after change in a manner analogous to allopatric speciation. EI is more than metaphorically biological in that institutional behaviors are driven by human behaviors whose evolution long preceded the appearance of institutions themselves.
Rowe, Sidney E.
2010-01-01
In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.
Evolutionary Computation and Its Applications in Neural and Fuzzy Systems
Directory of Open Access Journals (Sweden)
Biaobiao Zhang
2011-01-01
Full Text Available Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem that is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may be easily trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which is a combination of the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms (EAs) are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolution strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, the immune algorithm, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are also discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptation of fuzzy systems is also detailed.
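A minimal sketch of one of the surveyed families, a (1+1) evolution strategy with 1/5th-success-rule step-size adaptation (the objective function and constants are invented):

```python
import random

def sphere(x):
    """Sphere objective: minimum 0 at the origin."""
    return sum(v * v for v in x)

random.seed(1)
parent = [random.uniform(-5, 5) for _ in range(5)]
sigma, successes = 1.0, 0

for gen in range(1, 1001):
    # One mutated child per generation; keep it only if it improves the parent
    child = [v + random.gauss(0, sigma) for v in parent]
    if sphere(child) < sphere(parent):
        parent, successes = child, successes + 1
    if gen % 20 == 0:                  # 1/5th success rule, applied every 20 gens
        sigma *= 1.5 if successes > 4 else 1 / 1.5
        successes = 0

print(round(sphere(parent), 6))  # close to 0
```

The self-adaptation of sigma is what distinguishes evolution strategies from the fixed-rate mutation of a plain genetic algorithm.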
Directory of Open Access Journals (Sweden)
Jiangqing Liao
2016-11-01
Full Text Available Ultrasonic-assisted extraction (UAE) of quercetin and rutin from the stalks of Euonymus alatus (Thunb.) Sieb. was investigated in this work, with the aim of evaluating and optimizing the process parameters. Process parameters such as ethanol solution concentration, solvent volume/sample ratio, ultrasound power, extraction time, ultrasound frequency and extraction temperature were first evaluated for their influence on the extraction of quercetin and rutin. The optimum process parameters obtained were: ethanol solution 60%, extraction time 30 min, solvent volume/sample ratio 40 mL/g, ultrasound power 200 W, extraction temperature 30 °C and ultrasound frequency 80 kHz. Furthermore, a hybrid predictive model, based on a least squares support vector machine (LS-SVM) in combination with an improved fruit fly optimization algorithm (IFOA), was first used to predict the UAE process. The established IFOA-LS-SVM model, in which the six process parameters and the extraction yields of quercetin and rutin were used as input and output variables, respectively, successfully predicted the extraction yields of quercetin and rutin with a low error. Moreover, by comparison with SVM, LS-SVM and multiple regression models, the IFOA-LS-SVM model has higher accuracy and faster convergence. The results proved that the proposed model is capable of predicting the extraction yields of quercetin and rutin in the UAE process.
Evolutionary Dynamics of Biological Games
Nowak, Martin A.; Sigmund, Karl
2004-02-01
Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.
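Replicator dynamics, mentioned above, can be sketched for the classic Hawk-Dove game (the payoff values V = 2 and C = 3 are illustrative):

```python
# Hawk-Dove payoff matrix with resource V = 2 and fight cost C = 3
V, C = 2.0, 3.0
payoff = [[(V - C) / 2, V],       # Hawk vs (Hawk, Dove)
          [0.0,         V / 2]]   # Dove vs (Hawk, Dove)

def replicator_step(x, dt=0.01):
    """One Euler step of dx/dt = x (f_hawk - f_bar) for hawk frequency x."""
    f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
    f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
    f_bar = x * f_hawk + (1 - x) * f_dove
    return x + dt * x * (f_hawk - f_bar)

x = 0.1                            # initial hawk frequency
for _ in range(20000):
    x = replicator_step(x)

print(round(x, 3))  # 0.667: the mixed equilibrium V/C = 2/3
```

The population settles at the frequency-dependent equilibrium rather than at a fitness-maximizing corner, illustrating why game-theoretic arguments replace optimization here.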
Directory of Open Access Journals (Sweden)
Taimoor Zahid
2016-09-01
Full Text Available Battery energy storage management for electric vehicles (EVs) and hybrid EVs has been the most critical and enabling technology since the dawn of electric vehicle commercialization. A battery system is a complex electrochemical phenomenon whose performance degrades with age and varying material design. Moreover, it is very tedious and computationally complex to monitor and control the internal state of a battery's electrochemical system. For the Thevenin battery model we established a state-space model, which had the advantage of simplicity and could be easily implemented, and then applied the least squares method to identify the battery model parameters. However, accurate state of charge (SoC) estimation of a battery, which depends not only on the battery model but also on highly accurate and efficient algorithms, is considered one of the most vital and critical issues for the energy management and power distribution control of EVs. In this paper three different estimation methods, i.e., the extended Kalman filter (EKF), particle filter (PF) and unscented Kalman filter (UKF), are presented to estimate the SoC of LiFePO4 batteries for an electric vehicle. The battery's experimental data, current and voltage, are analyzed to identify the Thevenin equivalent model parameters. Using different open circuit voltages the SoC is estimated and compared with respect to estimation accuracy and initialization error recovery. The experimental results showed that these online SoC estimation methods, in combination with different open circuit voltage-state of charge (OCV-SoC) curves, can effectively limit the error, thus guaranteeing accuracy and robustness.
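A heavily simplified sketch of model-based SoC estimation (scalar Kalman correction, linear OCV-SoC curve, synthetic data; the paper's EKF/PF/UKF operate on the full Thevenin model with measured OCV curves):

```python
import random

CAP_AS = 3600.0          # cell capacity in ampere-seconds (1 Ah, invented)
R0 = 0.05                # ohmic resistance, ohm (invented)

def ocv(soc):
    """Toy linear open-circuit-voltage curve (real OCV-SoC curves are nonlinear)."""
    return 3.0 + 1.2 * soc

random.seed(0)
true_soc, est_soc, P = 0.9, 0.5, 1.0   # truth, estimate (deliberately wrong), variance
Q, R = 1e-7, 0.01                       # process / measurement noise variances
I, dt = 1.0, 1.0                        # constant 1 A discharge, 1 s steps

for _ in range(600):
    true_soc -= I * dt / CAP_AS
    # Measured terminal voltage = OCV - ohmic drop + sensor noise
    v_meas = ocv(true_soc) - R0 * I + random.gauss(0, R ** 0.5)
    # Predict: coulomb counting
    est_soc -= I * dt / CAP_AS
    P += Q
    # Correct with the voltage measurement (H = dV/dSoC = 1.2 for the toy curve)
    H = 1.2
    K = P * H / (H * P * H + R)
    est_soc += K * (v_meas - (ocv(est_soc) - R0 * I))
    P *= (1 - K * H)

print(abs(est_soc - true_soc) < 0.05)  # True if the initialization error was corrected
```

The voltage correction pulls the deliberately wrong initial estimate back toward the truth, which is exactly the initialization-error-recovery property the abstract evaluates.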
Initialization strategies and diversity in evolutionary timetabling.
Burke, E K; Newall, J P; Weare, R F
1998-01-01
This document seeks to provide a scientific basis by which different initialization algorithms for evolutionary timetabling may be compared. Seeding the initial population may be used to improve initial quality and provide a better starting point for the evolutionary algorithm. This must be tempered against the consideration that if the seeding algorithm produces very similar solutions, then the loss of genetic diversity may well lead to a worse final solution. Diversity, we hope, provides a good indication of how good the final solution will be, although only by running the evolutionary algorithm will the exact result be found. We will investigate the effects of heuristic seeding by taking quality and diversity measures of populations generated by heuristic initialization methods on both random and real-life data, as well as assessing the long-term performance of an evolutionary algorithm (found to work well on the timetabling problem) when using heuristic initialization. This will show how the use of heuristic initialization strategies can substantially improve the performance of evolutionary algorithms for the timetabling problem.
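The diversity notion discussed here can be sketched as mean pairwise Hamming distance over timetable encodings (random encodings stand in for real timetables; the perturbation rate is invented):

```python
import itertools
import random

def mean_pairwise_hamming(population):
    """Average Hamming distance over all pairs: a simple genetic-diversity
    measure for fixed-length encodings."""
    pairs = list(itertools.combinations(population, 2))
    total = sum(sum(a != b for a, b in zip(p, q)) for p, q in pairs)
    return total / len(pairs)

random.seed(0)
# Random initialization: each of 30 events assigned one of 10 timeslots
random_pop = [[random.randrange(10) for _ in range(30)] for _ in range(20)]
# Heuristically seeded population: small perturbations of one good solution
seed = [random.randrange(10) for _ in range(30)]
seeded_pop = [[(g + (1 if random.random() < 0.05 else 0)) % 10 for g in seed]
              for _ in range(20)]

print(mean_pairwise_hamming(random_pop) > mean_pairwise_hamming(seeded_pop))  # True
```

The seeded population starts from better solutions but with far lower diversity, which is precisely the trade-off the study measures.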
Algorithms and Algorithmic Languages.
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
An Evolutionary Game Theory Model of Spontaneous Brain Functioning
National Research Council Canada - National Science Library
Dario Madeo; Agostino Talarico; Alvaro Pascual-Leone; Chiara Mocenni; Emiliano Santarnecchi
2017-01-01
... conditions, making its understanding of fundamental importance in modern neuroscience. Here we present a theoretical and mathematical model based on an extension of evolutionary game theory on networks (EGN...
Electro-photographic-model-based halftoning
Goyal, Puneet; Gupta, Madhur; Shaked, Doron; Staelin, Carl; Fischer, Mani; Shacham, Omri; Jodra, Rodolfo; Allebach, Jan
2010-01-01
Most halftoning algorithms assume there is no interaction between neighboring dots or, if there is, that it is additive. Without accounting for the dot-gain effect, the printed image will not have the appearance predicted by the halftoning algorithm. Thus, there is a need to embed in the halftoning algorithm a printer model that can predict such deviations and develop a halftone accordingly. The direct binary search (DBS) algorithm employs a search heuristic to minimize the mean squared perceptually filtered error between the halftone and the continuous-tone original image. We incorporate a measurement-based stochastic model for dot interactions of an electro-photographic printer within the iterative DBS binary halftoning algorithm. The stochastic model developed is based on microscopic absorptance and variance measurements. We present an efficient strategy to estimate the impact of 5×5 neighborhood pixels on the central pixel absorptance. By including the impact of 5×5 neighborhood pixels, the average relative error between the predicted tone and the observed tone is reduced from around 21% to 4%. The experimental results also show that electro-photographic-model-based halftoning reduces the mottle and banding artifacts.
Fundamentals of natural computing basic concepts, algorithms, and applications
de Castro, Leandro Nunes
2006-01-01
Introduction A Small Sample of Ideas The Philosophy of Natural Computing The Three Branches: A Brief Overview When to Use Natural Computing Approaches Conceptualization General Concepts PART I - COMPUTING INSPIRED BY NATURE Evolutionary Computing Problem Solving as a Search Task Hill Climbing and Simulated Annealing Evolutionary Biology Evolutionary Computing The Other Main Evolutionary Algorithms From Evolutionary Biology to Computing Scope of Evolutionary Computing Neurocomputing The Nervous System Artif
Buckland, Stephen Terrence; Oedekoven, Cornelia Sabrina; Borchers, David Louis
2015-01-01
CSO was part-funded by EPSRC/NERC Grant EP/1000917/1. Conventional distance sampling adopts a mixed approach, using model-based methods for the detection process, and design-based methods to estimate animal abundance in the study region, given estimated probabilities of detection. In recent years, there has been increasing interest in fully model-based methods. Model-based methods are less robust for estimating animal abundance than conventional methods, but offer several advantages: they ...
Principles of models based engineering
Energy Technology Data Exchange (ETDEWEB)
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Efficient evolutionary algorithms for optimal control
López Cruz, I.L.
2002-01-01
If optimal control problems are solved by means of gradient based local search methods, convergence to local solutions is likely. Recently, there has been an increasing interest in the use
Designers' Cognitive Thinking Based on Evolutionary Algorithms
Zhang Shutao; Jianning Su; Chibing Hu; Peng Wang
2013-01-01
Research on cognitive thinking is important for constructing efficient intelligent design systems, but it is difficult to describe a model of cognitive thinking with a suitable mathematical theory. Based on an analysis of design strategy and innovative thinking, we investigated a design cognitive thinking model that included the external guide thinking of "width priority - depth priority" and the internal dominated thinking of "divergent thinking - convergent thinking", and built a reaso...
Evolutionary developmental psychology
National Research Council Canada - National Science Library
King, Ashley C; Bjorklund, David F
2010-01-01
The field of evolutionary developmental psychology can potentially broaden the horizons of mainstream evolutionary psychology by combining the principles of Darwinian evolution by natural selection...
Directory of Open Access Journals (Sweden)
Chira Camelia
2011-07-01
Full Text Available Proteins are complex structures made of amino acids having a fundamental role in the correct functioning of living cells. The structure of a protein is the result of the protein folding process. However, the general principles that govern the folding of natural proteins into a native structure are unknown. The problem of predicting a protein structure with minimum energy starting from the unfolded amino acid sequence is a highly complex and important task in molecular and computational biology. Protein structure prediction has important applications in fields such as drug design and disease prediction. The protein structure prediction problem is NP-hard even in simplified lattice protein models. An evolutionary model based on hill-climbing genetic operators is proposed for protein structure prediction in the hydrophobic-polar (HP) model. Problem-specific search operators are implemented and applied using a steepest-ascent hill-climbing approach. Furthermore, the proposed model enforces an explicit diversification stage during the evolution in order to avoid local optima. The main features of the resulting evolutionary algorithm - the hill-climbing mechanism and the diversification strategy - are evaluated in a set of numerical experiments for the protein structure prediction problem to assess their impact on the efficiency of the search process. Furthermore, the emerging consolidated model is compared to relevant algorithms from the literature for a set of difficult bidimensional instances from lattice protein models. The results obtained by the proposed algorithm are promising and competitive with those of related methods.
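The hill-climbing search in the HP lattice model can be illustrated with a minimal 2-D sketch. This is a first-improvement variant rather than the paper's steepest-ascent operators, and it omits the diversification stage; `fold`, `hp_energy`, and the single-direction move set are simplified stand-ins.

```python
import random

MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

def fold(dirs):
    """Turn a direction list into lattice coordinates; None on collision."""
    pos = [(0, 0)]
    for d in dirs:
        dx, dy = MOVES[d]
        x, y = pos[-1]
        nxt = (x + dx, y + dy)
        if nxt in pos:
            return None                      # not self-avoiding
        pos.append(nxt)
    return pos

def hp_energy(seq, pos):
    """HP energy: -1 per non-consecutive H-H lattice contact."""
    index = {p: i for i, p in enumerate(pos)}
    e = 0
    for i, p in enumerate(pos):
        if seq[i] != 'H':
            continue
        for dx, dy in MOVES.values():
            j = index.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and seq[j] == 'H':
                e -= 1                       # count each contact once
    return e

def hill_climb(seq, steps=500, seed=1):
    """Mutate one direction at a time; keep folds that are no worse."""
    rng = random.Random(seed)
    dirs = ['R'] * (len(seq) - 1)            # start fully extended
    best = hp_energy(seq, fold(dirs))
    for _ in range(steps):
        i = rng.randrange(len(dirs))
        old = dirs[i]
        dirs[i] = rng.choice('UDLR')
        pos = fold(dirs)
        if pos is None:
            dirs[i] = old                    # collision: reject
            continue
        e = hp_energy(seq, pos)
        if e <= best:
            best = e                         # accept equal or better folds
        else:
            dirs[i] = old
    return best
```

A full steepest-ascent version would instead score every single-direction neighbor of the current fold and move to the best one.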
Model based design introduction: modeling game controllers to microprocessor architectures
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
Model Based Fault Detection in a Centrifugal Pump Application
DEFF Research Database (Denmark)
Kallesøe, Carsten; Cocquempot, Vincent; Izadi-Zamanabadi, Roozbeh
2006-01-01
A model based approach for fault detection in a centrifugal pump, driven by an induction motor, is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, observer design and Analytical Redundancy Relation (ARR) design. Structural considerations...... is capable of detecting four different faults in the mechanical and hydraulic parts of the pump....
Behavior Emergence in Autonomous Robot Control by Means of Evolutionary Neural Networks
Neruda, Roman; Slušný, Stanislav; Vidnerová, Petra
We study the emergence of intelligent behavior in a simple mobile robot. The robot control system is realized by mechanisms based on neural networks and evolutionary algorithms. The evolutionary algorithm is responsible for the adaptation of the neural network's parameters based on the robot's performance in a simulated environment. In experiments, we demonstrate the performance of the evolutionary algorithm on selected problems, namely maze exploration and discrimination of walls and cylinders. A comparison of different network architectures is presented and discussed.
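The evolutionary adaptation of network parameters can be sketched with a toy elitist (mu+lambda) scheme. XOR stands in for the robot's discrimination task; the 2-2-1 network, the mutation scheme, and the fitness function are illustrative assumptions, not the authors' setup.

```python
import math
import random

def net(w, x):
    """2-2-1 feedforward tanh network; w is a flat list of 9 weights (with biases)."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def evolve(fitness, n=9, pop=30, gens=300, sigma=0.4, seed=5):
    """Elitist neuroevolution: keep the best half, refill with Gaussian mutants."""
    rng = random.Random(seed)
    P = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                  # lower fitness value = better
        P = P[:pop // 2]                     # truncation selection (elitist)
        P += [[g + rng.gauss(0, sigma) for g in p] for p in P]
    return min(P, key=fitness)

# Toy stand-in for the robot task: evolve a small controller-sized net on XOR.
CASES = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
err = lambda w: sum((net(w, x) - t) ** 2 for x, t in CASES)
best = evolve(err)
```

In the paper's setting, the fitness would instead come from simulating the robot and scoring its behavior, which is the only part of the loop that changes.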
Model-Based Reasoning in Humans Becomes Automatic with Training
Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J.
2015-01-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load—a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders. PMID:26379239
Model-based Software Engineering
DEFF Research Database (Denmark)
Kindler, Ekkart
2010-01-01
The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...
Pulmonary CT image classification with evolutionary programming.
Madsen, M T; Uppaluri, R; Hoffman, E A; McLennan, G
1999-12-01
It is often difficult to classify information in medical images from derived features. The purpose of this research was to investigate the use of evolutionary programming as a tool for selecting important features and generating algorithms to classify computed tomographic (CT) images of the lung. Training and test sets consisting of 11 features derived from multiple lung CT images were generated, along with an indicator of the target area from which features originated. The features included five parameters based on histogram analysis, 11 parameters based on run length and co-occurrence matrix measures, and the fractal dimension. Two classification experiments were performed. In the first, the classification task was to distinguish between the subtle but known differences between anterior and posterior portions of transverse lung CT sections. The second classification task was to distinguish normal lung CT images from emphysematous images. The performance of the evolutionary programming approach was compared with that of three statistical classifiers that used the same training and test sets. Evolutionary programming produced solutions that compared favorably with those of the statistical classifiers. In separating the anterior from the posterior lung sections, the evolutionary programming results were better than two of the three statistical approaches. The evolutionary programming approach correctly identified all the normal and abnormal lung images and accomplished this by using fewer features than the best statistical method. The results of this study demonstrate the utility of evolutionary programming as a tool for developing classification algorithms.
Handling Continuous Attributes in an Evolutionary Inductive Learner.
Divina, F.; Marchiori, E.
2005-01-01
This paper analyzes experimentally discretization algorithms for handling continuous attributes in evolutionary learning. We consider a learning system that induces a set of rules in a fragment of first-order logic (evolutionary inductive logic programming), and introduce a method where a given
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to using a similar Wiener-based gain rule in each of them. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
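A Wiener-based spectral gain rule of the kind shared by these algorithms can be sketched per frequency bin. The power-subtraction SNR estimate and the gain floor below are generic textbook choices, not the specifics of any of the compared algorithms.

```python
def wiener_gain(noisy_power, noise_power, floor=0.1):
    """Wiener-style gain per bin: G = xi / (1 + xi), where the a priori
    SNR xi is crudely estimated by power subtraction; the floor limits
    musical noise at the cost of less suppression."""
    gains = []
    for p, n in zip(noisy_power, noise_power):
        xi = max(p - n, 0.0) / n if n > 0 else 0.0   # rough a priori SNR
        gains.append(max(xi / (1.0 + xi), floor))
    return gains
```

The model-based algorithms in the study differ mainly in how the speech and noise powers feeding this gain are estimated, not in the gain rule itself.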
Learning Intelligent Genetic Algorithms Using Japanese Nonograms
Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen
2012-01-01
An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…
Investigation on Evolutionary Computation Techniques of a Nonlinear System
Directory of Open Access Journals (Sweden)
Tran Trong Dao
2011-01-01
Full Text Available The main aim of this work is to show that a powerful optimizing tool like evolutionary algorithms (EAs) can in reality be used for the simulation and optimization of a nonlinear system. A nonlinear mathematical model is required to describe the dynamic behaviour of a batch process; this justifies the use of the evolutionary method of the EAs to deal with this process. Four algorithms from the field of artificial intelligence—differential evolution (DE), self-organizing migrating algorithm (SOMA), genetic algorithm (GA), and simulated annealing (SA)—are used in this investigation. The results show that EAs can be used successfully in the optimization of this process.
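Of the four algorithms, differential evolution is the most compact to sketch. The following is textbook DE/rand/1/bin on a generic bounded objective, not the batch-process model from the study.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=2):
    """DE/rand/1/bin: mutate with a scaled difference vector, apply
    binomial crossover, then greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)           # force one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:                     # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]
```

Using it on a process model only requires supplying that model's cost function as `f` and the decision-variable ranges as `bounds`.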
QDist—Quartet Distance Between Evolutionary Trees
DEFF Research Database (Denmark)
Mailund; Pedersen, Christian N. Storm
2004-01-01
QDist is a program for computing the quartet distance between two unrooted evolutionary trees, i.e. the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. The implementation is based on an algorithm with running...... time O(n log² n), which makes it practical to compare large trees....
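The quartet distance itself can be computed by brute force in O(n^4), which is useful for checking results on small trees (QDist's sub-quadratic algorithm is far more involved). The sketch below infers each quartet's topology from the four-point condition on path lengths; the adjacency-list tree encoding is an assumption for illustration.

```python
from collections import deque
from itertools import combinations

def all_pairs(adj, leaves):
    """BFS distances from each leaf in an unrooted tree (adjacency lists)."""
    dist = {}
    for s in leaves:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

def quartet_topology(d, a, b, c, e):
    """Pairing minimizing the four-point sum; None for an unresolved star."""
    sums = {(a, b): d[a][b] + d[c][e],
            (a, c): d[a][c] + d[b][e],
            (a, e): d[a][e] + d[b][c]}
    m = min(sums.values())
    winners = [k for k, v in sums.items() if v == m]
    return winners[0] if len(winners) == 1 else None

def quartet_distance(adj1, adj2, leaves):
    """O(n^4) count of quartets whose topology differs between two trees."""
    d1, d2 = all_pairs(adj1, leaves), all_pairs(adj2, leaves)
    return sum(1 for q in combinations(sorted(leaves), 4)
               if quartet_topology(d1, *q) != quartet_topology(d2, *q))
```

In a tree, two of the three four-point sums are always equal and at least as large as the third, so a strict minimum identifies the induced quartet topology.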
Algorithms Introduction to Algorithms
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 20-27. Series Article by R K Shyamasundar. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027
Cloud Particles Evolution Algorithm
Directory of Open Access Journals (Sweden)
Wei Li
2015-01-01
Full Text Available Many evolutionary algorithms have attracted the attention of researchers and have been applied to solve optimization problems. This paper presents a new optimization method called the cloud particles evolution algorithm (CPEA), which solves optimization problems based on the cloud formation process and the phase transformations of natural substances. The cloud is assumed to have three states in the proposed algorithm. The gaseous state represents global exploration, the liquid state represents the intermediate process from global exploration to local exploitation, and the solid state represents local exploitation. The cloud is composed of discrete and independent particles in this algorithm. The cloud particles use the phase transformations of the three states to realize global exploration and local exploitation in the optimization process. Moreover, the cloud particles not only realize the survival of the fittest through a competition mechanism but also ensure the diversity of the cloud particles through a reciprocity mechanism. The effectiveness of the algorithm is validated on different benchmark problems. The proposed algorithm is compared with a number of other well-known optimization algorithms, and the experimental results show that the cloud particles evolution algorithm has a higher efficiency than some other algorithms.
Directory of Open Access Journals (Sweden)
Clavel Quintana
2010-06-01
Full Text Available Almeida, Amarilla, and Barán studied the multiobjective problem of minimizing short-, medium-, and long-term costs for the location of telephone exchanges in the city of Asunción, Paraguay, determining the number of exchanges and their optimal locations using the evolutionary algorithm SPEAII. In this paper we apply the evolutionary algorithm NSGAII to the problem of locating telephone exchanges in Cabudare, Lara State, Venezuela, demonstrating that it constitutes a valid option for addressing this multiobjective problem.
A Cultural Algorithm for POMDPs from Stochastic Inventory Control
Prestwich, S.; Tarim, S.A.; Rossi, R.; Hnich, B.
2008-01-01
Reinforcement Learning algorithms such as SARSA with an eligibility trace, and Evolutionary Computation methods such as genetic algorithms, are competing approaches to solving Partially Observable Markov Decision Processes (POMDPs) which occur in many fields of Artificial Intelligence. A powerful
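SARSA with an eligibility trace, one of the competing approaches named above, can be sketched on a toy problem. The chain MDP and all hyperparameters below are illustrative assumptions, not the partially observable inventory setting of the paper.

```python
import random

def sarsa_lambda(n_states=6, episodes=300, alpha=0.2, gamma=0.95,
                 lam=0.9, eps=0.1, seed=3):
    """Tabular SARSA(lambda) on a chain: actions move left/right,
    reward 1 on reaching the right end. Eligibility traces spread
    each TD error back along the recently visited state-action pairs."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]       # actions: 0=left, 1=right

    def policy(s):
        if rng.random() < eps:
            return rng.randrange(2)                  # explore
        return 0 if Q[s][0] > Q[s][1] else 1         # greedy (ties go right)

    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n_states)]    # eligibility traces
        s, a = 0, policy(0)
        while True:
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            done = s2 == n_states - 1
            r = 1.0 if done else 0.0
            a2 = policy(s2)
            target = r if done else r + gamma * Q[s2][a2]
            delta = target - Q[s][a]
            e[s][a] += 1.0                           # accumulating trace
            for i in range(n_states):
                for j in range(2):
                    Q[i][j] += alpha * delta * e[i][j]
                    e[i][j] *= gamma * lam           # decay all traces
            if done:
                break
            s, a = s2, a2
    return Q
```

For a POMDP, the state index would be replaced by an observation (or belief) representation, which is exactly where the comparison with evolutionary methods becomes interesting.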
Markov Networks in Evolutionary Computation
Shakya, Siddhartha
2012-01-01
Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov-network-based EDAs are reviewed in the book. Hot current researc...
Evolutionary molecular medicine.
Nesse, Randolph M; Ganten, Detlev; Gregory, T Ryan; Omenn, Gilbert S
2012-05-01
Evolution has long provided a foundation for population genetics, but some major advances in evolutionary biology from the twentieth century that provide foundations for evolutionary medicine are only now being applied in molecular medicine. They include the need for both proximate and evolutionary explanations, kin selection, evolutionary models for cooperation, competition between alleles, co-evolution, and new strategies for tracing phylogenies and identifying signals of selection. Recent advances in genomics are transforming evolutionary biology in ways that create even more opportunities for progress at its interfaces with genetics, medicine, and public health. This article reviews 15 evolutionary principles and their applications in molecular medicine in hopes that readers will use them and related principles to speed the development of evolutionary molecular medicine.
Directory of Open Access Journals (Sweden)
Jing Chen
2013-01-01
Full Text Available Due to their high efficiency and good scalability, hierarchical hybrid P2P architectures have recently drawn more and more attention in P2P streaming research and application fields. The problem of super peer selection, which is the key problem in hybrid heterogeneous P2P architectures, is highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary games is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework, and its evolutionarily stable strategies (ESSs) are analyzed. Then a distributed Q-learning algorithm (ESS-SPS), following the mixed strategies obtained from this analysis, is proposed so that the peers converge to the ESSs based on their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers grows.
Statistical shape model-based femur kinematics from biplane fluoroscopy
DEFF Research Database (Denmark)
Baka, N.; de Bruijne, Marleen; Walsum, T. van
2012-01-01
could potentially lower costs and radiation dose. Therefore, we propose to substitute the segmented bone surface with a statistical shape model based estimate. A dedicated dynamic reconstruction and tracking algorithm was developed estimating the shape based on all frames, and pose per frame....... The algorithm minimizes the difference between the projected bone contour and image edges. To increase robustness, we employ a dynamic prior, image features, and prior knowledge about bone edge appearances. This enables tracking and reconstruction from a single initial pose per sequence. We evaluated our method...
Model-Based Development of Control Systems for Forestry Cranes
Directory of Open Access Journals (Sweden)
Pedro La Hera
2015-01-01
Full Text Available Model-based methods are used in industry for prototyping concepts based on mathematical models. With our forest industry partners, we have established a model-based workflow for rapid development of motion control systems for forestry cranes. Applying this working method, we can verify control algorithms, both theoretically and practically. This paper is an example of this workflow and presents four topics related to the application of nonlinear control theory. The first topic presents the system of differential equations describing the motion dynamics. The second topic presents nonlinear control laws formulated according to sliding mode control theory. The third topic presents a procedure for model calibration and control tuning that are a prerequisite to realize experimental tests. The fourth topic presents the results of tests performed on an experimental crane specifically equipped for these tasks. Results of these studies show the advantages and disadvantages of these control algorithms, and they highlight their performance in terms of robustness and smoothness.
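A sliding mode control law of the kind referenced in the second topic can be sketched on a double integrator. This toy plant, its gains, and the disturbance are assumptions for illustration; the crane's actual motion dynamics are far richer.

```python
import math

def simulate_smc(x0, v0, target, k=5.0, lam=2.0, dt=0.001, steps=8000):
    """Sliding mode sketch for x'' = u + d(t): drive the sliding variable
    s = e_dot + lam*e to zero with a switching term, despite a bounded
    disturbance d. On the surface s = 0, the error decays as exp(-lam*t)."""
    x, v = x0, v0
    for i in range(steps):
        e, de = x - target, v
        s = de + lam * e
        # Equivalent control cancels lam*e_dot; switching term rejects d.
        u = -lam * de - k * (1 if s > 0 else -1 if s < 0 else 0)
        d = 0.5 * math.sin(2.0 * i * dt)        # bounded disturbance, |d| <= 0.5
        v += (u + d) * dt                        # forward Euler integration
        x += v * dt
    return x, v
```

Since |d| is below the switching gain k, s reaches zero in finite time; the residual chattering visible in discrete time is what boundary-layer or higher-order sliding mode variants are designed to smooth.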
Multiobjective Multifactorial Optimization in Evolutionary Multitasking.
Gupta, Abhishek; Ong, Yew-Soon; Feng, Liang; Tan, Kay Chen
2016-05-03
In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed to date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry.
Directory of Open Access Journals (Sweden)
Ina Schieferdecker
2012-02-01
Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, as well as for automated test generation. Model-based security testing (MBST) is a relatively new field especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.
Near-Minimal Node Control of Networked Evolutionary Games
Riehl, James Robert; Cao, Ming
2014-01-01
We investigate a problem related to the controllability of networked evolutionary games, first presenting an algorithm that computes a near-minimal set of nodes to drive all nodes in a tree network to a desired strategy, and then briefly discussing an algorithm that works for arbitrary networks
Learning from evolutionary optimization by retracing search paths
van der Walle, P.; Savolainen, Janne; Kuipers, L.; Herek, Jennifer Lynn
2009-01-01
Evolutionary search algorithms are used routinely to find optimal solutions for multi-parameter problems, such as complex pulse shapes in coherent control experiments. The algorithms are based on evolving a set of trial solutions iteratively until an optimum is reached, at which point the experiment
Computing the Quartet Distance Between Evolutionary Trees in Time O(n log n)
DEFF Research Database (Denmark)
Brodal, Gerth Sølfting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm
2003-01-01
Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris, and ...... unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n²).
Model Based Autonomy for Robust Mars Operations
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require that the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or to perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
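The idea of composing declarative device models to diagnose failures can be sketched with a toy consistency-based diagnoser. The two-valve flow line, the stuck-closed fault mode, and all names below are illustrative assumptions, not the systems discussed above.

```python
from itertools import combinations

def predict(state, broken=frozenset()):
    """Compose simple device models: flow passes only if every valve in the
    line works and is commanded open; a broken valve is assumed stuck closed."""
    return all(state[v] and v not in broken for v in state)

def diagnose(state, observed_flow, components):
    """Consistency-based diagnosis: return the smallest sets of broken
    components that make the model's prediction match the observation."""
    for size in range(len(components) + 1):
        cands = [set(c) for c in combinations(components, size)
                 if predict(state, frozenset(c)) == observed_flow]
        if cands:
            return cands                     # all minimal candidate diagnoses
    return []

state = {'valve_a': True, 'valve_b': True}   # both valves commanded open
# Commanded open but no flow observed: either valve alone explains it.
print(diagnose(state, False, list(state)))
```

A reasoning engine of the kind described would search candidate diagnoses like this over system-wide composed models, then plan a reconfiguration consistent with the surviving candidates.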
A Process Algebra Genetic Algorithm
Karaman, Sertac; Shima, Tal; Frazzoli, Emilio
2011-01-01
A genetic algorithm that utilizes process algebra for coding of solution chromosomes and for defining evolutionary based operators is presented. The algorithm is applicable to mission planning and optimization problems. As an example the high level mission planning for a cooperative group of uninhabited aerial vehicles is investigated. The mission planning problem is cast as an assignment problem, and solutions to the assignment problem are given in the form of chromosomes that are manipulate...
Embracing model-based designs for dose-finding trials.
Love, Sharon B; Brown, Sarah; Weir, Christopher J; Harbron, Chris; Yap, Christina; Gaschler-Markefski, Birgit; Matcham, James; Caffrey, Louise; McKevitt, Christopher; Clive, Sally; Craddock, Charlie; Spicer, James; Cornelius, Victoria
2017-07-25
Dose-finding trials are essential to drug development as they establish recommended doses for later-phase testing. We aim to motivate wider use of model-based designs for dose finding, such as the continual reassessment method (CRM). We carried out a literature review of dose-finding designs and conducted a survey to identify perceived barriers to their implementation. We describe the benefits of model-based designs (flexibility, superior operating characteristics, extended scope), their current uptake, and existing resources. The most prominent barriers to implementation of a model-based design were lack of suitable training, chief investigators' preference for algorithm-based designs (e.g., 3+3), and limited resources for study design before funding. We use a real-world example to illustrate how these barriers can be overcome. There is overwhelming evidence for the benefits of CRM. Many leading pharmaceutical companies routinely implement model-based designs. Our analysis identified barriers for academic statisticians and clinical academics in mirroring the progress industry has made in trial design. Unified support from funders, regulators, and journal editors could result in more accurate doses for later-phase testing, and increase the efficiency and success of clinical drug development. We give recommendations for increasing the uptake of model-based designs for dose-finding trials in academia.
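As an illustration of the kind of model-based design discussed above, a one-parameter power-model CRM dose-escalation update can be sketched in a few lines. This is a hedged, minimal sketch, not the paper's implementation: the skeleton values, the normal prior scale of 1.34 (a value often used in CRM examples), the grid-based posterior integration, and all function and parameter names are illustrative assumptions.

```python
import math

def crm_recommend(skeleton, toxicities, trials, target=0.25):
    """One-parameter power-model CRM sketch: dose i has toxicity skeleton[i]**exp(a)."""
    grid = [-3.0 + 6.0 * k / 200 for k in range(201)]      # grid over the model parameter a
    weights = []
    for a in grid:
        logp = -a * a / (2.0 * 1.34 ** 2)                  # N(0, 1.34^2) prior, up to a constant
        for p0, tox, n in zip(skeleton, toxicities, trials):
            if n:
                p = p0 ** math.exp(a)                      # model-based toxicity probability
                logp += tox * math.log(p) + (n - tox) * math.log(1.0 - p)
        weights.append(math.exp(logp))
    total = sum(weights)
    # Posterior-mean toxicity at each dose; recommend the dose closest to target
    est = [sum(w * p0 ** math.exp(a) for w, a in zip(weights, grid)) / total
           for p0 in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

After each patient cohort, the observed toxicity counts re-weight the posterior and the recommended dose moves accordingly; this adaptivity is the flexibility advantage over fixed algorithm-based schemes such as 3+3.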
Remembering the evolutionary Freud.
Young, Allan
2006-03-01
Throughout his career as a writer, Sigmund Freud maintained an interest in the evolutionary origins of the human mind and its neurotic and psychotic disorders. In common with many writers then and now, he believed that the evolutionary past is conserved in the mind and the brain. Today the "evolutionary Freud" is nearly forgotten. Even among Freudians, he is regarded as a red herring, relevant only to the extent that he diverts attention from the enduring achievements of the authentic Freud. There are three ways to explain these attitudes. First, the evolutionary Freud's key work is the "Overview of the Transference Neurosis" (1915). But it was published at an inopportune moment, forty years after the author's death, during the so-called "Freud wars." Second, Freud eventually lost interest in the "Overview" and the prospect of a comprehensive evolutionary theory of psychopathology. The publication of The Ego and the Id (1923), introducing Freud's structural theory of the psyche, marked the point of no return. Finally, Freud's evolutionary theory is simply not credible. It is based on just-so stories and a thoroughly discredited evolutionary mechanism, Lamarckian use-inheritance. Explanations one and two are probably correct but also uninteresting. Explanation number three assumes that there is a fundamental difference between Freud's evolutionary narratives (not credible) and the evolutionary accounts of psychopathology that currently circulate in psychiatry and mainstream journals (credible). The assumption is mistaken but worth investigating.
Evolutionary Computation Methods and their applications in Statistics
Directory of Open Access Journals (Sweden)
Francesco Battaglia
2013-05-01
Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm in the random generation of multivariate probability distributions, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments.
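One of the statistical applications listed above, selection of variables in regression, lends itself to a compact genetic-algorithm sketch. The following is an illustrative toy, not taken from the article: the penalized residual-sum-of-squares score, the penalty value, the truncation selection scheme, and the synthetic data in the test are all assumptions.

```python
import numpy as np

def ga_select(X, y, penalty=10.0, pop=30, gens=40, seed=0):
    """GA variable selection sketch: minimise RSS + penalty * (number of variables)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def score(mask):
        if not mask.any():
            return y @ y                     # empty model: RSS of predicting zero
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        resid = y - X[:, mask] @ beta
        return resid @ resid + penalty * mask.sum()

    population = rng.random((pop, p)) < 0.5  # random bitmasks over variables
    for _ in range(gens):
        fitness = np.array([score(m) for m in population])
        elite = population[np.argsort(fitness)[: pop // 2]]     # truncation selection
        # Uniform crossover between random elite parents, then bit-flip mutation
        parents = elite[rng.integers(0, len(elite), (pop - len(elite), 2))]
        cross = rng.random((pop - len(elite), p)) < 0.5
        children = np.where(cross, parents[:, 0], parents[:, 1])
        children ^= rng.random(children.shape) < 1.0 / p
        population = np.vstack([elite, children])
    fitness = np.array([score(m) for m in population])
    return population[np.argmin(fitness)]
```

On synthetic data with a sparse true model, the penalty term drives spurious variables out while the strong signal variables are retained.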
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of chaos control by means of evolutionary algorithms. The work explains how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. The one-dimensional logistic equation was used as a model of a deterministic chaotic system. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions. For each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF used.
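A minimal sketch of the idea, stabilizing the unstable fixed point of the logistic map with an evolutionarily tuned feedback gain, is given below. This is not SOMA and not the paper's targeting CF: the windowed feedback rule, the simple mean-distance cost, and the mutation-based search are illustrative assumptions.

```python
import random

R0 = 3.8                          # nominal parameter of the chaotic logistic map
XSTAR = 1.0 - 1.0 / R0            # its unstable fixed point

def control_cost(K, x0=0.3, steps=400, settle=350, window=0.05):
    """Cost of feedback gain K: mean distance to the fixed point after settling.

    Small parameter perturbations r = R0 + K*(x - x*) are applied only when
    the orbit is inside a small window around x*, in the spirit of targeting
    methods for chaos control."""
    x, err = x0, 0.0
    for n in range(steps):
        r = R0 + K * (x - XSTAR) if abs(x - XSTAR) < window else R0
        x = min(max(r * x * (1.0 - x), 0.0), 1.0)   # keep the orbit in [0, 1]
        if n >= settle:
            err += abs(x - XSTAR)
    return err / (steps - settle)

def evolve_gain(pop=40, gens=30, seed=1):
    """Minimal evolutionary search (random init + Gaussian mutation) for K."""
    rng = random.Random(seed)
    best = min((rng.uniform(0.0, 15.0) for _ in range(pop)), key=control_cost)
    for _ in range(gens):
        child = best + rng.gauss(0.0, 1.0)
        if control_cost(child) <= control_cost(best):
            best = child
    return best
```

A stabilizing gain drives the cost close to zero once the chaotic orbit wanders into the control window, whereas the uncontrolled map keeps a large average distance from the fixed point.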
Model-Based Method for Sensor Validation
Vatan, Farrokh
2012-01-01
Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. As a result, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems in which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundant relations (ARRs).
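The logical flavor of the approach, inferring faulty sensors from violated redundant relations rather than from failure probabilities, can be sketched as a minimal hitting-set computation over analytical redundant relations (ARRs). This toy is an assumption-laden illustration, not the algorithm developed in the work:

```python
from itertools import combinations

def diagnose(relations, readings, tol=1e-6):
    """ARR-style consistency diagnosis sketch.

    relations: (residual_fn, sensor_names) pairs; a residual near zero means
               the relation is consistent with the readings.
    Returns the minimal sets of sensors whose failure explains every violated
    relation (a smallest hitting set of the violated relations' supports)."""
    violated = [set(names) for fn, names in relations if abs(fn(readings)) > tol]
    if not violated:
        return [set()]                        # all relations consistent: no fault
    sensors = sorted(set().union(*violated))
    for size in range(1, len(sensors) + 1):   # smallest explanations first
        hits = [set(combo) for combo in combinations(sensors, size)
                if all(v & set(combo) for v in violated)]
        if hits:
            return hits
    return []
```

For three sensors that should all agree, a reading that breaks both relations involving one sensor logically implicates that sensor alone, with no failure probabilities required.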
Model-based equipment diagnosis
Collins, David J.; Strojwas, Andrzej J.; Mozumder, P. K.
1994-09-01
A versatile methodology is described in which equipment models have been incorporated into a single process diagnostic system for the PECVD of silicon nitride. The diagnosis system has been developed and tested with data collected using an Applied Materials Precision 5000 single wafer reactor. The parametric equipment diagnosis system provides the basis for optimal control of multiple process responses by the classification of potential sources of equipment faults without the assistance of in-situ sensor data. The basis for the diagnosis system is the use of tuned empirical equipment models which have been developed using a physically-based experimental design. Nine individual site-specific models were used to provide an effective method of modeling the spatially-dependent process variations across the wafer with better sensitivity than mean-based models. The diagnostic system has been tested using data that was produced by adjusting the actual equipment controls to artificially simulate a variety of possible subtle equipment drifts and shifts. Statistical algorithms have been implemented which detect equipment drift, shift and variance stability faults using the difference between the predicted process responses and the off-line measured process responses. Fault classification algorithms have been developed to classify the most likely causes for the process drifts and shifts using a pattern recognition system based upon flexible discriminant analysis.
Indian Academy of Sciences (India)
Evolutionary Biology Today – The Domain of Evolutionary Biology. Amitabh Joshi. Series Article. Resonance – Journal of Science Education, Volume 7, Issue 11, November 2002, pp. 8-17.
Indian Academy of Sciences (India)
Amitabh Joshi studies and teaches evolutionary genetics and population ecology at the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore. His current research interests are in life-history evolution, the evolutionary genetics of biological clocks, and the evolution of ecological specialization.
Evolutionary humanoid robotics
Eaton, Malachy
2015-01-01
This book examines how two distinct strands of research on autonomous robots, evolutionary robotics and humanoid robot research, are converging. The book will be valuable for researchers and postgraduate students working in the areas of evolutionary robotics and bio-inspired computing.
Indian Academy of Sciences (India)
Evolutionary Biology Today – What do Evolutionary Biologists do? Amitabh Joshi. Series Article. Resonance – Journal of Science Education, Volume 8, Issue 2, February 2003, pp. 6-18.
Applying evolutionary anthropology
Gibson, Mhairi A; Lawson, David W
2015-01-01
Evolutionary anthropology provides a powerful theoretical framework for understanding how both current environments and legacies of past selection shape human behavioral diversity. This integrative and pluralistic field, combining ethnographic, demographic, and sociological methods, has provided new insights into the ultimate forces and proximate pathways that guide human adaptation and variation. Here, we present the argument that evolutionary anthropological studies of human behavior also h...
MELEC: Meta-Level Evolutionary Composer
Directory of Open Access Journals (Sweden)
Andres Calvo
2011-02-01
Full Text Available Genetic algorithms (GAs) are global search mechanisms that have been applied to many disciplines including music composition. The computer system MELEC composes music using evolutionary computation on two levels: the object and the meta. At the object level, MELEC employs GAs to compose melodic motifs and iteratively refine them through evolving generations. At the meta level, MELEC forms the overall musical structure by concatenating the generated motifs in an order that depends on the evolutionary process. In other words, the structure of the music is determined by a genealogical traversal of the algorithm’s execution sequence. In this implementation, we introduce a new data structure that tracks the execution of the GA, the Genetic Algorithm Traversal Tree, and use its traversal to define the musical structure. Moreover, we employ a Fibonacci-based fitness function to shape the melodic evolution.
Advances of evolutionary computation methods and operators
Cuevas, Erik; Oliva Navarro, Diego Alberto
2016-01-01
The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.
Massively parallel evolutionary computation on GPGPUs
Tsutsui, Shigeyoshi
2013-01-01
Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened up...
A Model-based Prognostics Approach Applied to Pneumatic Valves
Directory of Open Access Journals (Sweden)
Matthew J. Daigle
2011-01-01
Full Text Available Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
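The particle-filtering core of such a prognostics scheme can be sketched for a toy linearly degrading component. The wear model, noise levels, and remaining-useful-life (RUL) projection below are illustrative assumptions, far simpler than the paper's physics-based valve model:

```python
import math
import random

def particle_filter_prognosis(observations, n_particles=500, wear_rate=0.01,
                              proc_sigma=0.002, meas_sigma=0.05,
                              threshold=1.0, horizon=500, seed=0):
    """Toy particle-filter prognostics for a linearly degrading state.

    Tracks a hidden wear state x from noisy observations, then projects the
    particles forward to estimate remaining useful life (RUL) in steps."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    for z in observations:
        # Predict: propagate each particle through the degradation model
        particles = [x + wear_rate + rng.gauss(0.0, proc_sigma) for x in particles]
        # Update: weight particles by the Gaussian measurement likelihood of z
        weights = [math.exp(-(z - x) ** 2 / (2.0 * meas_sigma ** 2)) for x in particles]
        if sum(weights) == 0.0:
            weights = [1.0] * n_particles     # degenerate weights: fall back to uniform
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # Prognosis: steps until each particle's projection crosses the failure threshold
    ruls = []
    for x in particles:
        t = 0
        while x < threshold and t < horizon:
            x += wear_rate
            t += 1
        ruls.append(t)
    return sum(particles) / n_particles, sum(ruls) / len(ruls)
```

The spread of the particle RULs (not shown) is what provides the uncertainty management the abstract refers to: the prediction is a distribution, not a point estimate.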
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are surveyed, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
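The basic concepts mentioned above (selection, crossover, mutation, and survival of the fittest) can be sketched as a textbook bitstring genetic algorithm, shown here on the OneMax problem of maximizing the number of ones; all parameter choices are illustrative:

```python
import random

def genetic_algorithm(fitness, length=12, pop_size=30, gens=60, seed=3):
    """Textbook bitstring GA: tournament selection, one-point crossover,
    per-bit mutation, and one-elite survival."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)             # binary tournament selection
        return max(a, b, key=fitness)

    for _ in range(gens):
        nxt = [max(pop, key=fitness)[:]]      # elitism: keep the current best
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with rate 1/length
            nxt.append([b ^ (rng.random() < 1.0 / length) for b in child])
        pop = nxt
    return max(pop, key=fitness)

# Example: OneMax, i.e. maximise the number of ones in the string
best = genetic_algorithm(sum)
```

Any problem whose candidate solutions can be encoded as bitstrings and scored by a fitness function can be plugged into the same loop, which is the "widespread use" the abstract alludes to.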
Energy Technology Data Exchange (ETDEWEB)
Hernandez Galicia, Julio A.; Nieva Gomez, Rolando [Instituto de Investigaciones Electricas, Temixco, Morelos (Mexico)
2001-07-01
In the present work, the mathematical formulation of the reactive compensation planning problem is presented, the solution technique based on evolutionary programming is described, and compensation results for the Northwest subsystem of the Mexican electrical system are shown. An optimization technique based on Evolutionary Programming is proposed to solve the problem of planning reactive compensation in electrical energy transmission networks. The problem consists in determining how much compensation to add and where to locate it so as to minimize the investment cost of the compensation equipment, plus the operation costs associated with transmission losses, plus a penalty function associated with violations of the operative voltage limits. The compensation determined must allow the network to operate in normal conditions under any contingency from a pre-established set. The problem considered is non-linear and mixed-integer. Tests on a representative system of the Northwest area of the Mexican electrical system, with 171 nodes and 284 branches, are reported.
Expediting model-based optoacoustic reconstructions with tomographic symmetries
Energy Technology Data Exchange (ETDEWEB)
Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel, E-mail: dr@tum.de [Institute for Biological and Medical Imaging (IBMI), Helmholtz Center Munich, Ingolstädter Landstrasse 1, 85764 Neuherberg (Germany); Faculty of Medicine, Technical University of Munich, Ismaninger Strasse 22, 81675 Munich (Germany)
2014-01-15
Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches thus represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.
Modeling evolutionary games in populations with demographic structure
DEFF Research Database (Denmark)
Li, Xiang-Yi; Giaimo, Stefano; Baudisch, Annette
2015-01-01
Classic life history models are often based on optimization algorithms, focusing on the adaptation of survival and reproduction to the environment, while neglecting frequency dependent interactions in the population. Evolutionary game theory, on the other hand, studies frequency dependent strategy interactions, but usually omits life history and the demographic structure of the population. Here we show how an integration of both aspects can substantially alter the underlying evolutionary dynamics. We study the replicator dynamics of strategy interactions in life stage structured populations. Individuals...
The Complexity of Constructing Evolutionary Trees Using Experiments
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Fagerberg, Rolf; Pedersen, Christian N.S.
2001-01-01
We present tight upper and lower bounds for the problem of constructing evolutionary trees in the experiment model. We describe an algorithm which constructs an evolutionary tree of n species in time O(nd logd n) using at most n⌈d/2⌉(log2⌈d/2⌉-1 n+O(1)) experiments for d > 2, and at most n(log n...
Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference.
Suchow, Jordan W; Bourgin, David D; Griffiths, Thomas L
2017-07-01
Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity. Copyright © 2017 Elsevier Ltd. All rights reserved.
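The equivalence mentioned above can be made concrete: a discrete-time replicator update with fitnesses set equal to likelihoods is algebraically identical to Bayes' rule over hypothesis frequencies. A minimal sketch (the specific numbers in the test are arbitrary):

```python
def replicator_step(freqs, fitness):
    """Discrete replicator dynamics: type i is multiplied by fitness[i]
    and renormalised by the population mean fitness."""
    mean_fit = sum(p * f for p, f in zip(freqs, fitness))
    return [p * f / mean_fit for p, f in zip(freqs, fitness)]

def bayes_update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood,
    normalised by the evidence."""
    evidence = sum(p, l) if False else sum(p * l for p, l in zip(prior, likelihood))
    return [p * l / evidence for p, l in zip(prior, likelihood)]
```

Reading frequencies as prior probabilities and fitnesses as likelihoods, the two updates are the same map, which is why selection dynamics can be reinterpreted as an algorithm for Bayesian inference.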
Applications of evolutionary computation in image processing and pattern recognition
Cuevas, Erik; Perez-Cisneros, Marco
2016-01-01
This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...
Eco-evolutionary feedbacks, adaptive dynamics and evolutionary rescue theory.
Ferriere, Regis; Legendre, Stéphane
2013-01-19
Adaptive dynamics theory has been devised to account for feedbacks between ecological and evolutionary processes. Doing so opens new dimensions to and raises new challenges about evolutionary rescue. Adaptive dynamics theory predicts that successive trait substitutions driven by eco-evolutionary feedbacks can gradually erode population size or growth rate, thus potentially raising the extinction risk. Even a single trait substitution can suffice to degrade population viability drastically at once and cause 'evolutionary suicide'. In a changing environment, a population may track a viable evolutionary attractor that leads to evolutionary suicide, a phenomenon called 'evolutionary trapping'. Evolutionary trapping and suicide are commonly observed in adaptive dynamics models in which the smooth variation of traits causes catastrophic changes in ecological state. In the face of trapping and suicide, evolutionary rescue requires that the population overcome evolutionary threats generated by the adaptive process itself. Evolutionary repellors play an important role in determining how variation in environmental conditions correlates with the occurrence of evolutionary trapping and suicide, and what evolutionary pathways rescue may follow. In contrast with standard predictions of evolutionary rescue theory, low genetic variation may attenuate the threat of evolutionary suicide and small population sizes may facilitate escape from evolutionary traps.
3-D model-based tracking for UAV indoor localization.
Teulière, Céline; Marchand, Eric; Eck, Laurent
2015-05-01
This paper proposes a novel model-based tracking approach for 3-D localization. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this paper, given a 3-D model of the edges of the environment, we derive a multiple-hypothesis tracker which retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localization problem where GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
Evolutionary Mechanisms for Loneliness
Cacioppo, John T.; Cacioppo, Stephanie; Boomsma, Dorret I.
2013-01-01
Robert Weiss (1973) conceptualized loneliness as perceived social isolation, which he described as a gnawing, chronic disease without redeeming features. On the scale of everyday life, it is understandable how something as personally aversive as loneliness could be regarded as a blight on human existence. However, evolutionary time and evolutionary forces operate at such a different scale of organization than we experience in everyday life that personal experience is not sufficient to understand the role of loneliness in human existence. Research over the past decade suggests a very different view of loneliness than suggested by personal experience, one in which loneliness serves a variety of adaptive functions in specific habitats. We review evidence on the heritability of loneliness and outline an evolutionary theory of loneliness, with an emphasis on its potential adaptive value in an evolutionary timescale. PMID:24067110
Evolutionary behavioral genetics.
Zietsch, Brendan P; de Candia, Teresa R; Keller, Matthew C
2015-04-01
We describe the scientific enterprise at the intersection of evolutionary psychology and behavioral genetics-a field that could be termed Evolutionary Behavioral Genetics-and how modern genetic data is revolutionizing our ability to test questions in this field. We first explain how genetically informative data and designs can be used to investigate questions about the evolution of human behavior, and describe some of the findings arising from these approaches. Second, we explain how evolutionary theory can be applied to the investigation of behavioral genetic variation. We give examples of how new data and methods provide insight into the genetic architecture of behavioral variation and what this tells us about the evolutionary processes that acted on the underlying causal genetic variants.
Marine mammals: evolutionary biology
National Research Council Canada - National Science Library
Berta, Annalisa; Sumich, James L; Kovacs, Kit M
2015-01-01
The third edition of Marine Mammals: Evolutionary Biology provides a comprehensive and current assessment of the diversity, evolution, and biology of marine mammals, while highlighting the latest tools and techniques for their study...
Palmisano, Fabrizio; Elia, Angelo
2017-10-01
One of the main difficulties when dealing with landslide structural vulnerability is the diagnosis of the causes of crack patterns. This is partly due to the excessive complexity of models based on classical structural mechanics, which makes them inappropriate especially when a rapid vulnerability assessment at the territorial scale is needed. This is why a new approach based on a 'simple model' (i.e. the Load Path Method, LPM) has been proposed by Palmisano and Elia for the interpretation of the behaviour of masonry buildings subjected to landslide-induced settlements. However, the LPM is very useful for rapidly finding the 'most plausible solution' rather than the exact solution. To find this solution, optimization algorithms are necessary. In this scenario, this article aims to show how the Bidirectional Evolutionary Structural Optimization method by Huang and Xie can be very useful for optimizing the strut-and-tie models obtained by using the Load Path Method.
20 frames per second model-based reconstruction in cross-sectional optoacoustic tomography
Ding, Lu; Deán-Ben, Xosé Luís.; Razansky, Daniel
2017-03-01
In order to achieve real-time image rendering, optoacoustic tomography reconstructions are commonly done with back-projection algorithms due to their simplicity and low computational complexity. However, model-based algorithms have been shown to attain more accurate reconstruction performance due to their ability to model arbitrary detection geometries, transducer shapes and other experimental factors. The high computational complexity of model-based schemes makes them challenging to implement for real-time inversion. Herein, we introduce a novel discretization method for model-based optoacoustic tomography that enables its efficient parallel implementation on graphics processing units with extremely low memory overhead. We demonstrate that, when employing a tomographic scanner with 256 detectors, the new method achieves model-based optoacoustic inversion at 20 frames per second for a 200 × 200 image grid.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Logistics Enterprise Evaluation Model Based On Fuzzy Clustering Analysis
Fu, Pei-hua; Yin, Hong-bo
In this thesis, we introduce an evaluation model for logistics enterprises based on a fuzzy clustering algorithm. First of all, we present the evaluation index system, which contains basic information, management level, technical strength, transport capacity, informatization level, market competition and customer service. We decide the index weights according to the grades, and evaluate the integrated ability of the logistics enterprises using the fuzzy cluster analysis method. We introduce the system evaluation module and the cluster analysis module in detail and describe how we implemented these two modules. At last, we give the results of the system.
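The fuzzy cluster analysis step can be sketched with a minimal fuzzy c-means on one-dimensional scores; the initialization, the fuzzifier m = 2, and the toy data are illustrative assumptions, not the thesis's index system:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1-D data (sketch): returns cluster centers and the
    soft membership u[k][i] of point k in cluster i (each row sums to 1)."""
    # Spread initial centers across the data range
    centers = [min(points) + (max(points) - min(points)) * i / (c - 1)
               for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update (Bezdek rule); tiny floor avoids division by zero
        u = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Center update: membership-weighted means
        centers = [sum((u[k][i] ** m) * x for k, x in enumerate(points)) /
                   sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u
```

Unlike hard clustering, each enterprise keeps a graded membership in every quality class, which is what makes the evaluation "integrated" across borderline cases.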
On the Effect of Populations in Evolutionary Multi-Objective Optimisation
DEFF Research Database (Denmark)
Giel, Oliver; Lehre, Per Kristian
2010-01-01
Rigorous runtime analysis points out an exponential runtime gap between the population-based algorithm Simple Evolutionary Multi-objective Optimiser (SEMO) and several single individual-based algorithms on this problem. This means that among the algorithms considered, only the population-based MOEA...
Analytical Model-based Fault Detection and Isolation in Control Systems
DEFF Research Database (Denmark)
Vukic, Z.; Ozbolt, H.; Blanke, M.
1998-01-01
The paper gives an introduction to and an overview of the field of fault detection and isolation for control systems. A summary of analytical (quantitative model-based) methods and their implementation is presented. The focus is on the analytical model-based fault-detection and fault...... diagnosis methods, often viewed as the classical or deterministic ones. Emphasis is placed on the algorithms suitable for ship automation, unmanned underwater vehicles, and other systems of automatic control....
A comparison between optimisation algorithms for metal forming processes
Bonte, M.H.A.; Do, T.T.; Fourment, L.; van den Boogaard, Antonius H.; Huetink, Han; Habbal, A.
2006-01-01
Coupling optimisation algorithms to Finite Element (FEM) simulations is a very promising way to achieve optimal metal forming processes. However, many optimisation algorithms exist and it is not clear which of these algorithms to use. This paper compares an efficient Metamodel Assisted Evolutionary
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
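The evolutionary operators the book introduces first, selection, crossover, and mutation, can be shown in a few lines. The sketch below solves the classic OneMax problem (maximise the number of ones in a bitstring); all parameter values are illustrative defaults, not recommendations from the book.

```python
# Minimal genetic algorithm: tournament selection, one-point crossover,
# and per-bit mutation, maximising the number of ones (OneMax).
import random

def genetic_algorithm(n_bits=30, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=None, seed=0):
    random.seed(seed)
    p_mut = p_mut if p_mut is not None else 1.0 / n_bits
    fitness = sum                                  # OneMax: count the ones
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                              # tournament of size 2
            a, b = random.sample(pop, 2)
            return max(a, b, key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bit-flip mutation applied to both children
            nxt += [[b ^ (random.random() < p_mut) for b in c]
                    for c in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))
```

The parameter choices (population size, crossover and mutation rates) are exactly the tuning and control knobs the book's first part surveys.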
A framework for evolutionary systems biology.
Loewe, Laurence
2009-02-24
Many difficult problems in evolutionary genomics are related to mutations that have weak effects on fitness, as the consequences of mutations with large effects are often simple to predict. Current systems biology has accumulated much data on mutations with large effects and can predict the properties of knockout mutants in some systems. However experimental methods are too insensitive to observe small effects. Here I propose a novel framework that brings together evolutionary theory and current systems biology approaches in order to quantify small effects of mutations and their epistatic interactions in silico. Central to this approach is the definition of fitness correlates that can be computed in some current systems biology models employing the rigorous algorithms that are at the core of much work in computational systems biology. The framework exploits synergies between the realism of such models and the need to understand real systems in evolutionary theory. This framework can address many longstanding topics in evolutionary biology by defining various 'levels' of the adaptive landscape. Addressed topics include the distribution of mutational effects on fitness, as well as the nature of advantageous mutations, epistasis and robustness. Combining corresponding parameter estimates with population genetics models raises the possibility of testing evolutionary hypotheses at a new level of realism. EvoSysBio is expected to lead to a more detailed understanding of the fundamental principles of life by combining knowledge about well-known biological systems from several disciplines. This will benefit both evolutionary theory and current systems biology. Understanding robustness by analysing distributions of mutational effects and epistasis is pivotal for drug design, cancer research, responsible genetic engineering in synthetic biology and many other practical applications.
A framework for evolutionary systems biology
Directory of Open Access Journals (Sweden)
Loewe Laurence
2009-02-01
Full Text Available Abstract. Background: Many difficult problems in evolutionary genomics are related to mutations that have weak effects on fitness, as the consequences of mutations with large effects are often simple to predict. Current systems biology has accumulated much data on mutations with large effects and can predict the properties of knockout mutants in some systems. However experimental methods are too insensitive to observe small effects. Results: Here I propose a novel framework that brings together evolutionary theory and current systems biology approaches in order to quantify small effects of mutations and their epistatic interactions in silico. Central to this approach is the definition of fitness correlates that can be computed in some current systems biology models employing the rigorous algorithms that are at the core of much work in computational systems biology. The framework exploits synergies between the realism of such models and the need to understand real systems in evolutionary theory. This framework can address many longstanding topics in evolutionary biology by defining various 'levels' of the adaptive landscape. Addressed topics include the distribution of mutational effects on fitness, as well as the nature of advantageous mutations, epistasis and robustness. Combining corresponding parameter estimates with population genetics models raises the possibility of testing evolutionary hypotheses at a new level of realism. Conclusion: EvoSysBio is expected to lead to a more detailed understanding of the fundamental principles of life by combining knowledge about well-known biological systems from several disciplines. This will benefit both evolutionary theory and current systems biology. Understanding robustness by analysing distributions of mutational effects and epistasis is pivotal for drug design, cancer research, responsible genetic engineering in synthetic biology and many other practical applications.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
A constraint consensus memetic algorithm for solving constrained optimization problems
Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.
2014-11-01
Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.
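The point that constraint handling usually acts only through selection can be illustrated with a toy search. The paper's constraint consensus moves are more involved than this; the sketch below instead uses the simpler feasibility rules often attributed to Deb (feasible beats infeasible; ties broken by objective or by violation), on an invented two-variable problem, purely to show how constraints steer acceptance.

```python
# (1+1)-style evolutionary search with feasibility-rule selection.
# Illustrative problem: minimise (x-2)^2 + (y-1)^2 subject to x + y <= 2.
import random

def violation(x, y):
    return max(0.0, x + y - 2.0)      # how badly x + y <= 2 is broken

def objective(x, y):
    return (x - 2.0) ** 2 + (y - 1.0) ** 2

def better(a, b):
    """Feasibility rules: prefer feasible; among feasible, lower
    objective; among infeasible, lower total violation."""
    va, vb = violation(*a), violation(*b)
    if va == 0.0 and vb == 0.0:
        return objective(*a) < objective(*b)
    if va == 0.0 or vb == 0.0:
        return va == 0.0
    return va < vb

random.seed(3)
cur = (random.uniform(-5, 5), random.uniform(-5, 5))
for _ in range(20000):
    cand = (cur[0] + random.gauss(0, 0.1), cur[1] + random.gauss(0, 0.1))
    if better(cand, cur):              # selection is where constraints act
        cur = cand
print(round(cur[0], 2), round(cur[1], 2))   # constrained optimum is (1.5, 0.5)
```

The paper's contribution is precisely to go beyond this: constraint consensus moves act inside the search itself, deliberately pushing infeasible individuals toward the feasible region instead of merely penalising them at selection time.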
Proteomics in evolutionary ecology.
Baer, B; Millar, A H
2016-03-01
Evolutionary ecologists are traditionally gene-focused, as genes propagate phenotypic traits across generations and mutations and recombination in the DNA generate the genetic diversity required for evolutionary processes. As a consequence, the inheritance of changed DNA provides a molecular explanation for the functional changes associated with natural selection. A direct focus on proteins, on the other hand, the actual molecular agents responsible for the expression of a phenotypic trait, receives far less interest from ecologists and evolutionary biologists. This is partially due to the central dogma of molecular biology that appears to define proteins as the 'dead-end of molecular information flow' as well as technical limitations in identifying and studying proteins and their diversity in the field and in many of the more exotic genera often favored in ecological studies. Here we provide an overview of a newly forming field of research that we refer to as 'Evolutionary Proteomics'. We point out that the origins of cellular function are related to the properties of polypeptide and RNA and their interactions with the environment, rather than DNA descent, and that the critical role of horizontal gene transfer in evolution is more about coopting new proteins to impact cellular processes than it is about modifying gene function. Furthermore, post-transcriptional and post-translational processes generate a remarkable diversity of mature proteins from a single gene, and the properties of these mature proteins can also influence inheritance through genetic and perhaps epigenetic mechanisms. The influence of post-transcriptional diversification on evolutionary processes could provide a novel mechanistic underpinning for elements of rapid, directed evolutionary changes and adaptations as observed for a variety of evolutionary processes. Modern state-of-the-art technologies based on mass spectrometry are now available to identify and quantify peptides, proteins, protein
Improved shape-signature and matching methods for model-based robotic vision
Schwartz, J. T.; Wolfson, H. J.
1987-01-01
Researchers describe new techniques for curve matching and model-based object recognition, which are based on the notion of a shape-signature. The signature used is an approximation of pointwise curvature. Described here is a curve matching algorithm that generalizes a previous algorithm developed using this signature, allowing improvement and generalization of a previous model-based object recognition scheme. The results and experiments described relate to 2-D images; however, natural extensions to the 3-D case exist and are being developed.
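A pointwise-curvature signature of the kind the abstract describes can be approximated directly from a sampled curve: the turning angle between successive chords, divided by the local arc length, estimates curvature at each sample. The discretisation below is one common choice, not necessarily the authors' exact scheme; for a circle of radius r the signature should be nearly constant at 1/r.

```python
# Discrete curvature signature of a polyline from chord turning angles.
import numpy as np

def curvature_signature(pts):
    """Approximate unsigned curvature at each interior sample point."""
    a, b, c = pts[:-2], pts[1:-1], pts[2:]
    v1, v2 = b - a, c - b
    # signed turning angle between consecutive chord vectors
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    angle = np.arctan2(cross, dot)
    # curvature ~ turning angle / local chord (arc-length) step
    step = 0.5 * (np.linalg.norm(v1, axis=1) + np.linalg.norm(v2, axis=1))
    return np.abs(angle) / step

# Sampled circle of radius 2: expect curvature ~ 1/2 everywhere
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
sig = curvature_signature(circle)
print(sig.mean())
```

Because the signature is invariant to rotation and translation, matching two curves reduces to aligning their one-dimensional signatures, which is what makes it useful for model-based recognition.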
Paleoanthropology and evolutionary theory.
Tattersall, Ian
2012-01-01
Paleoanthropologists of the first half of the twentieth century were little concerned either with evolutionary theory or with the technicalities and broader implications of zoological nomenclature. In consequence, the paleoanthropological literature of the period consisted largely of a series of descriptions accompanied by authoritative pronouncements, together with a huge excess of hominid genera and species. Given the intellectual flimsiness of the resulting paleoanthropological framework, it is hardly surprising that in 1950 the ornithologist Ernst Mayr met little resistance when he urged the new postwar generation of paleoanthropologists to accept not only the elegant reductionism of the Evolutionary Synthesis but a vast oversimplification of hominid phylogenetic history and nomenclature. Indeed, the impact of Mayr's onslaught was so great that even when developments in evolutionary biology during the last quarter of the century brought other paleontologists to the realization that much more has been involved in evolutionary histories than the simple action of natural selection within gradually transforming lineages, paleoanthropologists proved highly reluctant to follow. Even today, paleoanthropologists are struggling to reconcile an intuitive realization that the burgeoning hominid fossil record harbors a substantial diversity of species (bringing hominid evolutionary patterns into line with that of other successful mammalian families), with the desire to cram a huge variety of morphologies into an unrealistically minimalist systematic framework. As long as this theoretical ambivalence persists, our perception of events in hominid phylogeny will continue to be distorted.
Applying evolutionary anthropology.
Gibson, Mhairi A; Lawson, David W
2015-01-01
Evolutionary anthropology provides a powerful theoretical framework for understanding how both current environments and legacies of past selection shape human behavioral diversity. This integrative and pluralistic field, combining ethnographic, demographic, and sociological methods, has provided new insights into the ultimate forces and proximate pathways that guide human adaptation and variation. Here, we present the argument that evolutionary anthropological studies of human behavior also hold great, largely untapped, potential to guide the design, implementation, and evaluation of social and public health policy. Focusing on the key anthropological themes of reproduction, production, and distribution we highlight classic and recent research demonstrating the value of an evolutionary perspective to improving human well-being. The challenge now comes in transforming relevance into action and, for that, evolutionary behavioral anthropologists will need to forge deeper connections with other applied social scientists and policy-makers. We are hopeful that these developments are underway and that, with the current tide of enthusiasm for evidence-based approaches to policy, evolutionary anthropology is well positioned to make a strong contribution. © 2015 Wiley Periodicals, Inc.